Backpropagation Algorithm

From Ufldl

Revision as of 15:41, 7 March 2013 by Kandeng (Talk | contribs)
[Original]:

Suppose we have a fixed training set \{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \} of m training examples. We can train our neural network using batch gradient descent. In detail, for a single training example (x,y), we define the cost function with respect to that single example to be:

[First translation]:

Suppose we have a fixed training set \{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \} containing m training examples. We can train our neural network using batch gradient descent; the details follow. For a single training example (x,y), we define the cost function with respect to that example to be:

[First revision]:

\begin{align}
J(W,b; x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2.
\end{align}
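The squared-error cost above can be sketched directly in code. Below is a minimal illustration (the function name `single_example_cost` is a hypothetical helper, not part of the tutorial), computing J(W,b; x,y) = 1/2 * ||h_{W,b}(x) - y||^2 given the network's output for one example:

```python
import numpy as np

def single_example_cost(h_wb_x, y):
    """Squared-error cost for one training example:
    J(W,b; x,y) = 1/2 * || h_{W,b}(x) - y ||^2,
    where h_wb_x is the network output h_{W,b}(x) and y is the target."""
    diff = np.asarray(h_wb_x, dtype=float) - np.asarray(y, dtype=float)
    # Half the squared Euclidean norm of the output error.
    return 0.5 * float(np.dot(diff, diff))

# Example: network output [0.2, 0.9] against target [0.0, 1.0]
# gives J = 0.5 * (0.2^2 + 0.1^2) = 0.025.
print(single_example_cost([0.2, 0.9], [0.0, 1.0]))
```

In batch gradient descent, the overall cost averages this per-example term over all m examples (typically plus a weight-decay term, introduced later in the tutorial).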