Backpropagation Algorithm

:【Original】:
Suppose we have a fixed training set <math>\{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \}</math> of <math>m</math> training examples. We can train our neural network using batch gradient descent.  In detail, for a single training example <math>(x,y)</math>, we define the cost function with respect to that single example to be:

:【First translation】:
Suppose we have a fixed training set <math>\{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \}</math> containing <math>m</math> training examples. We can train our neural network using batch gradient descent. In detail, for a single training example <math>(x,y)</math>, we define the cost function with respect to that example to be:

:【First proofreading】:

:<math>
\begin{align}
J(W,b; x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2.
\end{align}
</math>
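
A minimal numerical sketch of this single-example cost, assuming a one-hidden-layer network with sigmoid activations; the layer sizes (3 inputs, 4 hidden units, 2 outputs) and all helper names below are illustrative assumptions, not taken from the text:

<pre>
import numpy as np

def sigmoid(z):
    # Elementwise logistic sigmoid activation.
    return 1.0 / (1.0 + np.exp(-z))

def h(W1, b1, W2, b2, x):
    # Forward pass h_{W,b}(x) of an assumed one-hidden-layer network.
    a2 = sigmoid(W1 @ x + b1)      # hidden-layer activations
    return sigmoid(W2 @ a2 + b2)   # output-layer activations

def cost_single(W1, b1, W2, b2, x, y):
    # J(W,b; x,y) = (1/2) * || h_{W,b}(x) - y ||^2 for one example.
    return 0.5 * np.sum((h(W1, b1, W2, b2, x) - y) ** 2)

# Hypothetical sizes: 3 inputs, 4 hidden units, 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
x, y = rng.normal(size=3), np.array([0.0, 1.0])
print(cost_single(W1, b1, W2, b2, x, y))   # single-example cost
</pre>

Under batch gradient descent, as mentioned above, each parameter update would use this per-example cost averaged over all <math>m</math> training examples.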
:【Original】:
:【First translation】:
:【First proofreading】:
