Backpropagation Algorithm

Suppose we have a fixed training set <math>\{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \}</math> of <math>m</math> training examples. We can train our neural network using batch gradient descent.  In detail, for a single training example <math>(x,y)</math>, we define the cost function with respect to that single example to be:

:<math>
\begin{align}
J(W,b; x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2.
\end{align}
</math>

This is a (one-half) squared-error cost function. Given a training set of <math>m</math> examples, we then define the overall cost function to be:

:<math>
\begin{align}
J(W,b)
&= \left[ \frac{1}{m} \sum_{i=1}^m J(W,b;x^{(i)},y^{(i)}) \right]
                       + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2 \\
&= \left[ \frac{1}{m} \sum_{i=1}^m \left( \frac{1}{2} \left\| h_{W,b}(x^{(i)}) - y^{(i)} \right\|^2 \right) \right]
                       + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2
\end{align}
</math>
The first term in the definition of <math>J(W,b)</math> is an average sum-of-squares error term. The second term is a regularization term (also called a '''weight decay''' term) that tends to decrease the magnitude of the weights, and helps prevent overfitting.
[Note: Usually weight decay is not applied to the bias terms <math>b^{(l)}_i</math>, as reflected in our definition for <math>J(W, b)</math>.  Applying weight decay to the bias units usually makes only a small difference to the final network, however.  If you've taken CS229 (Machine Learning) at Stanford or watched the course's videos on YouTube, you may also recognize this weight decay as essentially a variant of the Bayesian regularization method you saw there, where we placed a Gaussian prior on the parameters and did MAP (instead of maximum likelihood) estimation.]
The '''weight decay parameter''' <math>\lambda</math> controls the relative importance of the two terms. Note also the slightly overloaded notation: <math>J(W,b;x,y)</math> is the squared error cost with respect to a single example; <math>J(W,b)</math> is the overall cost function, which includes the weight decay term.
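
For concreteness, here is a minimal sketch of these two cost functions in Python/NumPy (the names <tt>example_cost</tt>, <tt>overall_cost</tt>, <tt>h_xs</tt>, <tt>Ws</tt>, and <tt>lam</tt> are our own, and the network outputs <math>h_{W,b}(x^{(i)})</math> are assumed to have already been computed by a forward pass):

<source lang="python">
import numpy as np

def example_cost(h_x, y):
    """J(W,b; x,y): the one-half squared-error cost for a single example.

    h_x is the network's output h_{W,b}(x) for that example."""
    return 0.5 * np.sum((h_x - y) ** 2)

def overall_cost(h_xs, ys, Ws, lam):
    """J(W,b): the average per-example cost plus the weight decay term.

    Ws is the list of weight matrices W^(l); the bias terms b^(l) are
    deliberately left out of the decay sum, as in the definition above."""
    m = len(h_xs)
    avg_error = sum(example_cost(h, y) for h, y in zip(h_xs, ys)) / m
    weight_decay = 0.5 * lam * sum(np.sum(W ** 2) for W in Ws)
    return avg_error + weight_decay
</source>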
This cost function is often used both for classification and for regression problems. For classification, we let <math>y = 0</math> or <math>1</math> represent the two class labels (recall that the sigmoid activation function outputs values in <math>[0,1]</math>; if we were using a tanh activation function, we would instead use -1 and +1 to denote the labels).  For regression problems, we first scale our outputs to ensure that they lie in the <math>[0,1]</math> range (or if we were using a tanh activation function, then the <math>[-1,1]</math> range).

Our goal is to minimize <math>J(W,b)</math> as a function of <math>W</math> and <math>b</math>. To train our neural network, we will initialize each parameter <math>W^{(l)}_{ij}</math> and each <math>b^{(l)}_i</math> to a small random value near zero (say according to a <math>{Normal}(0,\epsilon^2)</math> distribution for some small <math>\epsilon</math>, say <math>0.01</math>), and then apply an optimization algorithm such as batch gradient descent. Since <math>J(W, b)</math> is a non-convex function, gradient descent is susceptible to local optima; however, in practice gradient descent usually works fairly well. Finally, note that it is important to initialize the parameters randomly, rather than to all 0's.  If all the parameters start off at identical values, then all the hidden layer units will end up learning the same function of the input (more formally, <math>W^{(1)}_{ij}</math> will be the same for all values of <math>i</math>, so that <math>a^{(2)}_1 = a^{(2)}_2 = a^{(2)}_3 = \ldots</math> for any input <math>x</math>). The random initialization serves the purpose of '''symmetry breaking'''.
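
As a concrete illustration, here is one possible initialization routine in Python/NumPy (a sketch under our own naming conventions: <math>W^{(l)}</math> is stored as <tt>Ws[l-1]</tt> with shape <math>s_{l+1} \times s_l</math>):

<source lang="python">
import numpy as np

def init_params(layer_sizes, eps=0.01, seed=0):
    """Draw each W^(l)_ij and b^(l)_i from Normal(0, eps^2).

    layer_sizes = [s_1, s_2, ..., s_nl].  Random (rather than all-zero)
    values break the symmetry described above, so different hidden
    units learn different functions of the input."""
    rng = np.random.default_rng(seed)
    Ws = [rng.normal(0.0, eps, size=(s_next, s))
          for s, s_next in zip(layer_sizes, layer_sizes[1:])]
    bs = [rng.normal(0.0, eps, size=s_next) for s_next in layer_sizes[1:]]
    return Ws, bs
</source>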

One iteration of gradient descent updates the parameters <math>W,b</math> as follows:

:<math>
\begin{align}
W_{ij}^{(l)} &= W_{ij}^{(l)} - \alpha \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) \\
b_{i}^{(l)} &= b_{i}^{(l)} - \alpha \frac{\partial}{\partial b_{i}^{(l)}} J(W,b)
\end{align}
</math>

where <math>\alpha</math> is the learning rate.  The key step is computing the partial derivatives above. We will now describe the '''backpropagation''' algorithm, which gives an efficient way to compute these partial derivatives.

We will first describe how backpropagation can be used to compute <math>\textstyle \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b; x, y)</math> and <math>\textstyle \frac{\partial}{\partial b_{i}^{(l)}} J(W,b; x, y)</math>, the partial derivatives of the cost function <math>J(W,b;x,y)</math> defined with respect to a single example <math>(x,y)</math>. Once we can compute these, we see that the derivative of the overall cost function <math>J(W,b)</math> can be computed as:

:<math>
\begin{align}
\frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) &=
\left[ \frac{1}{m} \sum_{i=1}^m \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b; x^{(i)}, y^{(i)}) \right] + \lambda W_{ij}^{(l)} \\
\frac{\partial}{\partial b_{i}^{(l)}} J(W,b) &=
\frac{1}{m}\sum_{i=1}^m \frac{\partial}{\partial b_{i}^{(l)}} J(W,b; x^{(i)}, y^{(i)})
\end{align}
</math>

The two lines above differ slightly because weight decay is applied to <math>W</math> but not <math>b</math>.
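
In code, this aggregation might look like the following sketch (Python/NumPy, our own names; <tt>grads_W[i][l]</tt> holds <math>\textstyle \nabla_{W^{(l)}} J(W,b;x^{(i)},y^{(i)})</math> as computed by backpropagation):

<source lang="python">
def overall_gradients(grads_W, grads_b, Ws, lam):
    """Combine m per-example gradients into the gradients of J(W,b).

    Weight decay contributes lam * W to the W-gradient but nothing to
    the b-gradient, matching the two lines above."""
    m = len(grads_W)
    dWs = [sum(g[l] for g in grads_W) / m + lam * Ws[l]
           for l in range(len(Ws))]
    dbs = [sum(g[l] for g in grads_b) / m for l in range(len(Ws))]
    return dWs, dbs
</source>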
The intuition behind the backpropagation algorithm is as follows. Given a training example <math>(x,y)</math>, we will first run a "forward pass" to compute all the activations throughout the network, including the output value of the hypothesis <math>h_{W,b}(x)</math>.  Then, for each node <math>i</math> in layer <math>l</math>, we would like to compute an "error term" <math>\delta^{(l)}_i</math> that measures how much that node was "responsible" for any errors in our output. For an output node, we can directly measure the difference between the network's activation and the true target value, and use that to define <math>\delta^{(n_l)}_i</math> (where layer <math>n_l</math> is the output layer).  How about hidden units?  For those, we will compute <math>\delta^{(l)}_i</math> based on a weighted average of the error terms of the nodes that use <math>a^{(l)}_i</math> as an input.  In detail, here is the backpropagation algorithm:

<ol>
<li>Perform a feedforward pass, computing the activations for layers <math>L_2</math>, <math>L_3</math>, and so on up to the output layer <math>L_{n_l}</math>.
<li>For each output unit <math>i</math> in layer <math>n_l</math> (the output layer), set
:<math>
\begin{align}
\delta^{(n_l)}_i
= \frac{\partial}{\partial z^{(n_l)}_i} \;\;
         \frac{1}{2} \left\|y - h_{W,b}(x)\right\|^2 = - (y_i - a^{(n_l)}_i) \cdot f'(z^{(n_l)}_i)
\end{align}
</math>
<li>For <math>l = n_l-1, n_l-2, n_l-3, \ldots, 2</math>
:For each node <math>i</math> in layer <math>l</math>, set
::<math>
\delta^{(l)}_i = \left( \sum_{j=1}^{s_{l+1}} W^{(l)}_{ji} \delta^{(l+1)}_j \right) f'(z^{(l)}_i)
</math>
<li>Compute the desired partial derivatives, which are given as:
:<math>
\begin{align}
\frac{\partial}{\partial W_{ij}^{(l)}} J(W,b; x, y) &= a^{(l)}_j \delta_i^{(l+1)} \\
\frac{\partial}{\partial b_{i}^{(l)}} J(W,b; x, y) &= \delta_i^{(l+1)}.
\end{align}
</math>
</ol>
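
To make the indexing explicit, here is a literal, loop-based sketch of steps 2 and 3 in Python/NumPy (our own names: <tt>fprime</tt> is <math>f'</math>, and <tt>W_l[j, i]</tt> stores <math>W^{(l)}_{ji}</math>); the matrix-vector form given next is what one would actually implement:

<source lang="python">
import numpy as np

def output_layer_deltas(a_nl, z_nl, y, fprime):
    """Step 2: delta_i = -(y_i - a_i) * f'(z_i) for each output unit i."""
    return np.array([-(y[i] - a_nl[i]) * fprime(z_nl[i])
                     for i in range(len(a_nl))])

def hidden_layer_deltas(W_l, delta_next, z_l, fprime):
    """Step 3: delta_i = (sum_j W_l[j, i] * delta_next[j]) * f'(z_i)."""
    return np.array([sum(W_l[j, i] * delta_next[j]
                         for j in range(W_l.shape[0])) * fprime(z_l[i])
                     for i in range(W_l.shape[1])])
</source>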

Finally, we can also re-write the algorithm using matrix-vectorial notation. We will use "<math>\textstyle \bullet</math>" to denote the element-wise product operator (denoted "<tt>.*</tt>" in Matlab or Octave, and also called the Hadamard product), so that if <math>\textstyle a = b \bullet c</math>, then <math>\textstyle a_i = b_i c_i</math>. Similar to how we extended the definition of <math>\textstyle f(\cdot)</math> to apply element-wise to vectors, we also do the same for <math>\textstyle f'(\cdot)</math> (so that <math>\textstyle f'([z_1, z_2, z_3]) = [f'(z_1), f'(z_2), f'(z_3)]</math>).

The algorithm can then be written:

<ol>
<li>Perform a feedforward pass, computing the activations for layers <math>\textstyle L_2</math>, <math>\textstyle L_3</math>, up to the output layer <math>\textstyle L_{n_l}</math>, using the equations defining the forward propagation steps.
<li>For the output layer (layer <math>\textstyle n_l</math>), set
:<math>\begin{align}
\delta^{(n_l)}
= - (y - a^{(n_l)}) \bullet f'(z^{(n_l)})
\end{align}</math>
<li>For <math>\textstyle l = n_l-1, n_l-2, n_l-3, \ldots, 2</math>
:Set
::<math>\begin{align}
\delta^{(l)} = \left((W^{(l)})^T \delta^{(l+1)}\right) \bullet f'(z^{(l)})
\end{align}</math>
<li>Compute the desired partial derivatives:
:<math>\begin{align}
\nabla_{W^{(l)}} J(W,b;x,y) &= \delta^{(l+1)} (a^{(l)})^T, \\
\nabla_{b^{(l)}} J(W,b;x,y) &= \delta^{(l+1)}.
\end{align}</math>
</ol>
'''Implementation note:''' In steps 2 and 3 above, we need to compute <math>\textstyle f'(z^{(l)}_i)</math> for each value of <math>\textstyle i</math>. Assuming <math>\textstyle f(z)</math> is the sigmoid activation function, we would already have <math>\textstyle a^{(l)}_i</math> stored away from the forward pass through the network.  Thus, using the expression that we worked out earlier for <math>\textstyle f'(z)</math>,  
we can compute this as <math>\textstyle f'(z^{(l)}_i) = a^{(l)}_i (1- a^{(l)}_i)</math>.
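
Putting the four steps and this observation together, here is a minimal sketch of the vectorized algorithm for a sigmoid network in Python/NumPy (our own names and 0-based indexing: <tt>a[0]</tt> is the input <math>\textstyle x</math>, and <tt>Ws[l]</tt>, <tt>bs[l]</tt> map the activations of one layer to the next):

<source lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(Ws, bs, x, y):
    """Gradients of J(W,b; x,y) for one example via backpropagation.

    Uses f'(z) = a * (1 - a) for the sigmoid, so only the activations
    need to be kept from the forward pass."""
    # Step 1: feedforward pass, a^(l+1) = f(W^(l) a^(l) + b^(l)).
    a = [x]
    for W, b in zip(Ws, bs):
        a.append(sigmoid(W @ a[-1] + b))

    # Step 2: output-layer error term, delta = -(y - a) .* f'(z).
    delta = -(y - a[-1]) * a[-1] * (1.0 - a[-1])

    grad_Ws, grad_bs = [None] * len(Ws), [None] * len(Ws)
    for l in range(len(Ws) - 1, -1, -1):
        # Step 4: gradients for this layer's parameters.
        grad_Ws[l] = np.outer(delta, a[l])
        grad_bs[l] = delta
        if l > 0:
            # Step 3: propagate the error term back one layer.
            delta = (Ws[l].T @ delta) * a[l] * (1.0 - a[l])
    return grad_Ws, grad_bs
</source>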

Finally, we are ready to describe the full gradient descent algorithm.  In the pseudo-code below, <math>\textstyle \Delta W^{(l)}</math> is a matrix (of the same dimension as <math>\textstyle W^{(l)}</math>), and <math>\textstyle \Delta b^{(l)}</math> is a vector (of the same dimension as <math>\textstyle b^{(l)}</math>). Note that in this notation, "<math>\textstyle \Delta W^{(l)}</math>" is a matrix, and in particular it isn't "<math>\textstyle \Delta</math> times <math>\textstyle W^{(l)}</math>." We implement one iteration of batch gradient descent as follows:

: 1. Set <math>\textstyle \Delta W^{(l)} := 0</math>, <math>\textstyle \Delta b^{(l)} := 0</math> (matrix/vector of zeros) for all <math>\textstyle l</math>.
+
<ol>
-
: 2. For <math>\textstyle i = 1</math> to <math>\textstyle m</math>,  
+
<li>Set <math>\textstyle \Delta W^{(l)} := 0</math>, <math>\textstyle \Delta b^{(l)} := 0</math> (matrix/vector of zeros) for all <math>\textstyle l</math>.
-
:: 2a. Use backpropagation to compute <math>\textstyle \nabla_{W^{(l)}} J(W,b;x,y)</math> and  
+
<li>For <math>\textstyle i = 1</math> to <math>\textstyle m</math>,  
 +
<ol type="a">
 +
<li>Use backpropagation to compute <math>\textstyle \nabla_{W^{(l)}} J(W,b;x,y)</math> and  
<math>\textstyle \nabla_{b^{(l)}} J(W,b;x,y)</math>.
<math>\textstyle \nabla_{b^{(l)}} J(W,b;x,y)</math>.
-
:: 2b. Set <math>\textstyle \Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J(W,b;x,y)</math>.  
+
<li>Set <math>\textstyle \Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J(W,b;x,y)</math>.  
-
:: 2c. Set <math>\textstyle \Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J(W,b;x,y)</math>.  
+
<li>Set <math>\textstyle \Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J(W,b;x,y)</math>.  
-
: 3. Update the parameters:
+
</ol>
 +
 
 +
<li>Update the parameters:
:<math>\begin{align}
:<math>\begin{align}
W^{(l)} &= W^{(l)} - \alpha \left[ \left(\frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)}\right] \\
W^{(l)} &= W^{(l)} - \alpha \left[ \left(\frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)}\right] \\
b^{(l)} &= b^{(l)} - \alpha \left[\frac{1}{m} \Delta b^{(l)}\right]
b^{(l)} &= b^{(l)} - \alpha \left[\frac{1}{m} \Delta b^{(l)}\right]
\end{align}</math>
\end{align}</math>
 +
</ol>
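
A sketch of this iteration in Python/NumPy, reusing the <tt>backprop</tt> function from the sketch above (all names are our own):

<source lang="python">
import numpy as np

def batch_gradient_descent_step(Ws, bs, xs, ys, alpha, lam):
    """One iteration of batch gradient descent on J(W,b).

    xs, ys hold the m training inputs and targets."""
    m = len(xs)
    # 1. Accumulators with the same shapes as the parameters.
    dWs = [np.zeros_like(W) for W in Ws]
    dbs = [np.zeros_like(b) for b in bs]
    # 2. Accumulate per-example gradients from backpropagation.
    for x, y in zip(xs, ys):
        gWs, gbs = backprop(Ws, bs, x, y)
        for l in range(len(Ws)):
            dWs[l] += gWs[l]
            dbs[l] += gbs[l]
    # 3. Update; weight decay applies to W but not to b.
    for l in range(len(Ws)):
        Ws[l] -= alpha * (dWs[l] / m + lam * Ws[l])
        bs[l] -= alpha * (dbs[l] / m)
</source>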
To train our neural network, we can now repeatedly take steps of gradient descent to reduce our cost function <math>\textstyle J(W,b)</math>.