# Backpropagation Algorithm

Suppose we have a fixed training set $\{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \}$ of $m$ training examples. We can train our neural network using batch gradient descent. In detail, for a single training example $(x,y)$, we define the cost function with respect to that single example to be:


\begin{align} J(W,b; x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2. \end{align}
This is a (one-half) squared-error cost function. Given the training set of $m$ examples, we then define the overall cost function to be:

\begin{align} J(W,b) &= \left[ \frac{1}{m} \sum_{i=1}^m J(W,b;x^{(i)},y^{(i)}) \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2 \\ &= \left[ \frac{1}{m} \sum_{i=1}^m \left( \frac{1}{2} \left\| h_{W,b}(x^{(i)}) - y^{(i)} \right\|^2 \right) \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2 \end{align}
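As a sketch (not part of the original text), the overall cost can be computed directly from this definition. Here `forward`, `weights`, and `lam` are illustrative names standing in for $h_{W,b}$, the list of weight matrices $W^{(l)}$, and $\lambda$:

```python
import numpy as np

def overall_cost(forward, weights, X, Y, lam):
    """J(W,b): average single-example squared error plus weight decay.

    X, Y hold one training example per column; `weights` is the list of
    weight matrices W^{(l)} (biases do not enter the decay term).
    """
    m = X.shape[1]
    # Average of the single-example costs J(W,b; x^{(i)}, y^{(i)}).
    err = sum(0.5 * float(np.sum((forward(X[:, i]) - Y[:, i]) ** 2))
              for i in range(m)) / m
    # Weight-decay term: (lambda/2) * sum of all squared weights.
    decay = 0.5 * lam * sum(float(np.sum(W ** 2)) for W in weights)
    return err + decay
```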
The first term in the definition of $J(W,b)$ is an average sum-of-squares error term. The second term is a regularization term (also called a weight decay term) that tends to decrease the magnitude of the weights and helps prevent overfitting; the weight decay parameter $\lambda$ controls the relative importance of the two terms.

Our goal is to minimize $J(W,b)$ as a function of $W$ and $b$. One iteration of gradient descent updates the parameters $W, b$ as follows:

\begin{align} W_{ij}^{(l)} &= W_{ij}^{(l)} - \alpha \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) \\ b_{i}^{(l)} &= b_{i}^{(l)} - \alpha \frac{\partial}{\partial b_{i}^{(l)}} J(W,b) \end{align}
where $\alpha$ is the learning rate. The key step is computing the partial derivatives above; the backpropagation algorithm gives an efficient way to compute them.

The partial derivatives of the overall cost function $J(W,b)$ are obtained by averaging the per-example derivatives and adding the weight-decay term (note that weight decay applies to $W$ but not to $b$):

\begin{align} \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) &= \left[ \frac{1}{m} \sum_{i=1}^m \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b; x^{(i)}, y^{(i)}) \right] + \lambda W_{ij}^{(l)} \\ \frac{\partial}{\partial b_{i}^{(l)}} J(W,b) &= \frac{1}{m}\sum_{i=1}^m \frac{\partial}{\partial b_{i}^{(l)}} J(W,b; x^{(i)}, y^{(i)}) \end{align}
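A minimal sketch of this decomposition, assuming the per-example gradients for one parameter matrix have already been computed (the names `per_example_grads`, `W`, and `lam` are illustrative, not from the text):

```python
import numpy as np

def overall_gradients(per_example_grads, W, lam):
    """Combine per-example gradients into the overall-cost gradients.

    per_example_grads: list of (grad_W, grad_b) pairs, one per example.
    The weight gradient gets the decay term lam * W; the bias does not.
    """
    m = len(per_example_grads)
    grad_W = sum(gW for gW, _ in per_example_grads) / m + lam * W
    grad_b = sum(gb for _, gb in per_example_grads) / m
    return grad_W, grad_b
```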
The intuition behind the backpropagation algorithm is as follows. Given a training example $(x,y)$, we first run a "forward pass" to compute all the activations throughout the network, including the output value of the hypothesis $h_{W,b}(x)$. Then, for each node $i$ in layer $l$, we compute an "error term" $\delta^{(l)}_i$ that measures how much that node was "responsible" for any errors in the output. In detail, here is the backpropagation algorithm:

1. Perform a feedforward pass, computing the activations for layers $L_2$, $L_3$, and so on up to the output layer $L_{n_l}$.
2. For each output unit $i$ in layer $n_l$ (the output layer), set
\begin{align} \delta^{(n_l)}_i = \frac{\partial}{\partial z^{(n_l)}_i} \;\; \frac{1}{2} \left\|y - h_{W,b}(x)\right\|^2 = - (y_i - a^{(n_l)}_i) \cdot f'(z^{(n_l)}_i) \end{align}
3. For $l = n_l-1, n_l-2, n_l-3, \ldots, 2$
For each node $i$ in layer $l$, set
$\delta^{(l)}_i = \left( \sum_{j=1}^{s_{l+1}} W^{(l)}_{ji} \delta^{(l+1)}_j \right) f'(z^{(l)}_i)$
4. Compute the desired partial derivatives, which are given as:
\begin{align} \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b; x, y) &= a^{(l)}_j \delta_i^{(l+1)} \\ \frac{\partial}{\partial b_{i}^{(l)}} J(W,b; x, y) &= \delta_i^{(l+1)}. \end{align}
Finally, we can also re-write the algorithm using matrix-vectorial notation. We will use "$\bullet$" to denote the element-wise product operator (denoted `.*` in Matlab or Octave, and also called the Hadamard product), so that if $a = b \bullet c$, then $a_i = b_i c_i$. We also extend the definitions of $f(\cdot)$ and $f'(\cdot)$ to apply element-wise to vectors.

The algorithm can then be written:

1. Perform a feedforward pass, computing the activations for layers $\textstyle L_2$, $\textstyle L_3$, up to the output layer $\textstyle L_{n_l}$, using the equations defining the forward propagation steps.
2. For the output layer (layer $\textstyle n_l$), set
\begin{align} \delta^{(n_l)} = - (y - a^{(n_l)}) \bullet f'(z^{(n_l)}) \end{align}
3. For $\textstyle l = n_l-1, n_l-2, n_l-3, \ldots, 2$
Set
\begin{align} \delta^{(l)} = \left((W^{(l)})^T \delta^{(l+1)}\right) \bullet f'(z^{(l)}) \end{align}
4. Compute the desired partial derivatives:
\begin{align} \nabla_{W^{(l)}} J(W,b;x,y) &= \delta^{(l+1)} (a^{(l)})^T, \\ \nabla_{b^{(l)}} J(W,b;x,y) &= \delta^{(l+1)}. \end{align}
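The four steps above can be sketched in NumPy for a single example, assuming a sigmoid activation $f$; the names (`Ws`, `bs`, `backprop`) are illustrative choices, not from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(Ws, bs, x, y):
    """Per-example gradients of J(W,b; x, y) via backpropagation."""
    # Step 1: feedforward pass, caching z^{(l)} and a^{(l)}.
    a, zs, activations = x, [], [x]
    for W, b in zip(Ws, bs):
        z = W @ a + b
        a = sigmoid(z)
        zs.append(z)
        activations.append(a)
    # For sigmoid, f'(z) = f(z) * (1 - f(z)).
    fprime = lambda z: sigmoid(z) * (1 - sigmoid(z))
    # Step 2: output-layer error term delta^{(n_l)}.
    delta = -(y - activations[-1]) * fprime(zs[-1])
    grads_W = [None] * len(Ws)
    grads_b = [None] * len(Ws)
    # Steps 3-4: propagate delta backwards and read off the gradients
    # nabla_W = delta^{(l+1)} (a^{(l)})^T, nabla_b = delta^{(l+1)}.
    for l in range(len(Ws) - 1, -1, -1):
        grads_W[l] = np.outer(delta, activations[l])
        grads_b[l] = delta
        if l > 0:
            delta = (Ws[l].T @ delta) * fprime(zs[l - 1])
    return grads_W, grads_b
```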

Implementation note: in steps 2 and 3 above, we need to compute $f'(z^{(l)}_i)$ for each value of $i$ and $l$. Assuming $f(z)$ is the sigmoid activation function, we would already have $a^{(l)}_i$ stored from the forward pass, and can thus compute this as $f'(z^{(l)}_i) = a^{(l)}_i (1 - a^{(l)}_i)$.

Finally, we are ready to describe the full gradient descent algorithm. In the pseudo-code below, $\textstyle \Delta W^{(l)}$ is a matrix (of the same dimension as $\textstyle W^{(l)}$), and $\textstyle \Delta b^{(l)}$ is a vector (of the same dimension as $\textstyle b^{(l)}$). Note that in this notation, "$\textstyle \Delta W^{(l)}$" is a matrix, and in particular it isn't "$\textstyle \Delta$ times $\textstyle W^{(l)}$." We implement one iteration of batch gradient descent as follows:

1. Set $\textstyle \Delta W^{(l)} := 0$, $\textstyle \Delta b^{(l)} := 0$ (matrix/vector of zeros) for all $\textstyle l$.
2. For $\textstyle i = 1$ to $\textstyle m$,
   1. Use backpropagation to compute $\textstyle \nabla_{W^{(l)}} J(W,b;x^{(i)},y^{(i)})$ and $\textstyle \nabla_{b^{(l)}} J(W,b;x^{(i)},y^{(i)})$.
   2. Set $\textstyle \Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J(W,b;x^{(i)},y^{(i)})$.
   3. Set $\textstyle \Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J(W,b;x^{(i)},y^{(i)})$.
3. Update the parameters:
\begin{align} W^{(l)} &= W^{(l)} - \alpha \left[ \left(\frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)}\right] \\ b^{(l)} &= b^{(l)} - \alpha \left[\frac{1}{m} \Delta b^{(l)}\right] \end{align}
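This iteration can be sketched as follows, assuming `grad_fn(x, y)` returns the per-example gradients $(\nabla_{W^{(l)}} J, \nabla_{b^{(l)}} J)$ computed by backpropagation (all names here are illustrative):

```python
import numpy as np

def batch_gd_step(Ws, bs, X, Y, grad_fn, alpha, lam):
    """One iteration of batch gradient descent, per the pseudo-code."""
    m = X.shape[1]
    # Step 1: initialize the accumulators Delta W^{(l)}, Delta b^{(l)}.
    dWs = [np.zeros_like(W) for W in Ws]
    dbs = [np.zeros_like(b) for b in bs]
    # Step 2: accumulate per-example gradients over the training set.
    for i in range(m):
        gWs, gbs = grad_fn(X[:, i], Y[:, i])
        for l in range(len(Ws)):
            dWs[l] += gWs[l]
            dbs[l] += gbs[l]
    # Step 3: parameter update, with weight decay on W only.
    new_Ws = [W - alpha * (dW / m + lam * W) for W, dW in zip(Ws, dWs)]
    new_bs = [b - alpha * (db / m) for b, db in zip(bs, dbs)]
    return new_Ws, new_bs
```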
To train our neural network, we can now repeatedly take steps of gradient descent to reduce the cost function $J(W,b)$.