用反向传导思想求导

[原文]
== Examples ==

To illustrate the use of the backpropagation idea to compute derivatives with respect to the inputs, we will use two functions from the section on [[Sparse Coding: Autoencoder Interpretation | sparse coding]], in examples 1 and 2. In example 3, we use a function from [[Independent Component Analysis | independent component analysis]] to illustrate the use of this idea to compute derivatives with respect to weights, and in this specific case, what to do in the case of tied or repeated weights.

[初译]

实例

为了描述如何使用反向传播思想计算关于输入的导数,我们要在例1,例2中用到稀疏编码一节中的两个函数。在例3中,我们使用一个独立成分分析一节中的函数来描述使用此思想计算关于权重的偏导的方法,以及在这种特殊情况下,如何处理绑定或重复的权重的情况。

[一审]

为了描述如何使用反向传播思想计算对于输入的导数,我们在例1,例2中用到稀疏编码一节中的两个函数。在例3中,我们使用一个独立成分分析一节中的函数来描述如何使用此思想计算对于权重的偏导,以及在这个特定例子中,如何处理相同或重复的权重的情况。

[原文]

=== Example 1: Objective for weight matrix in sparse coding ===

Recall for [[Sparse Coding: Autoencoder Interpretation | sparse coding]], the objective function for the weight matrix <math>A</math>, given the feature matrix <math>s</math>:

:<math>F(A; s) = \lVert As - x \rVert_2^2 + \gamma \lVert A \rVert_2^2</math>

[初译]

例1:稀疏编码中权矩阵的目标

回想稀疏编码一节中权矩阵<math>A</math>的目标函数,给定特征矩阵<math>s</math>:

:<math>F(A; s) = \lVert As - x \rVert_2^2 + \gamma \lVert A \rVert_2^2</math>

[一审]

例1:稀疏编码中权重矩阵的目标函数

回顾稀疏编码一节中,给定特征矩阵<math>s</math>时权重矩阵<math>A</math>的目标函数:

:<math>F(A; s) = \lVert As - x \rVert_2^2 + \gamma \lVert A \rVert_2^2</math>

[原文]

We would like to find the gradient of <math>F</math> with respect to <math>A</math>, or in symbols, <math>\nabla_A F(A)</math>. Since the objective function is a sum of two terms in <math>A</math>, the gradient is the sum of gradients of each of the individual terms. The gradient of the second term is trivial, so we will consider the gradient of the first term instead.

[初译]

我们希望求<math>F</math>关于<math>A</math>的梯度,即<math>\nabla_A F(A)</math>。因为目标函数是关于<math>A</math>的两项之和,所以它的梯度是各项梯度的和。第二项的梯度是显而易见的,因此我们只考虑第一项的梯度。

[一审]

我们希望求<math>F</math>对于<math>A</math>的梯度,即<math>\nabla_A F(A)</math>。因为目标函数是两个含<math>A</math>的式子之和,所以它的梯度是每个式子的梯度之和。第二项的梯度很容易求,因此我们只考虑第一项的梯度。

[原文]

The first term, <math>\lVert As - x \rVert_2^2</math>, can be seen as an instantiation of neural network taking <math>s</math> as an input, and proceeding in four steps, as described and illustrated in the paragraph and diagram below:

<ol>
<li>Apply <math>A</math> as the weights from the first layer to the second layer.
<li>Subtract <math>x</math> from the activation of the second layer, which uses the identity activation function.
<li>Pass this unchanged to the third layer, via identity weights. Use the square function as the activation function for the third layer.
<li>Sum all the activations of the third layer.
</ol>

[[File:Backpropagation Method Example 1.png | 400px]]

[初译]

第一项,<math>\lVert As - x \rVert_2^2</math>,可以看成一个具体的用<math>s</math>做输入的神经网络,下面描述接下来四步:

<ol>
<li>把<math>A</math>作为第一层到第二层的权重。
<li>将第二层的激励减<math>x</math>,它使用了恒等激励函数。
<li>通过单位权重将结果不变地传到第三层。在第三层使用平方函数作为激励函数。
<li>将第三层的所有激励相加。
</ol>

[一审]

第一项,<math>\lVert As - x \rVert_2^2</math>,可以看成一个以<math>s</math>为输入的神经网络的实例,按下文描述、下图所示的四个步骤进行:

<ol>
<li>把<math>A</math>作为第一层到第二层的权重。
<li>将第二层的激励减去<math>x</math>,第二层使用恒等激励函数。
<li>通过恒等权重将结果不变地传到第三层。在第三层使用平方函数作为激励函数。
<li>将第三层的所有激励相加。
</ol>

[原文]

The weights and activation functions of this network are as follows:

<table align="center">
<tr><th width="50px">Layer</th><th width="200px">Weight</th><th width="200px">Activation function <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>A</math></td>
<td><math>f(z_i) = z_i</math> (identity)</td>
</tr>
<tr>
<td>2</td>
<td><math>I</math> (identity)</td>
<td><math>f(z_i) = z_i - x_i</math></td>
</tr>
<tr>
<td>3</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
</table>

To have <math>J(z^{(3)}) = F(x)</math>, we can set <math>J(z^{(3)}) = \sum_k J(z^{(3)}_k)</math>.

[初译]

该网络的权重和激励函数如下表所示:

<table align="center">
<tr><th width="50px">层</th><th width="200px">权重</th><th width="200px">激励函数 <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>A</math></td>
<td><math>f(z_i) = z_i</math> (恒等)</td>
</tr>
<tr>
<td>2</td>
<td><math>I</math> (单位阵)</td>
<td><math>f(z_i) = z_i - x_i</math></td>
</tr>
<tr>
<td>3</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
</table>

为了使<math>J(z^{(3)}) = F(x)</math>,我们可令<math>J(z^{(3)}) = \sum_k J(z^{(3)}_k)</math>。

[一审]

该网络的权重和激励函数如下表所示:

<table align="center">
<tr><th width="50px">层</th><th width="200px">权重</th><th width="200px">激励函数 <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>A</math></td>
<td><math>f(z_i) = z_i</math> (恒等)</td>
</tr>
<tr>
<td>2</td>
<td><math>I</math> (单位阵)</td>
<td><math>f(z_i) = z_i - x_i</math></td>
</tr>
<tr>
<td>3</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
</table>

为了使<math>J(z^{(3)}) = F(x)</math>,我们可令<math>J(z^{(3)}) = \sum_k J(z^{(3)}_k)</math>。
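
As a quick sanity check of the network just described, here is a minimal NumPy sketch. It assumes <math>s</math> and <math>x</math> are single vectors and uses random data; the variable names follow the notation above, and the sketch simply runs the three layers and confirms that summing the third-layer activations reproduces <math>\lVert As - x \rVert_2^2</math>.

<pre>
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3                      # assumed sizes: x is n-dimensional, s is k-dimensional
A = rng.normal(size=(n, k))      # layer-1 weights
s = rng.normal(size=k)           # network input (feature vector)
x = rng.normal(size=n)

# Layer 2: apply the weights A, then the activation f(z) = z - x.
a2 = A @ s - x
# Layer 3: identity weights I, square activation f(z) = z^2.
a3 = (np.eye(n) @ a2) ** 2
# J(z^{(3)}): sum of all third-layer activations.
J = a3.sum()

assert np.isclose(J, np.linalg.norm(A @ s - x) ** 2)
</pre>
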
[原文]

Once we see <math>F</math> as a neural network, the gradient <math>\nabla_X F</math> becomes easy to compute - applying backpropagation yields:

<table align="center">
<tr><th width="50px">Layer</th><th width="200px">Derivative of activation function <math>f'</math></th><th width="200px">Delta</th><th>Input <math>z</math> to this layer</th></tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>As - x</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>As</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( A^T \delta^{(2)} \right) \bullet 1</math></td>
<td><math>s</math></td>
</tr>
</table>

Hence,
:<math>
\begin{align}
\nabla_X F & = A^T I^T 2(As - x) \\
& = A^T 2(As - x)
\end{align}
</math>

[初译]

一旦我们将<math>F</math>看成神经网络,梯度<math>\nabla_X F</math>就变得容易求了——用反向传播产生:

<table align="center">
<tr><th width="50px">层</th><th width="200px">激励函数的导数<math>f'</math></th><th width="200px">Delta</th><th>该层输入<math>z</math></th></tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>As - x</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>As</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( A^T \delta^{(2)} \right) \bullet 1</math></td>
<td><math>s</math></td>
</tr>
</table>

因此
:<math>
\begin{align}
\nabla_X F & = A^T I^T 2(As - x) \\
& = A^T 2(As - x)
\end{align}
</math>

[一审]

一旦我们将<math>F</math>看成神经网络,梯度<math>\nabla_X F</math>就变得容易求了——用反向传播得到:

<table align="center">
<tr><th width="50px">层</th><th width="200px">激励函数的导数<math>f'</math></th><th width="200px">Delta</th><th>该层输入<math>z</math></th></tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>As - x</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>As</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( A^T \delta^{(2)} \right) \bullet 1</math></td>
<td><math>s</math></td>
</tr>
</table>

因此
:<math>
\begin{align}
\nabla_X F & = A^T I^T 2(As - x) \\
& = A^T 2(As - x)
\end{align}
</math>
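
Continuing the sketch from Example 1 (again with <math>s</math> and <math>x</math> assumed to be single vectors), the deltas in the table can be formed explicitly and the resulting expression <math>A^T 2(As - x)</math> compared against a finite-difference estimate of the gradient of <math>\lVert As - x \rVert_2^2</math> with respect to the input <math>s</math>:

<pre>
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
A = rng.normal(size=(n, k))
s = rng.normal(size=k)
x = rng.normal(size=n)

F = lambda s_: np.linalg.norm(A @ s_ - x) ** 2   # first term of F(A; s)

# Deltas, following the table above (identity factors written out explicitly).
delta3 = 2 * (A @ s - x)                 # f'(z) = 2z at layer 3
delta2 = (np.eye(n).T @ delta3) * 1.0    # (I^T delta^{(3)}) . 1
grad_s = A.T @ delta2                    # backpropagate through the layer-1 weights A

# Finite-difference check of the gradient with respect to the input s.
h = 1e-6
numeric = np.array([(F(s + h * e) - F(s - h * e)) / (2 * h) for e in np.eye(k)])
assert np.allclose(grad_s, numeric, atol=1e-4)
</pre>
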
[原文]

=== Example 2: Smoothed topographic L1 sparsity penalty in sparse coding ===

Recall the smoothed topographic L1 sparsity penalty on <math>s</math> in [[Sparse Coding: Autoencoder Interpretation | sparse coding]]:
:<math>\sum{ \sqrt{Vss^T + \epsilon} }</math>
where <math>V</math> is the grouping matrix, <math>s</math> is the feature matrix and <math>\epsilon</math> is a constant.

[初译]

例2:稀疏编码中的平滑地形L1稀疏罚函数

回想稀疏编码一节中的平滑地形L1稀疏罚函数<math>s</math>:
:<math>\sum{ \sqrt{Vss^T + \epsilon} }</math>
,其中<math>V</math>是分组矩阵,<math>s</math>是特征矩阵,<math>\epsilon</math>是一个常数。

[一审]

例2:稀疏编码中的平滑地形L1稀疏罚函数

回顾稀疏编码一节中对<math>s</math>的平滑地形L1稀疏罚函数:
:<math>\sum{ \sqrt{Vss^T + \epsilon} }</math>
,其中<math>V</math>是分组矩阵,<math>s</math>是特征矩阵,<math>\epsilon</math>是一个常数。

[原文]

We would like to find <math>\nabla_s \sum{ \sqrt{Vss^T + \epsilon} }</math>. As above, let's see this term as an instantiation of a neural network:

[[File:Backpropagation Method Example 2.png | 600px]]

[初译]

我们希望求<math>\nabla_s \sum{ \sqrt{Vss^T + \epsilon} }</math>。像上面那样,我们把这一项看做一个具体的神经网络:

[一审]

我们希望求<math>\nabla_s \sum{ \sqrt{Vss^T + \epsilon} }</math>。像上面那样,我们把这一项看做一个神经网络的实例:

[原文]

The weights and activation functions of this network are as follows:

<table align="center">
<tr><th width="50px">Layer</th><th width="200px">Weight</th><th width="200px">Activation function <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
<tr>
<td>2</td>
<td><math>V</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>3</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i + \epsilon</math></td>
</tr>
<tr>
<td>4</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^{\frac{1}{2}}</math></td>
</tr>
</table>

To have <math>J(z^{(4)}) = F(x)</math>, we can set <math>J(z^{(4)}) = \sum_k J(z^{(4)}_k)</math>.

[初译]

该网络的权重和激励函数如下表所示:

<table align="center">
<tr><th width="50px">层</th><th width="200px">权重</th><th width="200px">激励函数 <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
<tr>
<td>2</td>
<td><math>V</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>3</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i + \epsilon</math></td>
</tr>
<tr>
<td>4</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^{\frac{1}{2}}</math></td>
</tr>
</table>

为使 <math>J(z^{(4)}) = F(x)</math>,我们可令 <math>J(z^{(4)}) = \sum_k J(z^{(4)}_k)</math>。

[一审]

该网络的权重和激励函数如下表所示:

<table align="center">
<tr><th width="50px">层</th><th width="200px">权重</th><th width="200px">激励函数 <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
<tr>
<td>2</td>
<td><math>V</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>3</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i + \epsilon</math></td>
</tr>
<tr>
<td>4</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^{\frac{1}{2}}</math></td>
</tr>
</table>

为使 <math>J(z^{(4)}) = F(x)</math>,我们可令 <math>J(z^{(4)}) = \sum_k J(z^{(4)}_k)</math>。
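
A minimal NumPy sketch of these four layers, under the assumption that <math>s</math> is a single feature column, so that the elementwise square produced by layer 1 plays the role of <math>ss^T</math> (as in the layer-by-layer description above); <math>V</math>, <math>s</math> and <math>\epsilon</math> are random stand-ins:

<pre>
import numpy as np

rng = np.random.default_rng(1)
k, g = 4, 6                          # assumed feature size and number of groups
V = rng.uniform(size=(g, k))         # grouping matrix (nonnegative entries)
s = rng.normal(size=k)               # single feature vector (assumption)
eps = 1e-2

a1 = (np.eye(k) @ s) ** 2            # layer 1: identity weights, square activation
a2 = V @ a1                          # layer 2: weights V, identity activation
a3 = np.eye(g) @ a2 + eps            # layer 3: identity weights, f(z) = z + epsilon
a4 = np.sqrt(a3)                     # layer 4: f(z) = z^{1/2}
J = a4.sum()                         # J(z^{(4)}): sum of the last layer

assert np.isclose(J, np.sum(np.sqrt(V @ s**2 + eps)))
</pre>
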
[原文]

Once we see <math>F</math> as a neural network, the gradient <math>\nabla_X F</math> becomes easy to compute - applying backpropagation yields:

<table align="center">
<tr><th width="50px">Layer</th><th width="200px">Derivative of activation function <math>f'</math>
</th><th width="200px">Delta</th><th>Input <math>z</math> to this layer</th></tr>
<tr>
<td>4</td>
<td><math>f'(z_i) = \frac{1}{2} z_i^{-\frac{1}{2}}</math></td>
<td><math>f'(z_i) = \frac{1}{2} z_i^{-\frac{1}{2}}</math></td>
<td><math>(Vss^T + \epsilon)</math></td>
</tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(4)} \right) \bullet 1</math></td>
<td><math>Vss^T</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( V^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>ss^T</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>\left( I^T \delta^{(2)} \right) \bullet 2s</math></td>
<td><math>s</math></td>
</tr>
</table>

Hence,
:<math>
\begin{align}
\nabla_X F & = I^T V^T I^T \frac{1}{2}(Vss^T + \epsilon)^{-\frac{1}{2}} \bullet 2s \\
& = V^T \frac{1}{2}(Vss^T + \epsilon)^{-\frac{1}{2}} \bullet 2s \\
& = V^T (Vss^T + \epsilon)^{-\frac{1}{2}} \bullet s
\end{align}
</math>

[初译]

一旦我们把<math>F</math>看做一个神经网络,梯度<math>\nabla_X F</math>变得好求了——使用反向传播产生:

<table align="center">
<tr><th width="50px">层</th><th width="200px">激励函数的导数 <math>f'</math>
</th><th width="200px">Delta</th><th>该层输入<math>z</math></th></tr>
<tr>
<td>4</td>
<td><math>f'(z_i) = \frac{1}{2} z_i^{-\frac{1}{2}}</math></td>
<td><math>f'(z_i) = \frac{1}{2} z_i^{-\frac{1}{2}}</math></td>
<td><math>(Vss^T + \epsilon)</math></td>
</tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(4)} \right) \bullet 1</math></td>
<td><math>Vss^T</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( V^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>ss^T</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>\left( I^T \delta^{(2)} \right) \bullet 2s</math></td>
<td><math>s</math></td>
</tr>
</table>

因此
:<math>
\begin{align}
\nabla_X F & = I^T V^T I^T \frac{1}{2}(Vss^T + \epsilon)^{-\frac{1}{2}} \bullet 2s \\
& = V^T \frac{1}{2}(Vss^T + \epsilon)^{-\frac{1}{2}} \bullet 2s \\
& = V^T (Vss^T + \epsilon)^{-\frac{1}{2}} \bullet s
\end{align}
</math>

[一审]

一旦我们把<math>F</math>看做一个神经网络,梯度<math>\nabla_X F</math>变得很容易计算——使用反向传播得到:

<table align="center">
<tr><th width="50px">层</th><th width="200px">激励函数的导数 <math>f'</math>
</th><th width="200px">Delta</th><th>该层输入<math>z</math></th></tr>
<tr>
<td>4</td>
<td><math>f'(z_i) = \frac{1}{2} z_i^{-\frac{1}{2}}</math></td>
<td><math>f'(z_i) = \frac{1}{2} z_i^{-\frac{1}{2}}</math></td>
<td><math>(Vss^T + \epsilon)</math></td>
</tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(4)} \right) \bullet 1</math></td>
<td><math>Vss^T</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( V^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>ss^T</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>\left( I^T \delta^{(2)} \right) \bullet 2s</math></td>
<td><math>s</math></td>
</tr>
</table>

因此
:<math>
\begin{align}
\nabla_X F & = I^T V^T I^T \frac{1}{2}(Vss^T + \epsilon)^{-\frac{1}{2}} \bullet 2s \\
& = V^T \frac{1}{2}(Vss^T + \epsilon)^{-\frac{1}{2}} \bullet 2s \\
& = V^T (Vss^T + \epsilon)^{-\frac{1}{2}} \bullet s
\end{align}
</math>
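
Under the same single-feature-vector assumption as in the sketch above, and reading the final <math>\bullet s</math> as an elementwise product, the resulting expression can be checked against a finite-difference gradient:

<pre>
import numpy as np

rng = np.random.default_rng(1)
k, g = 4, 6
V = rng.uniform(size=(g, k))
s = rng.normal(size=k)
eps = 1e-2

F = lambda s_: np.sum(np.sqrt(V @ s_**2 + eps))   # smoothed penalty for a single feature vector

# V^T (V s s^T + eps)^{-1/2} . s, with the bullet taken as an elementwise product.
grad_s = (V.T @ (V @ s**2 + eps) ** -0.5) * s

h = 1e-6
numeric = np.array([(F(s + h * e) - F(s - h * e)) / (2 * h) for e in np.eye(k)])
assert np.allclose(grad_s, numeric, atol=1e-4)
</pre>
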
[原文]

=== Example 3: ICA reconstruction cost ===

Recall the [[Independent Component Analysis | independent component analysis (ICA)]] reconstruction cost term:
<math>\lVert W^TWx - x \rVert_2^2</math>
where <math>W</math> is the weight matrix and <math>x</math> is the input.

[初译]

例3:ICA重建成本

回顾独立成分分析(ICA)一节重建成本项:<math>\lVert W^TWx - x \rVert_2^2</math>,其中<math>W</math>是权矩阵,<math>x</math>是输入。

[一审]

例3:ICA重建成本

回顾独立成分分析(ICA)一节中的重建成本项:<math>\lVert W^TWx - x \rVert_2^2</math>,其中<math>W</math>是权重矩阵,<math>x</math>是输入。

[原文]

We would like to find <math>\nabla_W \lVert W^TWx - x \rVert_2^2</math> - the derivative of the term with respect to the '''weight matrix''', rather than the '''input''' as in the earlier two examples. We will still proceed similarly though, seeing this term as an instantiation of a neural network:

[[File:Backpropagation Method Example 3.png | 400px]]

[初译]

我们希望计算<math>\nabla_W \lVert W^TWx - x \rVert_2^2</math>——该项关于权矩阵的导数,而不是像前两例中关于输入的导数。不过我们仍然使用相似的过程,把该项看做一个具体的神经网络:

[一审]

我们希望计算<math>\nabla_W \lVert W^TWx - x \rVert_2^2</math>——该项关于权重矩阵的导数,而不是像前两例中对于输入的导数。不过我们仍然用类似的方法处理,把该项看做一个神经网络的实例:

[原文]

The weights and activation functions of this network are as follows:

<table align="center">
<tr><th width="50px">Layer</th><th width="200px">Weight</th><th width="200px">Activation function <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>W</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>2</td>
<td><math>W^T</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>3</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i - x_i</math></td>
</tr>
<tr>
<td>4</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
</table>

To have <math>J(z^{(4)}) = F(x)</math>, we can set <math>J(z^{(4)}) = \sum_k J(z^{(4)}_k)</math>.

[初译]

该网络的权重和激励函数如下表所示:

<table align="center">
<tr><th width="50px">层</th><th width="200px">权重</th><th width="200px">激励函数 <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>W</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>2</td>
<td><math>W^T</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>3</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i - x_i</math></td>
</tr>
<tr>
<td>4</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
</table>

为使<math>J(z^{(4)}) = F(x)</math>,我们可令<math>J(z^{(4)}) = \sum_k J(z^{(4)}_k)</math>。

[一审]

该网络的权重和激励函数如下表所示:

<table align="center">
<tr><th width="50px">层</th><th width="200px">权重</th><th width="200px">激励函数 <math>f</math></th></tr>
<tr>
<td>1</td>
<td><math>W</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>2</td>
<td><math>W^T</math></td>
<td><math>f(z_i) = z_i</math></td>
</tr>
<tr>
<td>3</td>
<td><math>I</math></td>
<td><math>f(z_i) = z_i - x_i</math></td>
</tr>
<tr>
<td>4</td>
<td>N/A</td>
<td><math>f(z_i) = z_i^2</math></td>
</tr>
</table>

为使<math>J(z^{(4)}) = F(x)</math>,我们可令<math>J(z^{(4)}) = \sum_k J(z^{(4)}_k)</math>。
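
Again, a minimal NumPy sketch of the four layers just tabulated, assuming <math>x</math> is a single input vector and <math>W</math> a random weight matrix; it confirms that the summed fourth-layer activations equal <math>\lVert W^TWx - x \rVert_2^2</math>:

<pre>
import numpy as np

rng = np.random.default_rng(2)
n, f = 6, 4                        # assumed input size and number of features
W = rng.normal(size=(f, n))        # weight matrix
x = rng.normal(size=n)

a2 = W @ x                         # layer 1 -> 2: weights W, identity activation
a3 = W.T @ a2 - x                  # layer 2 -> 3: weights W^T, then layer-3 activation f(z) = z - x
a4 = (np.eye(n) @ a3) ** 2         # layer 3 -> 4: identity weights, square activation
J = a4.sum()                       # J(z^{(4)})

assert np.isclose(J, np.linalg.norm(W.T @ W @ x - x) ** 2)
</pre>
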
[原文]

Now that we can see <math>F</math> as a neural network, we can try to compute the gradient <math>\nabla_W F</math>. However, we now face the difficulty that <math>W</math> appears twice in the network. Fortunately, it turns out that if <math>W</math> appears multiple times in the network, the gradient with respect to <math>W</math> is simply the sum of gradients for each instance of <math>W</math> in the network (you may wish to work out a formal proof of this fact to convince yourself). With this in mind, we will proceed to work out the deltas first:

<table align="center">
<tr><th width="50px">Layer</th><th width="200px">Derivative of activation function <math>f'</math>
</th><th width="200px">Delta</th><th>Input <math>z</math> to this layer</th></tr>
<tr>
<td>4</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>(W^TWx - x)</math></td>
</tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(4)} \right) \bullet 1</math></td>
<td><math>W^TWx</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( (W^T)^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>Wx</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( W^T \delta^{(2)} \right) \bullet 1</math></td>
<td><math>x</math></td>
</tr>
</table>

[初译]

既然我们可将<math>F</math>看做神经网络,我们就能计算出梯度<math>\nabla_W F</math>。然而,我们还要面对<math>W</math>在网络中出现两次的困难。幸运的是,可以证明如果<math>W</math>在网络中出现两次,那么关于<math>W</math>的梯度是网络中每个含<math>W</math>项的简单相加(你可能希望给出这一事实的严格证明来说服自己)。有了这个事实,我们接下来先计算delta。

<table align="center">
<tr><th width="50px">层</th><th width="200px">激励函数的导数 <math>f'</math>
</th><th width="200px">Delta</th><th>该层输入 <math>z</math></th></tr>
<tr>
<td>4</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>(W^TWx - x)</math></td>
</tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(4)} \right) \bullet 1</math></td>
<td><math>W^TWx</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( (W^T)^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>Wx</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( W^T \delta^{(2)} \right) \bullet 1</math></td>
<td><math>x</math></td>
</tr>
</table>

[一审]

既然我们可将<math>F</math>看做神经网络,我们就能计算出梯度<math>\nabla_W F</math>。然而,我们现在面临<math>W</math>在网络中出现两次的困难。幸运的是,可以证明如果<math>W</math>在网络中出现多次,那么关于<math>W</math>的梯度就是网络中每个<math>W</math>实例的梯度之和(你可以尝试自己给出这一事实的严格证明来说服自己)。知道这一点后,我们接下来先计算delta:

<table align="center">
<tr><th width="50px">层</th><th width="200px">激励函数的导数 <math>f'</math>
</th><th width="200px">Delta</th><th>该层输入 <math>z</math></th></tr>
<tr>
<td>4</td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>f'(z_i) = 2z_i</math></td>
<td><math>(W^TWx - x)</math></td>
</tr>
<tr>
<td>3</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( I^T \delta^{(4)} \right) \bullet 1</math></td>
<td><math>W^TWx</math></td>
</tr>
<tr>
<td>2</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( (W^T)^T \delta^{(3)} \right) \bullet 1</math></td>
<td><math>Wx</math></td>
</tr>
<tr>
<td>1</td>
<td><math>f'(z_i) = 1</math></td>
<td><math>\left( W^T \delta^{(2)} \right) \bullet 1</math></td>
<td><math>x</math></td>
</tr>
</table>
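
A short continuation of the sketch above forms these deltas exactly as in the table; note that the layer-2 delta uses <math>(W^T)^T = W</math>:

<pre>
import numpy as np

rng = np.random.default_rng(2)
n, f = 6, 4
W = rng.normal(size=(f, n))
x = rng.normal(size=n)

r = W.T @ W @ x - x                      # residual W^T W x - x

delta4 = 2 * r                           # f'(z) = 2z at layer 4
delta3 = (np.eye(n).T @ delta4) * 1.0    # (I^T delta^{(4)}) . 1
delta2 = W @ delta3                      # ((W^T)^T delta^{(3)}) . 1, since (W^T)^T = W
delta1 = (W.T @ delta2) * 1.0            # (W^T delta^{(2)}) . 1

print(delta2.shape, delta1.shape)        # (f,) and (n,), matching layers 2 and 1
</pre>
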
[原文]

To find the gradients with respect to <math>W</math>, first we find the gradients with respect to each instance of <math>W</math> in the network.

With respect to <math>W^T</math>:
:<math>
\begin{align}
\nabla_{W^T} F & = \delta^{(3)} a^{(2)T} \\
& = 2(W^TWx - x) (Wx)^T
\end{align}
</math>

With respect to <math>W</math>:
:<math>
\begin{align}
\nabla_{W} F & = \delta^{(2)} a^{(1)T} \\
& = W(2(W^TWx -x)) x^T
\end{align}
</math>

[初译]

为计算关于<math>W</math>的梯度,首先计算网络中每个<math>W</math>实例的梯度。

关于 <math>W^T</math>:
:<math>
\begin{align}
\nabla_{W^T} F & = \delta^{(3)} a^{(2)T} \\
& = 2(W^TWx - x) (Wx)^T
\end{align}
</math>

关于 <math>W</math>:
:<math>
\begin{align}
\nabla_{W} F & = \delta^{(2)} a^{(1)T} \\
& = W(2(W^TWx -x)) x^T
\end{align}
</math>

[一审]

为计算对于<math>W</math>的梯度,首先计算对网络中每个<math>W</math>实例的梯度。

对于 <math>W^T</math>:
:<math>
\begin{align}
\nabla_{W^T} F & = \delta^{(3)} a^{(2)T} \\
& = 2(W^TWx - x) (Wx)^T
\end{align}
</math>

对于 <math>W</math>:
:<math>
\begin{align}
\nabla_{W} F & = \delta^{(2)} a^{(1)T} \\
& = W(2(W^TWx -x)) x^T
\end{align}
</math>

[原文]

Taking sums, noting that we need to transpose the gradient with respect to <math>W^T</math> to get the gradient with respect to <math>W</math>, yields the final gradient with respect to <math>W</math> (pardon the slight abuse of notation here):

:<math>
\begin{align}
\nabla_{W} F & = \nabla_{W} F + (\nabla_{W^T} F)^T \\
& = W(2(W^TWx -x)) x^T + 2(Wx)(W^TWx - x)^T
\end{align}
</math>

[初译]

求和得到关于<math>W</math>的最终梯度,注意我们需要求关于<math>W^T</math>梯度的转置来得到关于<math>W</math>的梯度(原谅我在这里稍稍乱用了符号):

:<math>
\begin{align}
\nabla_{W} F & = \nabla_{W} F + (\nabla_{W^T} F)^T \\
& = W(2(W^TWx -x)) x^T + 2(Wx)(W^TWx - x)^T
\end{align}
</math>

[一审]

求和得到对于<math>W</math>的最终梯度,注意我们需要对关于<math>W^T</math>的梯度取转置才能得到关于<math>W</math>的梯度(原谅我在这里稍稍滥用了符号):

:<math>
\begin{align}
\nabla_{W} F & = \nabla_{W} F + (\nabla_{W^T} F)^T \\
& = W(2(W^TWx -x)) x^T + 2(Wx)(W^TWx - x)^T
\end{align}
</math>
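
Putting the pieces together, a minimal NumPy sketch (same assumed shapes and random data as above) sums the two per-instance gradients, transposing the <math>\nabla_{W^T}</math> term, and compares the result against a finite-difference gradient of <math>\lVert W^TWx - x \rVert_2^2</math> with respect to <math>W</math>:

<pre>
import numpy as np

rng = np.random.default_rng(2)
n, f = 6, 4
W = rng.normal(size=(f, n))
x = rng.normal(size=n)

F = lambda W_: np.linalg.norm(W_.T @ W_ @ x - x) ** 2

r = W.T @ W @ x - x
grad_wt = np.outer(2 * r, W @ x)          # gradient for the W^T instance: delta^{(3)} a^{(2)T}
grad_w  = np.outer(W @ (2 * r), x)        # gradient for the W instance:   delta^{(2)} a^{(1)T}
grad_total = grad_w + grad_wt.T           # sum, transposing the W^T-instance gradient

# Finite-difference check, entry by entry.
h = 1e-6
numeric = np.zeros_like(W)
for i in range(f):
    for j in range(n):
        E = np.zeros_like(W)
        E[i, j] = h
        numeric[i, j] = (F(W + E) - F(W - E)) / (2 * h)

assert np.allclose(grad_total, numeric, atol=1e-4)
</pre>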