Linear Decoders

== Sparse Autoencoder Recap ==

In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer and an output layer.  In our previous description of autoencoders (and of neural networks), every neuron in the network used the same activation function.  In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function.  This will result in a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.

Recall that each neuron (in the output layer) computed the following:

<math>
\begin{align}
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
a^{(3)} &= f(z^{(3)})
\end{align}
</math>

where <math>a^{(3)}</math> is the output.  In the autoencoder, <math>a^{(3)}</math> is our approximate reconstruction of the input <math>x = a^{(1)}</math>.

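For concreteness, here is a minimal NumPy sketch of this forward pass.  The layer sizes, the variable names (<tt>W1</tt>, <tt>b1</tt>, <tt>W2</tt>, <tt>b2</tt>), and the random initialization are illustrative assumptions, not part of the original notes:

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: a 64-dimensional input and 25 hidden units.
n_input, n_hidden = 64, 25
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(n_hidden, n_input))  # W^(1)
b1 = np.zeros(n_hidden)                                # b^(1)
W2 = rng.normal(scale=0.01, size=(n_input, n_hidden))  # W^(2)
b2 = np.zeros(n_input)                                 # b^(2)

x = rng.random(n_input)       # input a^(1), values in [0,1]
a2 = sigmoid(W1 @ x + b1)     # hidden activations a^(2)
z3 = W2 @ a2 + b2             # z^(3) = W^(2) a^(2) + b^(2)
a3 = sigmoid(z3)              # a^(3) = f(z^(3)), with a sigmoid output layer
</pre>
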
Because we used a sigmoid activation function for <math>f(z^{(3)})</math>, we needed to constrain or scale the inputs to be in the range <math>[0,1]</math>, since the sigmoid function outputs values in that range.  Some datasets, such as MNIST, fit this scaling naturally, but in general the constraint can be hard to satisfy.  For example, if one uses PCA whitening, the input is no longer constrained to <math>[0,1]</math> and it's not clear what the best way is to scale the data to ensure it fits into the constrained range.

== Linear Decoder ==

One easy fix for this problem is to set <math>a^{(3)} = z^{(3)}</math>.  Formally, this is achieved by having the output nodes use an activation function that is the identity function <math>f(z) = z</math>, so that <math>a^{(3)} = f(z^{(3)}) = z^{(3)}</math>.  This particular activation function is called the '''linear activation function''' (though perhaps "identity activation function" would have been a better name).  Note however that in the hidden layer of the network we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by (say) <math>\textstyle a^{(2)} = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>\sigma(\cdot)</math> is the sigmoid function, <math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units.  It is only in the ''output'' layer that we use the linear activation function.

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a '''linear decoder'''.  In this model, we have <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}</math>.  Because the output <math>\hat{x}</math> is now a linear function of the hidden unit activations, by varying <math>W^{(2)}</math>, each output unit <math>a^{(3)}</math> can be made to produce values greater than 1 or less than 0 as well.  This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.

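Continuing the hypothetical NumPy setup from the earlier sketch, the only change relative to the previous forward pass is dropping the sigmoid at the output:

<pre>
# Linear decoder forward pass: only the output activation changes.
a2 = sigmoid(W1 @ x + b1)  # hidden layer still uses the sigmoid
z3 = W2 @ a2 + b2
xhat = z3                  # a^(3) = z^(3): the identity activation, so the
                           # reconstruction may lie outside [0,1]
</pre>
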
Since we have changed the activation function of the output units, the gradients of the output units also change.  Recall that for each output unit, we had set the error terms as follows:

:<math>
\begin{align}
\delta_i^{(3)}
= \frac{\partial}{\partial z_i} \;\;
        \frac{1}{2} \left\|y - \hat{x}\right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z_i^{(3)})
\end{align}
</math>

where <math>y = x</math> is the desired output, <math>\hat{x}</math> is the output of our autoencoder, and <math>f(\cdot)</math> is our activation function.  Because in the output layer we now have <math>f(z) = z</math>, this implies <math>f'(z) = 1</math>, and so the expression above simplifies to:

:<math>
\begin{align}
\delta_i^{(3)} = - (y_i - \hat{x}_i)
\end{align}
</math>

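In code, the simplified error term is just the negative reconstruction residual (again continuing the assumed setup from the sketches above):

<pre>
y = x                 # for an autoencoder, the target is the input itself
delta3 = -(y - xhat)  # delta_i^(3) = -(y_i - xhat_i); the f'(z) factor is
                      # gone because f'(z) = 1 at the linear output layer
</pre>
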
Of course, when using backpropagation to compute the error terms for the ''hidden'' layer:

<math>
\begin{align}
\delta^{(2)} &= \left( (W^{(2)})^T\delta^{(3)}\right) \bullet f'(z^{(2)})
\end{align}
</math>

Because the hidden layer is using a sigmoid (or tanh) activation function <math>f</math>, in the equation above <math>f'(\cdot)</math> should be the derivative of the sigmoid (or tanh) function.

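Putting both error terms together, here is a minimal sketch of the resulting gradient computation for a single example.  The helper <tt>sigmoid_prime</tt> and the gradient variable names are assumptions, and the sparsity and weight-decay terms of the sparse autoencoder cost are omitted for brevity:

<pre>
def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # derivative of the sigmoid

z2 = W1 @ x + b1
delta2 = (W2.T @ delta3) * sigmoid_prime(z2)  # elementwise product, as above

# Gradients of the squared reconstruction error for this one example:
W2grad = np.outer(delta3, a2)
b2grad = delta3
W1grad = np.outer(delta2, x)
b1grad = delta2
</pre>
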