Linear Decoders

== Sparse Autoencoder Recap ==
In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer and an output layer.  In our previous description of autoencoders (and of neural networks), every neuron in the neural network used the same activation function.  In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function.  This will result in a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.

Recall that each neuron (in the output layer) computed the following:
<math>
\begin{align}
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
a^{(3)} &= f(z^{(3)})
\end{align}
</math>

where <math>a^{(3)}</math> is the output.  In the autoencoder, <math>a^{(3)}</math> is our approximate reconstruction of the input <math>x = a^{(1)}</math>.
Because we used a sigmoid activation function for <math>f(z^{(3)})</math>, we needed to constrain or scale the inputs to be in the range <math>[0,1]</math>, since the sigmoid function outputs numbers in the range <math>[0,1]</math>.  While some datasets, such as MNIST, can easily be scaled to this range, the requirement can be hard to satisfy for other inputs.  For example, if one uses PCA whitening, the input is no longer constrained to <math>[0,1]</math> and it's not clear what the best way is to scale the data to ensure it fits into the constrained range.
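As a concrete illustration of this constraint, here is a minimal numpy sketch of the output-layer computation from the recap above; the variable names (<code>W2</code>, <code>b2</code>, <code>a2</code>) and the layer sizes are assumptions made for this illustration, not part of the original notes.  With a sigmoid <math>f</math>, the reconstruction <math>a^{(3)}</math> necessarily lies in <math>[0,1]</math>, regardless of the hidden activations.

<source lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: reconstruct a 64-dimensional input from 25 hidden units.
rng = np.random.default_rng(0)
W2 = rng.normal(scale=0.1, size=(64, 25))  # W^(2): hidden-to-output weights
b2 = np.zeros(64)                          # b^(2): output-layer bias
a2 = sigmoid(rng.normal(size=25))          # some hidden-layer activations a^(2)

z3 = W2 @ a2 + b2   # z^(3) = W^(2) a^(2) + b^(2)
a3 = sigmoid(z3)    # a^(3) = f(z^(3)) with a sigmoid f
assert a3.min() >= 0.0 and a3.max() <= 1.0  # the reconstruction is confined to [0,1]
</source>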
== Linear Decoder ==
One easy fix for this problem is to set <math>a^{(3)} = z^{(3)}</math>.  Formally, this is achieved by having the output nodes use an activation function that's the identity function <math>f(z) = z</math>, so that <math>a^{(3)} = f(z^{(3)}) = z^{(3)}</math>.  This particular activation function <math>f(\cdot)</math> is called the '''linear activation function''' (though perhaps "identity activation function" would have been a better name).  Note however that in the ''hidden'' layer of the network, we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by (say) <math>\textstyle a^{(2)} = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>\sigma(\cdot)</math> is the sigmoid function, <math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units.  It is only in the ''output'' layer that we use the linear activation function.

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a '''linear decoder'''.  In this model, we have <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}</math>.  Because the output <math>\hat{x}</math> is now a linear function of the hidden unit activations, by varying <math>W^{(2)}</math>, each output unit <math>a^{(3)}</math> can be made to produce values greater than 1 or less than 0 as well.  This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.
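As a rough illustration of this configuration, the sketch below keeps a sigmoid hidden layer and uses the identity activation at the output.  The function and variable names (<code>linear_decoder_forward</code>, <code>W1</code>, <code>b1</code>, <code>W2</code>, <code>b2</code>) and the layer sizes are assumptions made for this example, not part of the original notes.

<source lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def linear_decoder_forward(x, W1, b1, W2, b2):
    """Forward pass of a linear decoder: sigmoid hidden layer, linear (identity) output layer."""
    a2 = sigmoid(W1 @ x + b1)   # a^(2) = sigma(W^(1) x + b^(1))
    z3 = W2 @ a2 + b2           # z^(3) = W^(2) a^(2) + b^(2)
    a3 = z3                     # a^(3) = f(z^(3)) = z^(3), since f is the identity
    return a2, a3

# Illustrative sizes: a 64-dimensional (e.g. PCA-whitened) input and 25 hidden units.
rng = np.random.default_rng(0)
x  = rng.normal(size=64)                   # real-valued input, not restricted to [0,1]
W1 = rng.normal(scale=0.1, size=(25, 64))  # hidden-layer weights and bias
b1 = np.zeros(25)
W2 = rng.normal(scale=0.1, size=(64, 25))  # output-layer weights and bias
b2 = np.zeros(64)

a2, xhat = linear_decoder_forward(x, W1, b1, W2, b2)
# xhat may take values below 0 or above 1, so x does not need to be pre-scaled.
</source>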
Since we have changed the activation function of the output units, the gradients of the output units also change.  Recall that for each output unit, we had set the error terms as follows:

:<math>
\begin{align}
\delta_i^{(3)}
= \frac{\partial}{\partial z_i} \;\;
        \frac{1}{2} \left\|y - \hat{x}\right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z_i^{(3)})
\end{align}
</math>

where <math>y = x</math> is the desired output, <math>\hat{x}</math> is the output of our autoencoder, and <math>f(\cdot)</math> is our activation function.  Because in the output layer we now have <math>f(z) = z</math>, that implies <math>f'(z) = 1</math> and thus the above now simplifies to:

:<math>
\begin{align}
\delta_i^{(3)} = - (y_i - \hat{x}_i)
\end{align}
</math>
Of course, when using backpropagation to compute the error terms for the ''hidden'' layer:

:<math>
\begin{align}
\delta^{(2)} &= \left( (W^{(2)})^T\delta^{(3)}\right) \bullet f'(z^{(2)})
\end{align}
</math>

Because the hidden layer is using a sigmoid (or tanh) activation <math>f</math>, in the equation above <math>f'(\cdot)</math> should still be the derivative of the sigmoid (or tanh) function.
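Putting these two error terms together, here is a minimal numpy sketch of the backward pass for a single training example, continuing the hypothetical forward-pass sketch above.  It covers only the squared-error reconstruction term; the sparsity penalty and weight decay of the full sparse-autoencoder objective are omitted for brevity.

<source lang="python">
import numpy as np

def linear_decoder_backward(x, a2, xhat, W2):
    """Error terms and parameter gradients for one example, with target y = x."""
    delta3 = -(x - xhat)                        # delta^(3) = -(y - xhat), since f'(z) = 1 at the output
    delta2 = (W2.T @ delta3) * a2 * (1.0 - a2)  # delta^(2) = (W^(2)^T delta3) .* sigmoid'(z^(2))
    grad_W2 = np.outer(delta3, a2)              # gradients of (1/2) * ||y - xhat||^2
    grad_b2 = delta3
    grad_W1 = np.outer(delta2, x)
    grad_b1 = delta2
    return grad_W1, grad_b1, grad_W2, grad_b2

# Continuing the forward-pass example above:
# grad_W1, grad_b1, grad_W2, grad_b2 = linear_decoder_backward(x, a2, xhat, W2)
</source>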