Linear Decoders

== Sparse Autoencoder Recap ==
In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer, and an output layer. In our previous description of autoencoders (and of neural networks), every neuron in the network used the same activation function. In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This results in a model that is sometimes simpler to apply and can also be more robust to variations in the parameters.
Recall that each neuron (in the output layer) computed the following:
<math>
\begin{align}
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
a^{(3)} &= f(z^{(3)})
\end{align}
</math>

where <math>a^{(3)}</math> is the output. In the autoencoder, <math>a^{(3)}</math> is our approximate reconstruction of the input <math>x = a^{(1)}</math>.

Because we used a sigmoid activation function for <math>f(z^{(3)})</math>, we needed to constrain or scale the inputs to be in the range <math>[0,1]</math>, since the sigmoid function outputs numbers in the range <math>[0,1]</math>. While some datasets, like MNIST, fit well with this scaling of the output, it can sometimes be awkward to satisfy. For example, if one uses PCA whitening, the input is no longer constrained to <math>[0,1]</math>, and it is not clear what the best way is to scale the data so that it fits into this constrained range.
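
To make the range limitation concrete, here is a small NumPy sketch (a toy setup of our own, with random placeholder parameters rather than the exercise's trained weights): whatever values the input takes, a sigmoid output layer can only produce reconstructions inside <math>(0,1)</math>.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sizes and random placeholder parameters (in practice these come from training).
n_input, n_hidden = 64, 25
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(n_hidden, n_input))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_input, n_hidden))
b2 = np.zeros(n_input)

x = 10.0 * rng.normal(size=n_input)   # real-valued input, not limited to [0,1]

a2 = sigmoid(W1 @ x + b1)             # hidden activations a^(2)
z3 = W2 @ a2 + b2                     # output pre-activation z^(3)
x_hat = sigmoid(z3)                   # sigmoid decoder: a^(3) always lies in (0,1)

print(x.min(), x.max())               # can be far outside [0,1]
print(x_hat.min(), x_hat.max())       # always inside (0,1)
</pre>
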
== Linear Decoder ==

One easy fix for this problem is to set <math>a^{(3)} = z^{(3)}</math>. Formally, this is achieved by having the output nodes use an activation function that is the identity function <math>f(z) = z</math>, so that <math>a^{(3)} = f(z^{(3)}) = z^{(3)}</math>. This particular activation function <math>f(\cdot)</math> is called the '''linear activation function''' (though perhaps "identity activation function" would have been a better name). Note however that in the ''hidden'' layer of the network we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by (say) <math>\textstyle a^{(2)} = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>\sigma(\cdot)</math> is the sigmoid function, <math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units. It is only in the ''output'' layer that we use the linear activation function.

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a '''linear decoder'''. In this model, we have <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}</math>. Because the output <math>\hat{x}</math> is now a linear function of the hidden unit activations, by varying <math>W^{(2)}</math> each output unit <math>a^{(3)}</math> can be made to produce values greater than 1 or less than 0 as well. This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.
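
A minimal NumPy sketch of this forward pass (the function and variable names are our own, not from the exercise code; the parameters would normally come from the trained sparse autoencoder):

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def linear_decoder_forward(x, W1, b1, W2, b2):
    """Forward pass of a linear decoder:
    sigmoid hidden layer, identity (linear) output layer."""
    a2 = sigmoid(W1 @ x + b1)   # hidden activations a^(2), still in (0,1)
    z3 = W2 @ a2 + b2           # output pre-activation z^(3)
    x_hat = z3                  # f(z) = z, so a^(3) = z^(3); unbounded reconstruction
    return x_hat, a2
</pre>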

Since we have changed the activation function of the output units, the gradients of the output units also change. Recall that for each output unit, we had set the error terms as follows:
:<math>
\begin{align}
\delta_i^{(3)}
= \frac{\partial}{\partial z^{(3)}_i} \;\;
        \frac{1}{2} \left\|y - \hat{x}\right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z^{(3)}_i)
\end{align}
</math>
where <math>y = x</math> is the desired output, <math>\hat{x}</math> is the output of our autoencoder, and <math>f(\cdot)</math> is our activation function. Because in the output layer we now have <math>f(z) = z</math>, that implies <math>f'(z) = 1</math>, and thus the above now simplifies to:
:<math>
\begin{align}
\delta_i^{(3)} = - (y_i - \hat{x}_i)
\end{align}
</math>
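
As a quick sanity check on this simplification (a small sketch of our own, not part of the assignment code), we can compare <math>\delta^{(3)}_i = -(y_i - \hat{x}_i)</math> against a numerical derivative of the squared-error cost with respect to <math>z^{(3)}</math>:

<pre>
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=5)        # desired output (y = x for the autoencoder)
z3 = rng.normal(size=5)       # output-layer pre-activation; linear output => x_hat = z3
x_hat = z3

delta3 = -(y - x_hat)         # simplified error term for the linear output layer

# Central-difference gradient of J(z3) = 0.5 * ||y - z3||^2 with respect to z3.
def cost(z):
    return 0.5 * np.sum((y - z) ** 2)

eps = 1e-6
num_grad = np.empty_like(z3)
for i in range(z3.size):
    zp, zm = z3.copy(), z3.copy()
    zp[i] += eps
    zm[i] -= eps
    num_grad[i] = (cost(zp) - cost(zm)) / (2 * eps)

print(np.max(np.abs(delta3 - num_grad)))   # tiny, e.g. on the order of 1e-10
</pre>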

Of course, when using backpropagation to compute the error terms for the ''hidden'' layer:
:<math>
\begin{align}
\delta^{(2)} &= \left( (W^{(2)})^T\delta^{(3)}\right) \bullet f'(z^{(2)})
\end{align}
</math>
Because the hidden layer is using a sigmoid (or tanh) activation <math>f</math>, in the equation above <math>f'(\cdot)</math> should still be the derivative of the sigmoid (or tanh) function.
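
A minimal sketch of this hidden-layer computation, assuming a sigmoid hidden layer so that <math>f'(z) = f(z)(1 - f(z))</math> (the function name is our own):

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_layer_delta(W2, delta3, z2):
    """delta^(2) = ((W^(2))^T delta^(3)) .* sigmoid'(z^(2)),
    using sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))."""
    a2 = sigmoid(z2)
    return (W2.T @ delta3) * a2 * (1.0 - a2)
</pre>
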
{{Languages|线性解码器|中文}}
