# Linear Decoders


## Sparse Autoencoder Recap

In the sparse autoencoder implementation, we had 3 layers of neurons: the input layer, a hidden layer and an output layer. In that network, every neuron used the same activation function. In these notes, we describe a modified version of the autoencoder that uses a different activation function at the output layer, one which is sometimes easier to apply.

Recall that each neuron (in the output layer) computes the following:

\begin{align} z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\ a^{(3)} &= f(z^{(3)}) \end{align}

where $a^{(3)}$ is the output. In the autoencoder, this is our approximate reconstruction of the input (layer $a^{(1)}$).

Because we used a sigmoid activation function for $f(z^{(3)})$, we needed to constrain or scale the inputs to lie in the range $[0,1]$, since the sigmoid function outputs numbers in the range $[0,1]$. While some datasets, like MNIST, fit this scaling well, the constraint can sometimes be awkward to satisfy. For example, if one uses PCA whitening, the input is no longer constrained to $[0,1]$, and it is not clear what the best way is to scale the data to ensure it fits into the constrained range.
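As a small sketch of the output-layer computation above (all values and dimensions here are made up for illustration), note how the sigmoid forces every reconstructed value into $(0,1)$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative (made-up) values: 2 hidden units, 3 output units.
a2 = np.array([0.9, 0.2])               # hidden activations a^{(2)}
W2 = np.array([[ 1.0, -2.0],
               [ 0.5,  0.5],
               [-1.0,  3.0]])           # output weights W^{(2)}
b2 = np.array([0.1, 0.0, -0.1])         # output bias b^{(2)}

z3 = W2 @ a2 + b2                       # z^{(3)} = W^{(2)} a^{(2)} + b^{(2)}
a3 = sigmoid(z3)                        # a^{(3)} = f(z^{(3)}), with f = sigmoid
# Every entry of a3 lies strictly between 0 and 1, which is why the
# inputs being reconstructed must also be scaled into [0,1].
```

Whatever the weights are, $a^{(3)}$ can never reproduce an input component outside $[0,1]$, which is exactly the limitation discussed above.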

## Linear Decoder

One easy fix for this problem is to set $a^{(3)} = z^{(3)}$. Formally, this is achieved by having the output nodes use an activation function that is the identity function $f(z) = z$. This is sometimes called the **linear activation function** (though perhaps "identity activation function" would have been a better name). Note, however, that in the *hidden* layer of the network we still use a sigmoid (or tanh) activation function, so that the hidden units are (say) $a^{(2)} = \sigma(W^{(1)}x + b^{(1)})$, where $\sigma(\cdot)$ is the sigmoid function, $x$ is the input, and $W^{(1)}$ and $b^{(1)}$ are the weights and bias terms for the hidden units. It is only in the *output* layer that we use the linear activation function.

An autoencoder in this configuration, with a sigmoid (or tanh) hidden layer and a linear output layer, is called a **linear decoder**. In this model, we have $\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}$. Because the output $\hat{x}$ is now a linear function of the hidden unit activations, by varying $W^{(2)}$ each output unit can be made to produce values greater than 1 or less than 0 as well. This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.
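The full forward pass of a linear decoder can be sketched as follows (the parameter values and dimensions here are invented for illustration; the hidden layer keeps the sigmoid while the output layer applies no nonlinearity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative (made-up) parameters: 3-dimensional input, 2 hidden units.
x  = np.array([2.5, -1.0, 4.0])       # real-valued input, e.g. after PCA whitening
W1 = np.array([[0.5, -0.2,  0.1],
               [0.3,  0.4, -0.6]])    # hidden weights W^{(1)}
b1 = np.zeros(2)                      # hidden bias b^{(1)}
W2 = np.array([[ 3.0, -1.0],
               [ 0.5,  2.0],
               [-2.0,  4.0]])         # output weights W^{(2)}
b2 = np.zeros(3)                      # output bias b^{(2)}

a2    = sigmoid(W1 @ x + b1)          # hidden units still use the sigmoid
x_hat = W2 @ a2 + b2                  # linear decoder: a^{(3)} = z^{(3)}, no sigmoid
# x_hat is a linear function of a2, so it is not confined to [0,1]
```

With these particular weights the reconstruction already contains entries above 1 and below 0, which a sigmoid output layer could never produce.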

Since we have changed the activation function of the output units, the gradients of the output units also change. Recall that for each output unit, we had set the error terms as follows:

\begin{align} \delta_i = \frac{\partial}{\partial z_i} \;\; \frac{1}{2} \left\|y - \hat{x}\right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z_i) \end{align}

where $y = x$ is the desired output, $\hat{x}$ is the reconstructed output of our autoencoder, $z$ is the input to the output units, and $f(\cdot)$ is our activation function. Since the activation function for the output units of a linear decoder is just the identity function, we have $f'(z_i) = 1$, and the above simplifies to:

\begin{align} \delta_i = - (y_i - \hat{x}_i) \end{align}
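As a small numerical sketch of this simplification (the values below are made up for illustration), the linear-decoder error term is just the negative reconstruction error, with no $f'(z)$ factor:

```python
import numpy as np

# Hypothetical values for a 4-dimensional example.
y     = np.array([1.5, -0.3, 2.0, 0.7])   # desired output (y = x)
x_hat = np.array([1.2,  0.1, 1.8, 0.9])   # linear-decoder reconstruction

# For a sigmoid output unit: delta_i = -(y_i - x_hat_i) * f'(z_i),
# where f'(z_i) = a3_i * (1 - a3_i).
# For a linear output unit, f'(z_i) = 1, so the derivative factor drops out:
delta = -(y - x_hat)
```

Note that the error terms for the hidden layer are unchanged, since the hidden units still use the sigmoid activation function and its derivative.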