== Sparse Autoencoder Recap ==

In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer and an output layer. In our previous description of autoencoders (and of neural networks), every neuron in the neural network used the same activation function. In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This will result in a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.

Recall that each neuron (in the output layer) computed the following:

<math>
\begin{align}
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
a^{(3)} &= f(z^{(3)})
\end{align}
</math>

where <math>a^{(3)}</math> is the output. In the autoencoder, <math>a^{(3)}</math> is our approximate reconstruction of the input <math>x = a^{(1)}</math>.
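
To make this recap concrete, here is a minimal NumPy sketch of the output-layer computation with a sigmoid activation. The function and variable names (<code>reconstruct</code>, <code>a2</code>, <code>W2</code>, <code>b2</code>) are assumptions made for this illustration, not code from the exercises.

<pre>
# Minimal illustrative sketch (not from the exercises): the output-layer
# computation of the autoencoder, with a sigmoid output activation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# a2 : hidden-layer activations a^(2), shape (hidden_size,)
# W2 : output-layer weights W^(2), shape (input_size, hidden_size)
# b2 : output-layer bias b^(2), shape (input_size,)
def reconstruct(a2, W2, b2):
    z3 = W2.dot(a2) + b2   # z^(3) = W^(2) a^(2) + b^(2)
    a3 = sigmoid(z3)       # a^(3) = f(z^(3)), here f = sigmoid
    return a3              # every entry lies in (0, 1)
</pre>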

Because we used a sigmoid activation function for <math>f(z^{(3)})</math>, we needed to constrain or scale the inputs to be in the range <math>[0,1]</math>, since the sigmoid function outputs numbers in the range <math>[0,1]</math>. While some datasets, like MNIST, fit this scaling of the output well, the constraint can sometimes be awkward to satisfy. For example, if one uses PCA whitening, the input is no longer constrained to <math>[0,1]</math>, and it is not obvious how best to scale the data so that it lies in the required range.

== Linear Decoder ==

One easy fix for this problem is to set <math>a^{(3)} = z^{(3)}</math>. Formally, this is achieved by having the output nodes use an activation function that's the identity function <math>f(z) = z</math>, so that <math>a^{(3)} = f(z^{(3)}) = z^{(3)}</math>. This particular activation function <math>f(\cdot)</math> is called the '''linear activation function''' (though perhaps "identity activation function" would have been a better name). Note however that in the ''hidden'' layer of the network, we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by (say) <math>\textstyle a^{(2)} = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>\sigma(\cdot)</math> is the sigmoid function, <math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units. It is only in the ''output'' layer that we use the linear activation function.

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a '''linear decoder'''. In this model, we have <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a^{(2)} + b^{(2)}</math>. Because the output <math>\hat{x}</math> is now a linear function of the hidden unit activations, by varying <math>W^{(2)}</math>, each output unit <math>a^{(3)}</math> can be made to produce values greater than 1 or less than 0 as well. This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.
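
As an illustration, the sketch below shows the full forward pass of such a linear decoder in NumPy. The helper name <code>linear_decoder_forward</code> and the shapes assumed in the comments are hypothetical choices for this example, not part of the notes.

<pre>
# Minimal illustrative sketch: forward pass of a linear decoder --
# a sigmoid hidden layer followed by a linear (identity) output layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# x  : input example, shape (input_size,)
# W1 : hidden-layer weights, shape (hidden_size, input_size);  b1 : (hidden_size,)
# W2 : output-layer weights, shape (input_size, hidden_size);  b2 : (input_size,)
def linear_decoder_forward(x, W1, b1, W2, b2):
    z2 = W1.dot(x) + b1    # hidden-layer pre-activation
    a2 = sigmoid(z2)       # hidden layer still uses the sigmoid
    z3 = W2.dot(a2) + b2   # output-layer pre-activation
    x_hat = z3             # linear activation: a^(3) = z^(3), no squashing
    return x_hat, a2
</pre>

Because the last step applies no squashing nonlinearity, <code>x_hat</code> can take values outside <math>[0,1]</math>, which is what allows, for example, PCA-whitened inputs to be reconstructed directly.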

Since we have changed the activation function of the output units, the gradients of the output units also change. Recall that for each output unit, we had set the error terms as follows:
:<math>
\begin{align}
\delta_i^{(3)}
= \frac{\partial}{\partial z^{(3)}_i} \;\;
\frac{1}{2} \left\|y - \hat{x}\right\|^2 = - (y_i - \hat{x}_i) \cdot f'(z_i^{(3)})
\end{align}
</math>
where <math>y = x</math> is the desired output, <math>\hat{x}</math> is the output of our autoencoder, and <math>f(\cdot)</math> is our activation function. Because in the output layer we now have <math>f(z) = z</math>, this implies <math>f'(z) = 1</math>, and the expression above simplifies to:
:<math>
\begin{align}
\delta_i^{(3)} = - (y_i - \hat{x}_i)
\end{align}
</math>
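
For comparison, here is a small sketch of the two cases, with assumed variable names; recall that for a sigmoid output unit <math>f'(z^{(3)}_i) = a^{(3)}_i (1 - a^{(3)}_i)</math>.

<pre>
# Minimal illustrative sketch: output-layer error terms delta^(3).
import numpy as np

def delta3_sigmoid_output(y, x_hat):
    # general formula with f = sigmoid, where f'(z3) = x_hat * (1 - x_hat)
    return -(y - x_hat) * x_hat * (1.0 - x_hat)

def delta3_linear_output(y, x_hat):
    # linear decoder: f'(z3) = 1, so the derivative factor drops out
    return -(y - x_hat)
</pre>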

Of course, when using backpropagation to compute the error terms for the ''hidden'' layer, the formula is unchanged:
:<math>
\begin{align}
\delta^{(2)} &= \left( (W^{(2)})^T\delta^{(3)}\right) \bullet f'(z^{(2)})
\end{align}
</math>
where <math>\bullet</math> denotes the element-wise product. Because the hidden layer is using a sigmoid (or tanh) activation <math>f</math>, in the equation above <math>f'(\cdot)</math> should still be the derivative of the sigmoid (or tanh) function.
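
Putting the pieces together, the sketch below computes the error terms and weight gradients for a linear decoder on a single training example. The names are assumptions for this illustration, and the sparsity penalty and weight-decay terms of the full sparse autoencoder objective are deliberately omitted.

<pre>
# Minimal illustrative sketch: backpropagation through a linear decoder for one
# example. Sparsity penalty and weight decay are omitted for brevity.
import numpy as np

def linear_decoder_gradients(x, a2, x_hat, W2):
    y = x                                          # autoencoder target is the input itself
    delta3 = -(y - x_hat)                          # linear output layer: f'(z3) = 1
    delta2 = W2.T.dot(delta3) * a2 * (1.0 - a2)    # sigmoid hidden layer: f'(z2) = a2*(1-a2)
    grad_W2 = np.outer(delta3, a2)                 # partial derivative w.r.t. W^(2)
    grad_b2 = delta3                               # partial derivative w.r.t. b^(2)
    grad_W1 = np.outer(delta2, x)                  # partial derivative w.r.t. W^(1)
    grad_b1 = delta2                               # partial derivative w.r.t. b^(1)
    return grad_W1, grad_b1, grad_W2, grad_b2
</pre>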

----
'''Description:''' This tutorial will teach you the main ideas of Unsupervised Feature Learning and Deep Learning. By working through it, you will also get to implement several feature learning/deep learning algorithms, get to see them work for yourself, and learn how to apply/adapt these ideas to new problems.

This tutorial assumes a basic knowledge of machine learning (specifically, familiarity with the ideas of supervised learning, logistic regression, and gradient descent). If you are not familiar with these ideas, we suggest you go to this [http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=MachineLearning Machine Learning course] and complete sections II, III, IV (up to Logistic Regression) first.


'''Sparse Autoencoder'''
* [[Neural Networks]]
* [[Backpropagation Algorithm]]
* [[Gradient checking and advanced optimization]]
* [[Autoencoders and Sparsity]]
* [[Visualizing a Trained Autoencoder]]
* [[Sparse Autoencoder Notation Summary]]
* [[Exercise:Sparse Autoencoder]]


----
'''Note''': The sections above this line are stable. The sections below are still under construction, and may change without notice. Feel free to browse around, however, and feedback/suggestions are welcome.

'''Vectorized implementation'''
* [[Vectorization]]
* [[Logistic Regression Vectorization Example]]
* [[Neural Network Vectorization]]
* [[Exercise:Vectorization]]


'''Preprocessing: PCA and Whitening'''
* [[PCA]]
* [[Whitening]]
* [[Implementing PCA/Whitening]]
* [[Exercise:PCA in 2D]]
* [[Exercise:PCA and Whitening]]


'''Softmax Regression'''
* [[Softmax Regression]]
* [[Exercise:Softmax Regression]]


'''Self-Taught Learning and Unsupervised Feature Learning'''
* [[Self-Taught Learning]]
* [[Exercise:Self-Taught Learning]]


'''Building Deep Networks for Classification'''
* [[Deep Networks: Overview]]
* [[Stacked Autoencoders]]
* [[Fine-tuning Stacked AEs]]
* [[Exercise: Implement deep networks for digit classification]]


'''Working with Large Images'''
* [[Feature extraction using convolution]]
* [[Pooling]]
* [[Multiple layers of convolution and pooling]]


----


'''Advanced Topics''':

[[Restricted Boltzmann Machines]]

[[Deep Belief Networks]]

[[Denoising Autoencoders]]

[[Sparse Coding]]

[[K-means]]

[[Spatial pyramids / Multiscale]]

[[Slow Feature Analysis]]

ICA Style Models:
* [[Independent Component Analysis]]
* [[Topographic Independent Component Analysis]]

[[Tiled Convolution Networks]]

[[Code]]