Exercise: Implement deep networks for digit classification

From Ufldl

=== Step 2: Train the second sparse autoencoder ===
We first forward propagate the training set through the first autoencoder (using <tt>feedForwardAutoencoder.m</tt> that you completed in [[Exercise:Self-Taught_Learning]]) to obtain the hidden unit activations. These activations are then used as the input data for training the second sparse autoencoder. Since this is just a standard autoencoder applied to a new dataset, it should train much like the first one did. Complete this part of the code to learn a second layer of features using your <tt>sparseAutoencoderCost.m</tt> and minFunc.

This part of the exercise demonstrates the idea of greedy layerwise training with the ''same'' learning algorithm reapplied multiple times.
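The greedy layer-wise procedure can be sketched in a few lines. The following is a minimal Python/NumPy illustration, not the exercise's MATLAB code: <tt>train_autoencoder</tt> stands in for <tt>sparseAutoencoderCost.m</tt> plus minFunc (using plain batch gradient descent on squared reconstruction error, with no sparsity penalty), and <tt>feed_forward</tt> plays the role of <tt>feedForwardAutoencoder.m</tt>. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200, seed=0):
    """Train a tiny autoencoder with batch gradient descent.
    Stand-in for sparseAutoencoderCost.m + minFunc (no sparsity term)."""
    rng = np.random.default_rng(seed)
    n_vis, m = X.shape
    W1 = rng.normal(0.0, 0.1, (n_hidden, n_vis)); b1 = np.zeros((n_hidden, 1))
    W2 = rng.normal(0.0, 0.1, (n_vis, n_hidden)); b2 = np.zeros((n_vis, 1))
    for _ in range(epochs):
        A1 = sigmoid(W1 @ X + b1)           # hidden activations
        A2 = sigmoid(W2 @ A1 + b2)          # reconstruction of the input
        d2 = (A2 - X) * A2 * (1 - A2)       # output delta (squared error)
        d1 = (W2.T @ d2) * A1 * (1 - A1)    # backpropagated hidden delta
        W2 -= lr * (d2 @ A1.T) / m; b2 -= lr * d2.mean(axis=1, keepdims=True)
        W1 -= lr * (d1 @ X.T) / m;  b1 -= lr * d1.mean(axis=1, keepdims=True)
    return W1, b1                            # keep only the encoder half

def feed_forward(W1, b1, X):
    """Analogue of feedForwardAutoencoder.m: inputs -> hidden activations."""
    return sigmoid(W1 @ X + b1)

# Greedy layer-wise training: the *same* algorithm, reapplied to the
# previous layer's activations.
X = np.random.default_rng(1).random((8, 100))    # 8 inputs, 100 examples
W1, b1 = train_autoencoder(X, n_hidden=5)        # first autoencoder on raw data
H1 = feed_forward(W1, b1, X)                     # L1 features
W2, b2 = train_autoencoder(H1, n_hidden=3)       # second autoencoder on L1 features
H2 = feed_forward(W2, b2, H1)                    # L2 features, fed to the softmax
print(H2.shape)                                  # (3, 100)
```

Note that only the encoder weights of each autoencoder are retained; the decoder exists solely to define the reconstruction objective during pre-training.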
=== Step 3: Train the softmax classifier on the L2 features ===

Revision as of 00:31, 12 May 2011
