Exercise: Implement deep networks for digit classification

==Stacked autoencoders for digit classification==
===Overview===
In this exercise, you will use a stacked autoencoder for digit classification. This exercise is very similar to the self-taught learning exercise, in which we trained a digit classifier using an autoencoder layer followed by a softmax layer. The only difference is that in this exercise we will use two autoencoder layers instead of one.
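
To make the architecture concrete, here is a minimal MATLAB sketch (not part of the starter code) of a forward pass through two autoencoder feature layers followed by a softmax classifier; all variable names and layer sizes in it are illustrative assumptions rather than the ones used in the exercise files.

<pre>
% Illustrative sketch only -- not part of the starter code.
% Layer sizes and variable names below are assumptions for this example.
inputSize    = 28*28;   % MNIST digit images are 28x28 pixels
hiddenSizeL1 = 200;     % assumed size of the first autoencoder's hidden layer
hiddenSizeL2 = 200;     % assumed size of the second autoencoder's hidden layer
numClasses   = 10;      % digits 0-9

% Random weights stand in for the layer-wise pre-trained parameters.
W1 = 0.01 * randn(hiddenSizeL1, inputSize);    b1 = zeros(hiddenSizeL1, 1);
W2 = 0.01 * randn(hiddenSizeL2, hiddenSizeL1); b2 = zeros(hiddenSizeL2, 1);
softmaxTheta = 0.01 * randn(numClasses, hiddenSizeL2);

sigmoid = @(z) 1 ./ (1 + exp(-z));
data = rand(inputSize, 5);                     % 5 fake examples, one per column

% Each autoencoder contributes its hidden activations as features.
a2 = sigmoid(bsxfun(@plus, W1 * data, b1));    % features from the first layer
a3 = sigmoid(bsxfun(@plus, W2 * a2,   b2));    % features from the second layer

% Softmax classifier on top of the second feature layer.
scores = softmaxTheta * a3;
scores = bsxfun(@minus, scores, max(scores, [], 1));         % numerical stability
probs  = bsxfun(@rdivide, exp(scores), sum(exp(scores), 1)); % class probabilities
[~, pred] = max(probs, [], 1);                 % predicted digit for each example
</pre>
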
The code you have already implemented will allow you to stack various layers and perform layer-wise training. However, to perform fine-tuning, you will also need to implement backpropagation. We will see that fine-tuning significantly improves the model's performance.

In the file <tt>stacked_ae_exercise.zip</tt>, we have provided some starter code. You will need to edit <tt>stackedAECost.m</tt>. You should also read <tt>stackedAETrain.m</tt> and ensure that you understand the steps.
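
As a rough, self-contained sketch of the computation that the fine-tuning cost function needs to perform, the following MATLAB fragment computes a cross-entropy cost and the backpropagated gradients for a two-layer stack with a softmax output on toy data. It is not the actual <tt>stackedAECost.m</tt> interface: the names and sizes are assumptions, and the weight decay terms and parameter packing used by the starter code are omitted.

<pre>
% Illustrative sketch only -- all names, sizes, and the flat (unpacked)
% parameter layout here are assumptions; the starter code packs parameters
% into a single vector and includes weight decay, which is omitted below.
inputSize = 8; hiddenSizeL1 = 5; hiddenSizeL2 = 4; numClasses = 3; m = 6;

W1 = 0.1 * randn(hiddenSizeL1, inputSize);    b1 = zeros(hiddenSizeL1, 1);
W2 = 0.1 * randn(hiddenSizeL2, hiddenSizeL1); b2 = zeros(hiddenSizeL2, 1);
softmaxTheta = 0.1 * randn(numClasses, hiddenSizeL2);

data   = rand(inputSize, m);                  % toy inputs, one example per column
labels = randi(numClasses, 1, m);             % toy labels in 1..numClasses
groundTruth = full(sparse(labels, 1:m, 1, numClasses, m));  % one-hot matrix

sigmoid = @(z) 1 ./ (1 + exp(-z));

% Forward pass through both feature layers and the softmax layer.
a2 = sigmoid(bsxfun(@plus, W1 * data, b1));
a3 = sigmoid(bsxfun(@plus, W2 * a2,   b2));
scores = softmaxTheta * a3;
scores = bsxfun(@minus, scores, max(scores, [], 1));
probs  = bsxfun(@rdivide, exp(scores), sum(exp(scores), 1));

% Cross-entropy cost averaged over the examples.
cost = -sum(sum(groundTruth .* log(probs))) / m;

% Backpropagation: push the softmax error down through both sigmoid layers,
% using the derivative a .* (1 - a) of the sigmoid activation.
delta3 = -(softmaxTheta' * (groundTruth - probs)) .* a3 .* (1 - a3);
delta2 = (W2' * delta3) .* a2 .* (1 - a2);

softmaxThetaGrad = -(groundTruth - probs) * a3' / m;
W2grad = delta3 * a2'  / m;   b2grad = sum(delta3, 2) / m;
W1grad = delta2 * data' / m;  b1grad = sum(delta2, 2) / m;
</pre>

Checking gradients like these against a numerical approximation before running fine-tuning is a good way to catch backpropagation bugs early.
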
