Exercise: Implement deep networks for digit classification

=== Step 4: Implement fine-tuning ===
To implement fine-tuning, we need to consider all three layers as a single model. Implement <tt>stackedAECost.m</tt> to return the cost and gradient of the model. The cost function should be defined as the log likelihood plus a weight decay term. The gradient should be computed using [[Backpropagation Algorithm | back-propagation as discussed earlier]]. The predictions should consist of the activations of the output layer of the softmax model.
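
As a rough guide, here is a minimal sketch of such a cost/gradient computation for a two-hidden-layer stack with a softmax output. The variable names (<tt>W1</tt>, <tt>b1</tt>, <tt>W2</tt>, <tt>b2</tt>, <tt>softmaxTheta</tt>, <tt>groundTruth</tt>, <tt>lambda</tt>) are illustrative assumptions, not necessarily the parameterization used by the starter code:

<pre>
% Sketch only: assumes two sigmoid hidden layers (W1,b1 and W2,b2)
% followed by a softmax layer (softmaxTheta), with weight decay lambda
% applied to the softmax parameters.
sigmoid = @(z) 1 ./ (1 + exp(-z));
m = size(data, 2);                         % number of examples

% Forward pass through the stack
a1 = data;
a2 = sigmoid(W1 * a1 + repmat(b1, 1, m));  % first hidden layer
a3 = sigmoid(W2 * a2 + repmat(b2, 1, m));  % second hidden layer

% Softmax output (subtract the column max for numerical stability)
z4 = softmaxTheta * a3;
z4 = bsxfun(@minus, z4, max(z4, [], 1));
p  = exp(z4);
p  = bsxfun(@rdivide, p, sum(p, 1));       % class probabilities

% groundTruth is a numClasses x m 0/1 indicator matrix of the labels
cost = -sum(sum(groundTruth .* log(p))) / m ...
       + (lambda / 2) * sum(softmaxTheta(:) .^ 2);

% Backward pass (back-propagation)
softmaxThetaGrad = -(groundTruth - p) * a3' / m + lambda * softmaxTheta;

delta3 = -(softmaxTheta' * (groundTruth - p)) .* a3 .* (1 - a3);
delta2 = (W2' * delta3) .* a2 .* (1 - a2);

W2grad = delta3 * a2' / m;   b2grad = sum(delta3, 2) / m;
W1grad = delta2 * a1' / m;   b1grad = sum(delta2, 2) / m;
</pre>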
To help you check that your implementation is correct, you should also check your gradients on a small synthetic dataset. We have implemented <tt>checkStackedAECost.m</tt> to help you check your gradients. If this check passes, you will have implemented fine-tuning correctly.
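
For reference, the idea behind such a check can be sketched as below; <tt>costFunc</tt> is assumed to be any function handle returning <tt>[cost, grad]</tt> (this illustrates the technique, and is not the contents of <tt>checkStackedAECost.m</tt>):

<pre>
% Compare the analytic gradient against a centered finite difference.
epsilon = 1e-4;
numGrad = zeros(size(theta));
for i = 1:numel(theta)
    e = zeros(size(theta));
    e(i) = epsilon;
    numGrad(i) = (costFunc(theta + e) - costFunc(theta - e)) / (2 * epsilon);
end

[cost, grad] = costFunc(theta);

% Relative difference; should be very small (e.g. below 1e-9)
diff = norm(numGrad - grad) / norm(numGrad + grad);
disp(diff);
</pre>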
