Self-Taught Learning to Deep Networks

In this section, we describe how you can fine-tune and further improve the learned features using labeled data.  When you have a large amount of labeled training data, this can significantly improve your classifier's performance.

== Feature learning pipeline ==
In self-taught learning, we first trained a sparse autoencoder on the unlabeled data.  Then, given a new example <math>x</math>, we used the hidden layer of that autoencoder to extract a vector of features <math>a</math>.  The first layer of weights <math>W^{(1)}</math>, mapping from the input <math>x</math> to the hidden unit activations <math>a</math>, was learned as part of the sparse autoencoder training; the final classifier, mapping from the features <math>a</math> to the label <math>y</math> via a second layer of weights <math>W^{(2)}</math>, was
trained using logistic regression (or softmax regression).
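
To make this two-stage pipeline concrete, here is a minimal sketch in Python/NumPy.  It is not from the original text: the arrays <code>W1</code>, <code>b1</code> stand in for a sparse autoencoder's learned first-layer parameters, the data arrays are random placeholders, and scikit-learn's <code>LogisticRegression</code> plays the role of the second-stage classifier.

<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-ins for the first-layer parameters W^{(1)}, b^{(1)} that a sparse
# autoencoder would have learned from unlabeled data (random here purely
# for illustration).
rng = np.random.default_rng(0)
n_input, n_hidden = 64, 25
W1 = rng.normal(scale=0.1, size=(n_hidden, n_input))
b1 = np.zeros(n_hidden)

def extract_features(X):
    # Hidden-layer activations a = sigmoid(W^{(1)} x + b^{(1)})
    return sigmoid(X @ W1.T + b1)

# A (random placeholder) labeled training set {(x^{(i)}, y^{(i)})}.
X_labeled = rng.normal(size=(200, n_input))
y_labeled = rng.integers(0, 2, size=200)

# "Replacement" representation: the classifier is trained on the learned
# features a^{(i)} rather than on the raw inputs x^{(i)}.
A = extract_features(X_labeled)
clf = LogisticRegression().fit(A, y_labeled)
</pre>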
== Fine-tuning ==
But now, we notice that the form of our overall/final classifier is clearly just a whole big neural network.  So,
having trained up an initial set of parameters for our model (training the first layer using an
autoencoder, and the second layer using logistic/softmax regression), we can further fine-tune all of the parameters by performing gradient descent from this initial setting, so as to further reduce the training error on our labeled training set.  When fine-tuning is used, the initial unsupervised feature-learning step is often called pre-training.
The effect of fine-tuning is that the labeled data can be used to modify the weights <math>W^{(1)}</math> as
well, so that adjustments can be made to the features <math>a</math> extracted by the layer
of hidden units.
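
As a rough sketch of what one fine-tuning step might look like (again illustrative, not the tutorial's own code), the function below backpropagates the classification error through both layers, so that the labeled data updates <math>W^{(1)}, b^{(1)}</math> along with the softmax parameters <math>W^{(2)}, b^{(2)}</math>.  The learning rate, shapes, and one-hot label encoding are assumptions made for the example.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fine_tune_step(X, Y, W1, b1, W2, b2, lr=0.1):
    """One gradient-descent step on the whole network.
    W1, b1 come from the autoencoder (pre-training); W2, b2 from
    logistic/softmax regression; Y is one-hot, shape (m, n_classes)."""
    m = X.shape[0]
    # Forward pass: features a, then class probabilities p.
    a = sigmoid(X @ W1.T + b1)
    p = softmax(a @ W2.T + b2)
    # Backward pass for the cross-entropy loss.
    delta2 = (p - Y) / m                     # error at the output layer
    grad_W2 = delta2.T @ a
    grad_b2 = delta2.sum(axis=0)
    delta1 = (delta2 @ W2) * a * (1.0 - a)   # backprop through the sigmoid
    grad_W1 = delta1.T @ X                   # labeled error now reaches W^{(1)}
    grad_b1 = delta1.sum(axis=0)
    # Update every layer: this is what lets fine-tuning adjust the
    # features a, not just the classifier sitting on top of them.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    return W1, b1, W2, b2
</pre>

Repeating this step until the training error stops improving is exactly gradient descent on the whole network, started from the pre-trained parameters rather than from a random initialization.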
So far, we have described this process assuming that you used the "replacement" representation, where
the training examples seen by the classifier are of the form <math>(a^{(i)}, y^{(i)})</math>, rather than the "concatenation" representation, where the examples are of the form <math>((x^{(i)}, a^{(i)}), y^{(i)})</math>.
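
For contrast, here is how the two representations would be built, reusing the hypothetical <code>extract_features</code> helper and placeholder arrays from the first sketch above:

<pre>
import numpy as np

# "Replacement": the classifier sees only the learned features a^{(i)}.
A = extract_features(X_labeled)

# "Concatenation": the classifier sees the original input and the learned
# features side by side, i.e. (x^{(i)}, a^{(i)}).
XA = np.concatenate([X_labeled, A], axis=1)
</pre>

With the replacement representation, the overall input-to-output mapping is exactly the single feed-forward network that the fine-tuning step above trains.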
