Self-Taught Learning to Deep Networks

From Ufldl

In the previous section, you used an autoencoder to learn features that were then fed as input to a softmax or logistic regression classifier.  In that method, the features were learned using only unlabeled data.  In this section, we describe how to fine-tune and further improve the learned features using labeled data.  When you have a large amount of labeled training data, this can significantly improve your classifier's performance.
In self-taught learning, we first trained a sparse autoencoder on the unlabeled data.  Then, given a new example <math>\textstyle x</math>, we used the hidden layer to extract features <math>\textstyle a</math>.  This is illustrated in the following diagram:

[[File:STL_SparseAE_Features.png|300px]]
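Concretely, extracting the features <math>\textstyle a</math> is just a forward pass through the autoencoder's first (encoding) layer.  The following is a minimal NumPy sketch; the function name <code>extract_features</code> and the toy weights are illustrative placeholders, not trained values from the tutorial:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation used by the sparse autoencoder's hidden layer."""
    return 1.0 / (1.0 + np.exp(-z))

def extract_features(W1, b1, X):
    """Map inputs X (n_inputs x m examples) to hidden activations a.

    W1, b1 are the encoder weights/bias of an already-trained sparse
    autoencoder; a = sigmoid(W1 x + b1) is the learned representation
    that replaces the raw input x.
    """
    return sigmoid(W1 @ X + b1[:, None])

# Toy shapes: 64-dimensional inputs, 25 hidden units, 10 examples.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(25, 64))   # placeholder, not trained
b1 = np.zeros(25)
X = rng.normal(size=(64, 10))
A = extract_features(W1, b1, X)             # 25 x 10 matrix of features a
```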
We are interested in solving a classification task, where our goal is to predict labels <math>\textstyle y</math>.  Rather than training the classifier on the original inputs <math>\textstyle x</math>, we train it on the features <math>\textstyle a</math> computed by the sparse autoencoder.

To illustrate this step, similar to [[Neural Networks|our earlier notes]], we can draw our logistic regression unit (shown in orange) as follows:

::::[[File:STL_Logistic_Classifier.png|380px]]
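Training this logistic regression unit on the features <math>\textstyle a</math> can be sketched with plain gradient descent on the log-loss.  This is a minimal NumPy illustration (the helper name <code>train_logistic</code> and the toy data are assumptions, not part of the tutorial):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(A, y, lr=0.5, iters=2000):
    """Train a logistic regression unit on autoencoder features.

    A: (n_hidden x m) feature matrix, y: (m,) binary labels.
    Returns weights theta and bias b for P(y=1|a) = sigmoid(theta.a + b).
    """
    n, m = A.shape
    theta, b = np.zeros(n), 0.0
    for _ in range(iters):
        p = sigmoid(theta @ A + b)
        theta -= lr * (A @ (p - y)) / m    # gradient of the mean log-loss
        b -= lr * np.mean(p - y)
    return theta, b

# Toy features: labels determined by the first feature coordinate.
rng = np.random.default_rng(1)
A = rng.uniform(size=(5, 40))
y = (A[0] > 0.5).astype(float)
theta, b = train_logistic(A, y)
preds = (sigmoid(theta @ A + b) > 0.5).astype(float)
```

In practice you would use an off-the-shelf solver (e.g. a softmax classifier, as in the earlier exercises) rather than hand-rolled gradient descent; the point is only that the classifier sees <math>\textstyle a</math>, not <math>\textstyle x</math>.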
Now, consider the overall classifier (i.e., the input-output mapping) that we have learned using this method.

Fine-tuning is most useful when you have a large labeled training set; if you have only a relatively small labeled training set, then fine-tuning is significantly less likely to help.
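Fine-tuning treats the autoencoder's encoding layer plus the classifier as a single network and backpropagates the labeled loss through all the layers, so the learned features themselves get adjusted.  A minimal NumPy sketch of one such update follows; the function name <code>finetune_step</code>, the learning rate, and the toy data are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_step(W1, b1, theta, b2, X, y, lr=0.1):
    """One backpropagation step through the whole stack.

    Unlike the basic self-taught pipeline, the gradient of the labeled
    loss is propagated past the classifier (theta, b2) into the encoder
    weights (W1, b1), adjusting the features themselves.
    """
    m = X.shape[1]
    A = sigmoid(W1 @ X + b1[:, None])               # hidden features a
    p = sigmoid(theta @ A + b2)                     # classifier output
    delta2 = p - y                                  # output-layer error
    delta1 = np.outer(theta, delta2) * A * (1 - A)  # error at hidden layer
    theta -= lr * (A @ delta2) / m
    b2 -= lr * np.mean(delta2)
    W1 -= lr * (delta1 @ X.T) / m
    b1 -= lr * delta1.mean(axis=1)
    return W1, b1, theta, b2

# Toy labeled set: 8-dimensional inputs, labels set by the first coordinate.
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 30))
y = (X[0] > 0).astype(float)
W1, b1 = rng.normal(scale=0.1, size=(6, 8)), np.zeros(6)  # "pre-trained" stand-ins
theta, b2 = np.zeros(6), 0.0
for _ in range(3000):
    W1, b1, theta, b2 = finetune_step(W1, b1, theta, b2, X, y)
```

In a real pipeline <code>W1, b1</code> would come from the sparse autoencoder trained on unlabeled data, and fine-tuning would start from those values rather than from random ones.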
{{CNN}}
{{Languages|从自我学习到深层网络|中文}}

Latest revision as of 13:29, 7 April 2013
