Exercise:Independent Component Analysis

In the file <tt>[http://ufldl.stanford.edu/wiki/resources/independent_component_analysis_exercise.zip independent_component_analysis_exercise.zip]</tt> we have provided some starter code. You should write your code at the places indicated by "YOUR CODE HERE" in the files.
For this exercise, you will need to modify '''<tt>OrthonormalICACost.m</tt>''' and '''<tt>ICAExercise.m</tt>'''.
=== Dependencies ===
=== Step 3: Implement and check ICA cost functions ===
In this step, you should implement the ICA cost function:
<tt>orthonormalICACost</tt> in <tt>orthonormalICACost.m</tt>, which computes the cost and gradient for the orthonormal ICA objective. Note that the orthonormality constraint is '''not''' enforced in the cost function. It will be enforced by a projection in the gradient descent step, which you will have to complete in step 4.
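As a rough guide, the sketch below shows what the cost and gradient computation might look like. It assumes a smoothed L1 penalty <math>\sqrt{s^2 + \epsilon}</math> and illustrative variable names, so the exact interface of the starter code may differ:

<pre>
% Illustrative sketch only -- the function signature and variable names are
% assumptions and may not match the starter code.
function [cost, grad] = orthonormalICACost(theta, visibleSize, numFeatures, patches, epsilon)
    m = size(patches, 2);                          % number of training patches
    W = reshape(theta, numFeatures, visibleSize);  % one feature per row of W

    WX = W * patches;                              % feature activations
    smoothed = sqrt(WX .^ 2 + epsilon);            % smooth approximation to |W x|

    cost = sum(smoothed(:)) / m;                   % sparsity objective
    Wgrad = ((WX ./ smoothed) * patches') / m;     % gradient of the smoothed L1 penalty
    grad = Wgrad(:);                               % unroll into a vector for the optimizer
end
</pre>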
When you have implemented the cost function, you should check the gradients numerically.
'''Hint''' - if you are having difficulties deriving the gradients, you may wish to consult the page on [[deriving gradients using the backpropagation idea]].
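If you do not already have a gradient checker at hand, a centered-difference check on a small random problem, along the lines of the sketch below (which reuses the hypothetical interface from the sketch above), is usually enough to catch mistakes:

<pre>
% Hypothetical finite-difference gradient check on a tiny random problem.
visibleSize = 8; numFeatures = 5; epsilon = 1e-2;
patches = randn(visibleSize, 10);
theta = randn(numFeatures * visibleSize, 1) * 0.01;

[~, grad] = orthonormalICACost(theta, visibleSize, numFeatures, patches, epsilon);

numGrad = zeros(size(theta));
h = 1e-5;
for i = 1:numel(theta)
    e = zeros(size(theta)); e(i) = h;
    numGrad(i) = (orthonormalICACost(theta + e, visibleSize, numFeatures, patches, epsilon) ...
                - orthonormalICACost(theta - e, visibleSize, numFeatures, patches, epsilon)) / (2 * h);
end

% The relative difference should be very small (on the order of 1e-9 or less).
disp(norm(numGrad - grad) / norm(numGrad + grad));
</pre>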
=== Step 4: Optimization ===
In step 4, you will optimize for the orthonormal ICA objective using gradient descent with backtracking line search (the code for which has already been provided for you; for more details on the backtracking line search, you may wish to consult the [[Exercise:Independent Component Analysis#Appendix|appendix]] of this exercise). The orthonormality constraint should be enforced with a projection, which you should fill in.
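A common choice for this projection is the symmetric orthogonalization <math>W \leftarrow (WW^T)^{-\frac{1}{2}}W</math>, which maps the current weight matrix back onto the set of matrices satisfying <math>WW^T = I</math>. A minimal sketch, where the variable name <tt>weightMatrix</tt> is an assumption and may differ from the starter code:

<pre>
% Project the weight matrix back onto the constraint set { W : W * W' = I }
% using symmetric orthogonalization.  'weightMatrix' is an assumed name.
weightMatrix = (weightMatrix * weightMatrix') ^ (-0.5) * weightMatrix;

% An equivalent alternative via the singular value decomposition:
% [U, S, V] = svd(weightMatrix, 'econ');
% weightMatrix = U * V';
</pre>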
Once you have filled in the code for the projection, check that it is correct by using the verification code provided. Once you have verified that your projection is correct, comment out the verification code and run the optimization. 1000 iterations of gradient descent should take less than 15 minutes, and produce a basis which looks like the following:
[[File:OrthonormalICAFeatures.png | 350px]]
It is comparatively difficult to optimize for the objective while enforcing the orthonormality constraint using gradient descent, and convergence can be slow. Hence, in situations where an orthonormal basis is not required, other faster methods of learning bases (such as [[Sparse Coding: Autoencoder Interpretation | sparse coding]]) may be preferable.
=== Appendix ===
The backtracking line search used in the exercise is based on the one in [http://www.stanford.edu/~boyd/cvxbook/ Convex Optimization by Boyd and Vandenberghe]. In the backtracking line search, given a descent direction <math>\vec{u}</math> (in this exercise we use <math>\vec{u} = -\nabla f(\vec{x})</math>), we want to find a step size <math>t</math> that gives us a sufficiently steep descent. The general idea is to use a linear approximation (the first-order Taylor approximation) to the function <math>f</math> at the current point <math>\vec{x}</math>, and to search for a step size <math>t</math> such that we can decrease the function's value by more than <math>\alpha</math> times the decrease predicted by the linear approximation, where <math>\alpha \in (0, 0.5)</math>. For more details, you may wish to consult [http://www.stanford.edu/~boyd/cvxbook/ the book].
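As an illustration, one gradient-descent step with this backtracking line search might look like the sketch below; <tt>costFn</tt>, <tt>alpha</tt> and <tt>beta</tt> are illustrative names, not the starter code's:

<pre>
% Illustrative backtracking line search along the negative gradient.
alpha = 0.3;    % required fraction of the linearly-predicted decrease, in (0, 0.5)
beta  = 0.5;    % factor by which the step size is shrunk on each backtrack
t     = 1;      % initial step size

[f0, g] = costFn(x);    % costFn is assumed to return [cost, gradient]
u = -g;                 % descent direction

% Shrink t until the sufficient-decrease condition holds.
while costFn(x + t * u) > f0 + alpha * t * (g' * u)
    t = beta * t;
end

x = x + t * u;          % take the accepted step
</pre>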
However, it is not necessary to use the backtracking line search here. Gradient descent with a small step size, or backtracking to a step size such that the objective decreases, is sufficient for this exercise.
