Sparse Coding: Autoencoder Interpretation
== Sparse coding ==

In the sparse autoencoder, we tried to learn a set of weights <math>W</math> (and associated biases <math>b</math>) that would give us sparse features <math>\sigma(Wx + b)</math> useful in reconstructing the input <math>x</math>.

[[File:STL_SparseAE.png | 240px]]

Sparse coding can be seen as a modification of the sparse autoencoder method in which we try to learn the set of features for some data "directly". Together with an associated basis for transforming the learned features from the feature space to the data space, we can then reconstruct the data from the learned features.

Formally, in sparse coding, we have some data <math>x</math> we would like to learn features on. In particular, we would like to learn <math>s</math>, a set of sparse features useful for representing the data, and <math>A</math>, a basis for transforming the features from the feature space to the data space. Our objective function is hence:

:<math> J(A, s) = \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 </math>

(If you are unfamiliar with the notation, <math>\lVert x \rVert_k</math> refers to the L<math>k</math> norm of the vector <math>x</math>, which is equal to <math>\left( \sum_i{ \left| x_i \right|^k } \right)^{\frac{1}{k}}</math>. The L2 norm is the familiar Euclidean norm, while the L1 norm is the sum of the absolute values of the elements of the vector.)

The first term is the error in reconstructing the data from the features using the basis, and the second term is a sparsity penalty term that encourages the learned features to be sparse.

However, the objective function as it stands is not properly constrained - it is possible to reduce the sparsity cost (the second term) by scaling <math>A</math> by some constant and scaling <math>s</math> by the inverse of the same constant, without changing the reconstruction error. Hence, we include the additional constraint that for every column <math>A_j</math> of <math>A</math>, <math>A_j^TA_j \le 1</math>.
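As a concrete illustration, the objective above is just a few lines of NumPy. This is a minimal sketch, not part of the original tutorial; the function and variable names are illustrative.

```python
import numpy as np

def sparse_coding_cost(A, s, x, lam):
    """Basic sparse coding objective: reconstruction error plus L1 sparsity penalty."""
    residual = A @ s - x                      # reconstruction error As - x
    reconstruction = np.sum(residual ** 2)    # squared L2 norm of the residual
    sparsity = lam * np.sum(np.abs(s))        # L1 penalty on the features
    return reconstruction + sparsity

# Tiny example: 4-dimensional data represented with 6 features.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))               # basis (data space <- feature space)
s = np.zeros(6)
s[2] = 1.0                                    # a very sparse feature vector
x = A @ s                                     # data exactly reconstructable from s

# Reconstruction term is exactly zero here, so the cost is just the
# sparsity penalty lam * |s|_1 = 0.1.
print(sparse_coding_cost(A, s, x, lam=0.1))
```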
Our problem is thus:

:<math> \begin{array}{rcl} {\rm minimize} & \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 \\ {\rm s.t.} & A_j^TA_j \le 1 \; \forall j \\ \end{array} </math>

Unfortunately, the objective function is non-convex in <math>A</math> and <math>s</math> jointly, and hence hard to optimize well using gradient-based methods. However, given <math>A</math>, the problem of finding the <math>s</math> that minimizes <math>J(A, s)</math> is convex. Similarly, given <math>s</math>, the problem of finding the <math>A</math> that minimizes <math>J(A, s)</math> is also convex. This suggests alternately optimizing for <math>A</math> with <math>s</math> fixed, and then for <math>s</math> with <math>A</math> fixed. It turns out that this works quite well in practice.

However, the form of our problem presents another difficulty - the constraint that <math>A_j^TA_j \le 1 \; \forall j</math> cannot be enforced using simple gradient-based methods. Hence, in practice, this constraint is weakened to a "weight decay" term designed to keep the entries of <math>A</math> small. This gives us a new objective function:

:<math> J(A, s) = \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 + \gamma \lVert A \rVert_2^2 </math>

(Note that the third term, <math>\lVert A \rVert_2^2</math>, is the sum of the squares of the entries of <math>A</math>, or <math>\sum_r{\sum_c{A_{rc}^2}}</math>.)

This objective function presents one last problem - the L1 norm is not differentiable at 0, and hence poses a problem for gradient-based methods. While the problem can be solved using other, non-gradient-based methods, we will instead "smooth out" the L1 norm using an approximation that allows us to use gradient descent.
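The scaling degeneracy and the weight-decay fix can both be seen numerically. The sketch below (illustrative names, not from the tutorial) shows that with no decay term, scaling <math>A</math> up and <math>s</math> down by the same factor strictly lowers the cost, while a positive <math>\gamma</math> blocks this escape because the decay term grows quadratically in the scale factor.

```python
import numpy as np

def cost_with_decay(A, s, x, lam, gamma):
    """Objective with weight decay replacing the norm constraint on A's columns."""
    return (np.sum((A @ s - x) ** 2)
            + lam * np.sum(np.abs(s))
            + gamma * np.sum(A ** 2))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
s = rng.standard_normal(6)
x = rng.standard_normal(4)
c = 10.0  # scale factor: A -> cA, s -> s/c leaves As unchanged

# Without decay (gamma = 0), rescaling shrinks the L1 penalty for free:
assert (cost_with_decay(A * c, s / c, x, lam=0.1, gamma=0.0)
        < cost_with_decay(A, s, x, lam=0.1, gamma=0.0))

# With gamma > 0, the decay term grows like c**2 and the rescaled
# solution is now worse, so the degeneracy is removed:
assert (cost_with_decay(A * c, s / c, x, lam=0.1, gamma=0.1)
        > cost_with_decay(A, s, x, lam=0.1, gamma=0.1))
```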
To "smooth out" the L1 norm, we use <math>\sqrt{x^2 + \epsilon}</math> in place of <math>\left| x \right|</math>, where <math>\epsilon</math> is a "smoothing parameter" which can also be interpreted as a sort of "sparsity parameter" (to see this, observe that when <math>\epsilon</math> is large compared to <math>x^2</math>, the term <math>x^2 + \epsilon</math> is dominated by <math>\epsilon</math>, and taking the square root yields approximately <math>\sqrt{\epsilon}</math>). This "smoothing" will come in handy later when considering topographic sparse coding below.

Our final objective function is hence:

:<math> J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sqrt{s^2 + \epsilon} + \gamma \lVert A \rVert_2^2 </math>

(where <math>\sqrt{s^2 + \epsilon}</math> is shorthand for <math>\sum_k{\sqrt{s_k^2 + \epsilon}}</math>)

This objective function can then be optimized iteratively, using the following procedure:

<ol>
<li>Initialize <math>A</math> randomly</li>
<li>Repeat until convergence:
<ol>
<li>Find the <math>s</math> that minimizes <math>J(A, s)</math> for the <math>A</math> found in the previous step</li>
<li>Find the <math>A</math> that minimizes <math>J(A, s)</math> for the <math>s</math> found in the previous step</li>
</ol>
</li>
</ol>

Observe that with our modified objective function, the objective given <math>s</math>, that is, <math>J(A; s) = \lVert As - x \rVert_2^2 + \gamma \lVert A \rVert_2^2</math> (the L1 term in <math>s</math> can be omitted since it is not a function of <math>A</math>), is simply quadratic in <math>A</math>, and hence has an easily derived analytic solution for <math>A</math>. A quick way to derive this solution is to use matrix calculus - some pages about matrix calculus can be found in the [[Useful Links | useful links]] section.
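Both halves of the alternating procedure can be sketched concretely. Setting the gradient of the quadratic objective in <math>A</math> to zero gives the closed form <math>A = xs^T(ss^T + \gamma I)^{-1}</math>, and the gradient of the smoothed objective with respect to <math>s</math> is <math>2A^T(As - x) + \lambda s / \sqrt{s^2 + \epsilon}</math> (elementwise in the last term). The following is a hedged NumPy sketch, not the tutorial's own code; it verifies both expressions numerically.

```python
import numpy as np

def solve_A(x, s, gamma):
    """A-step: closed-form minimizer of ||A s - x||^2 + gamma * ||A||_F^2.

    Setting the gradient 2 (A s - x) s^T + 2 gamma A to zero gives
    A = x s^T (s s^T + gamma I)^{-1}.  (With a batch of examples as the
    columns of matrices x and s, the same formula applies unchanged.)
    """
    k = s.shape[0]
    return np.outer(x, s) @ np.linalg.inv(np.outer(s, s) + gamma * np.eye(k))

def cost_s(A, s, x, lam, eps):
    """Smoothed objective as a function of s (the weight-decay term is constant)."""
    return np.sum((A @ s - x) ** 2) + lam * np.sum(np.sqrt(s ** 2 + eps))

def grad_s(A, s, x, lam, eps):
    """Gradient of cost_s: reconstruction term plus smoothed-L1 term."""
    return 2 * A.T @ (A @ s - x) + lam * s / np.sqrt(s ** 2 + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
s = rng.standard_normal(16)
lam, eps, gamma = 0.1, 1e-2, 0.01

# The closed-form A-step satisfies first-order optimality (gradient ~ 0):
A = solve_A(x, s, gamma)
grad_A = 2 * np.outer(A @ s - x, s) + 2 * gamma * A
assert np.allclose(grad_A, 0.0)

# The analytic s-gradient matches central finite differences:
g = grad_s(A, s, x, lam, eps)
for i in range(len(s)):
    d = np.zeros(len(s))
    d[i] = 1e-6
    numeric = (cost_s(A, s + d, x, lam, eps) - cost_s(A, s - d, x, lam, eps)) / 2e-6
    assert abs(numeric - g[i]) < 1e-4
```

The s-step would then run gradient descent (or a quasi-Newton method) on `grad_s` with `A` held fixed.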
Unfortunately, the objective function given <math>A</math> does not have a similarly nice analytic solution, so that minimization step will have to be carried out using gradient descent or similar optimization methods.

In theory, optimizing this objective function using the iterative method above should (eventually) yield features (the basis vectors of <math>A</math>) similar to those learned using the sparse autoencoder. In practice, however, quite a few tricks are required for good convergence of the algorithm, and these tricks are described in greater detail in [[Exercise:Sparse Coding | the sparse coding exercise]]. Deriving the gradients for the objective function may be slightly tricky as well; using matrix calculus or [[Deriving gradients using backpropagation | the backpropagation intuition]] can be helpful.

== Topographic sparse coding ==

With sparse coding, we can learn a set of features useful for representing the data. However, drawing inspiration from the brain, we would like to learn a set of features that are "orderly" in some manner. For instance, consider visual features. As suggested earlier, the V1 cortex of the brain contains neurons which detect edges at particular orientations. However, these neurons are also organized into hypercolumns in which adjacent neurons detect edges at similar orientations. One neuron could detect a horizontal edge, its neighbors edges oriented slightly off the horizontal, and, moving further along the hypercolumn, the neurons detect edges oriented further off the horizontal.

Inspired by this example, we would like to learn features which are similarly "topographically ordered". What does this imply for our learned features? Intuitively, if "adjacent" features are "similar", we would expect that if one feature is activated, its neighbors will also be activated to a lesser extent.

Concretely, suppose we (arbitrarily) organized our features into a square matrix.
We would then like adjacent features in the matrix to be similar. The way this is accomplished is to group adjacent features together in the smoothed L1 penalty, so that instead of, say, <math>\sqrt{s_{1,1}^2 + \epsilon}</math>, we use, say, <math>\sqrt{s_{1,1}^2 + s_{1,2}^2 + s_{1,3}^2 + s_{2,1}^2 + s_{2,2}^2 + s_{2,3}^2 + s_{3,1}^2 + s_{3,2}^2 + s_{3,3}^2 + \epsilon}</math> if we group in 3x3 regions. The grouping is usually overlapping, so that the 3x3 region starting at the 1st row and 1st column is one group, the 3x3 region starting at the 1st row and 2nd column is another group, and so on. Further, the grouping is also usually done wrapping around, as if the matrix were a torus, so that every feature is counted an equal number of times.

Hence, in place of the smoothed L1 penalty, we use the sum of smoothed L1 penalties over all the groups, so our new objective function is:

:<math> J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sum_{\text{all groups } g}{\sqrt{ \left( \sum_{\text{all } s \in g}{s^2} \right) + \epsilon} } + \gamma \lVert A \rVert_2^2 </math>

In practice, the "grouping" can be accomplished using a "grouping matrix" <math>V</math>, such that the <math>r</math>th row of <math>V</math> indicates which features are grouped in the <math>r</math>th group, so <math>V_{r, c} = 1</math> if group <math>r</math> contains feature <math>c</math>. Thinking of the grouping as being achieved by a grouping matrix makes the computation of the gradients more intuitive. Using this grouping matrix, the objective function can be rewritten as:

:<math> J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sum{ \sqrt{V(s \circ s) + \epsilon} } + \gamma \lVert A \rVert_2^2 </math>

(where <math>s \circ s</math> denotes the elementwise square of <math>s</math>, so that <math>\left( V(s \circ s) \right)_r = \sum_c{V_{r,c} s_c^2}</math> is the sum of squares of the features in group <math>r</math>, and <math>\sum{\sqrt{V(s \circ s) + \epsilon}}</math> denotes the sum of the entries of the vector <math>\sqrt{V(s \circ s) + \epsilon}</math>, the square root taken elementwise)

This objective function can then be optimized using the iterative method described in the earlier section.
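Building the grouping matrix for overlapping, wrap-around 3x3 regions is mechanical; here is a small sketch (illustrative names, not the tutorial's code) that constructs <math>V</math> for a <math>d \times d</math> feature grid and evaluates the grouped penalty.

```python
import numpy as np

def grouping_matrix(d, region=3):
    """Build the 0/1 grouping matrix V for a d x d grid of features.

    Row r of V marks the features belonging to one overlapping
    region x region group; indices wrap around the grid edges as if the
    grid were a torus, so every feature appears in exactly region**2 groups.
    """
    V = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            r = i * d + j                      # one group per grid position
            for di in range(region):
                for dj in range(region):
                    c = ((i + di) % d) * d + (j + dj) % d
                    V[r, c] = 1.0
    return V

def topographic_penalty(V, s, eps):
    """Sum of smoothed L1 penalties over all groups: sum_r sqrt((V s^2)_r + eps)."""
    return np.sum(np.sqrt(V @ (s ** 2) + eps))

V = grouping_matrix(4)                          # 16 features on a 4 x 4 grid
assert V.sum(axis=1).tolist() == [9.0] * 16     # each group contains 3 x 3 features
assert V.sum(axis=0).tolist() == [9.0] * 16     # each feature lies in 9 groups
```

The gradient of this penalty with respect to <math>s</math> then involves <math>V^T</math>, which is what makes the grouping-matrix view convenient for implementation.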
Topographic sparse coding will learn features similar to those learned by sparse coding, except that the features will now be "ordered" in some way.