# Sparse Coding: Autoencoder Interpretation

## Sparse coding

In the sparse autoencoder, we tried to learn a set of weights $W$ (and associated biases $b$) that would give us sparse features $\sigma(Wx + b)$ useful for reconstructing the input $x$.

[[File:STL_SparseAE.png | 240px]]

Sparse coding can be seen as a modification of the sparse autoencoder in which we try to learn the set of features for some data directly. Together with an associated basis for transforming the learned features from the feature space to the data space, we can then reconstruct the data from the learned features.

Formally, in sparse coding we have some data $x$ we would like to learn features on. In particular, we would like to learn $s$, a set of sparse features useful for representing the data, and $A$, a basis for transforming the features from the feature space to the data space. Our objective function is hence:

$$J(A, s) = \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1$$

(If the notation is unfamiliar, $\lVert x \rVert_k$ refers to the L$k$ norm of $x$, which is equal to $\left( \sum{ \left| x_i \right|^k } \right)^{\frac{1}{k}}$. The L2 norm is the familiar Euclidean norm, while the L1 norm is the sum of the absolute values of the elements of the vector.)

The first term is the error in reconstructing the data from the features using the basis, and the second term is a sparsity penalty term to encourage the learned features to be sparse.
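To make the objective concrete, here is a minimal numerical sketch; the dimensions, $\lambda$, and data below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 8, 16          # data dimension and number of basis vectors (hypothetical sizes)
lam = 0.1             # sparsity weight lambda (hypothetical value)

x = rng.normal(size=n)        # one data example
A = rng.normal(size=(n, k))   # basis: feature space -> data space
s = rng.normal(size=k)        # sparse feature vector

def J(A, s, x, lam):
    """Reconstruction error plus L1 sparsity penalty."""
    return np.sum((A @ s - x) ** 2) + lam * np.sum(np.abs(s))

print(J(A, s, x, lam))
```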
However, the objective function as it stands is not properly constrained: it is possible to reduce the sparsity cost (the second term) by scaling $A$ by some constant and scaling $s$ by the inverse of the same constant, without changing the error. Hence, we include the additional constraint that for every column $A_j$ of $A$, $A_j^T A_j \le 1$. Our problem is then:

$$\begin{align}
\text{minimize}_{A,s} \; & \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 \\
\text{s.t.} \; & A_j^T A_j \le 1 \; \forall j
\end{align}$$

Unfortunately, the objective function is non-convex, and hence cannot be optimized well using gradient-based methods directly. However, for a fixed $A$, the problem of finding the $s$ that minimizes $J(A, s)$ is convex; similarly, for a fixed $s$, the problem of finding the $A$ that minimizes $J(A, s)$ is also convex. This suggests alternately optimizing $s$ with $A$ fixed and $A$ with $s$ fixed, and in practice this strategy works quite well.
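The scaling degeneracy described above is easy to check numerically: multiplying $A$ by a constant $c$ and dividing $s$ by the same $c$ leaves the reconstruction $As$ unchanged while shrinking the L1 penalty by a factor of $c$. A small sketch with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 16))   # hypothetical basis
s = rng.normal(size=16)        # feature vector
x = rng.normal(size=8)         # data example
lam, c = 0.1, 10.0

A2, s2 = c * A, s / c          # rescaled pair

recon  = np.sum((A @ s - x) ** 2)
recon2 = np.sum((A2 @ s2 - x) ** 2)

print(np.isclose(recon, recon2))   # reconstruction error unchanged: True
print(lam * np.sum(np.abs(s)))     # original sparsity cost
print(lam * np.sum(np.abs(s2)))    # c times smaller
```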
However, the form of our problem presents another difficulty: the constraint that $A_j^T A_j \le 1 \; \forall j$ cannot be enforced using simple gradient-based methods. Hence, in practice, the constraint is weakened to a "weight decay" term designed to keep the entries of $A$ small. This gives us a new objective function:

$$J(A, s) = \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 + \gamma \lVert A \rVert_2^2$$

(Note that the third term, $\lVert A \rVert_2^2$, is simply the sum of squares of the entries of $A$, i.e. $\sum_r{\sum_c{A_{rc}^2}}$.)

This objective function presents one last problem: the L1 norm is not differentiable at 0, which poses a problem for gradient-based methods. While the problem could be solved with other, non-gradient-based methods, we will instead "smooth out" the L1 norm using an approximation that allows us to use gradient descent.
To "smooth out" the L1 norm, we use $\sqrt{x^2 + \epsilon}$ in place of $\left| x \right|$, where $\epsilon$ is a "smoothing parameter" that can also be interpreted as a sort of "sparsity parameter" (to see this, observe that when $\epsilon$ is large compared to $x^2$, the sum $x^2 + \epsilon$ is dominated by $\epsilon$, and taking the square root yields approximately $\sqrt{\epsilon}$). This "smoothing" will come in handy later when considering topographic sparse coding below.

Our final objective function is hence:

$$J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sqrt{s^2 + \epsilon} + \gamma \lVert A \rVert_2^2$$

(where $\sqrt{s^2 + \epsilon}$ is shorthand for $\sum_k{\sqrt{s_k^2 + \epsilon}}$)

This objective function can then be optimized iteratively, using the following procedure:
1. Initialize $A$ randomly
2. Repeat until convergence:
   1. Find the $s$ that minimizes $J(A, s)$ for the $A$ found in the previous step
   2. Solve for the $A$ that minimizes $J(A, s)$ for the $s$ found in the previous step
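Step 2a, minimizing $J(A, s)$ over $s$ for a fixed $A$, can be sketched with plain gradient descent on the smoothed objective; the gradient of $\lambda\sqrt{s^2 + \epsilon}$ is $\lambda s / \sqrt{s^2 + \epsilon}$ elementwise. The step size and iteration count below are hypothetical choices, not values from the text:

```python
import numpy as np

def solve_s(A, x, lam, eps=1e-6, lr=1e-3, iters=500):
    """Gradient descent on J(s; A) = ||As - x||^2 + lam * sum(sqrt(s^2 + eps))."""
    s = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ s - x) + lam * s / np.sqrt(s ** 2 + eps)
        s = s - lr * grad
    return s

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 16))   # hypothetical basis
x = rng.normal(size=8)         # one example
s = solve_s(A, x, lam=0.1)
print(np.sum((A @ s - x) ** 2))  # reconstruction error after the s-step
```

In a full implementation this step would be run with a proper line search or an off-the-shelf optimizer rather than a fixed step size.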
Observe that with our modified objective function, the objective given $s$, that is $J(A; s) = \lVert As - x \rVert_2^2 + \gamma \lVert A \rVert_2^2$ (the L1 term in $s$ can be omitted since it is not a function of $A$), is simply quadratic in $A$, and hence has an easily derivable analytic solution in $A$. A quick way to derive this solution is matrix calculus; some pages about matrix calculus can be found in the [[Useful Links | useful links]] section. Unfortunately, the objective given $A$ does not have a similarly nice analytic solution, so that minimization step has to be carried out using gradient descent or similar optimization methods.
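Step 2b illustrates this analytic solution: for a batch of examples $X$ with features $S$ stacked as columns, setting the gradient of $\lVert AS - X \rVert_2^2 + \gamma \lVert A \rVert_2^2$ to zero gives $A = X S^T (S S^T + \gamma I)^{-1}$, a standard matrix-calculus derivation. The dimensions below are hypothetical:

```python
import numpy as np

def solve_A(X, S, gamma):
    """Closed-form minimizer of ||AS - X||_F^2 + gamma * ||A||_F^2 over A."""
    k = S.shape[0]
    return X @ S.T @ np.linalg.inv(S @ S.T + gamma * np.eye(k))

rng = np.random.default_rng(3)
n, k, m = 8, 16, 100            # data dim, feature count, batch size (hypothetical)
X = rng.normal(size=(n, m))     # examples stacked as columns
S = rng.normal(size=(k, m))     # corresponding feature vectors
gamma = 0.01

A = solve_A(X, S, gamma)
grad = 2 * (A @ S - X) @ S.T + 2 * gamma * A  # gradient of the objective at A
print(np.max(np.abs(grad)))                   # vanishes up to floating point
```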
In theory, optimizing this objective function with the iterative method above should (eventually) yield features (the basis vectors of $A$) similar to those learned with the sparse autoencoder. In practice, however, quite a few tricks are required for good convergence of the algorithm; these are described in greater detail in the later section on [[ Sparse Coding: Autoencoder Interpretation#Sparse coding in practice | sparse coding in practice]]. Deriving the gradients for the objective function can also be slightly tricky; using matrix calculus or [[Deriving gradients using the backpropagation idea | using the backpropagation intuition]] can be helpful.