独立成分分析

'''原文''':
If you recall, in [[Sparse Coding | sparse coding]], we wanted to learn an '''over-complete''' basis for the data. In particular, this implies that the basis vectors that we learn in sparse coding will not be linearly independent. While this may be desirable in certain situations, sometimes we want to learn a linearly independent basis for the data. In independent component analysis (ICA), this is exactly what we want to do. Further, in ICA, we want to learn not just any linearly independent basis, but an '''orthonormal''' basis for the data. (An orthonormal basis is a basis <math>(\phi_1, \ldots \phi_n)</math> such that <math>\phi_i \cdot \phi_j = 0</math> if <math>i \ne j</math> and <math>1</math> if <math>i = j</math>).
'''译文''':
如果你还记得,在稀疏编码中我们希望为数据学习一个过完备基(over-complete basis)。具体来说,这意味着我们在稀疏编码中学习的基向量不一定是线性独立的。虽然在某些情况下这是可以的,但有时我们希望学习一个线性独立基。这正是我们在独立成分分析(ICA)中要做的。而且,在ICA中,我们希望学习的不仅是线性独立基,而且是标准正交基。(一个标准正交基是一个基 <math>(\phi_1, \ldots, \phi_n)</math>,满足 <math>\phi_i \cdot \phi_j = \begin{cases} 0 & i \ne j \\ 1 & i = j \end{cases}</math>)。
'''一审''':
如果你还记得,在稀疏编码中我们希望为数据学习一个过完备基(over-complete basis)。具体来说,这意味着在稀疏编码中学习到的基向量之间不一定线性独立。尽管在某些情况下这已经满足需要,但有时我们仍然希望得到一组线性独立基。例如在独立成分分析(ICA)中,这正是我们想要的。而且,在ICA中,我们希望学习到的基不仅要线性独立,而且还是一组标准正交基。(一个标准正交基 <math>(\phi_1, \ldots, \phi_n)</math> 需要满足条件:<math>\phi_i \cdot \phi_j = \begin{cases} 0 & i \ne j \\ 1 & i = j \end{cases}</math>)。
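
As a quick numerical check of this condition (an illustration of ours, not from the original text), the rows of any 2D rotation matrix form an orthonormal basis, as the following sketch verifies:

<pre>
# Illustrative check: the rows of a rotation matrix are an orthonormal basis,
# i.e. phi_i . phi_j is 0 for i != j and 1 for i == j.
import numpy as np

theta = 0.3                                        # an arbitrary angle
phi = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])  # basis vectors as rows

print(np.round(phi @ phi.T, 6))                    # prints the identity matrix
</pre>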
'''原文''':
Like sparse coding, independent component analysis has a simple mathematical formulation. Given some data <math>x</math>, we would like to learn a set of basis vectors which we represent in the columns of a matrix <math>W</math>, such that, firstly, as in sparse coding, our features are '''sparse'''; and secondly, our basis is an '''orthonormal''' basis. (Note that while in sparse coding, our matrix <math>A</math> was for mapping '''features''' <math>s</math> to '''raw data''', in independent component analysis, our matrix <math>W</math> works in the opposite direction, mapping '''raw data''' <math>x</math> to '''features''' instead). This gives us the following objective function:

:<math>
J(W) = \lVert Wx \rVert_1
</math>
'''译文''':

'''一审''':
和稀疏编码类似,独立成分分析也有一个简单的数学形式。给定数据x,我们希望学习到一组基向量,以列向量的形式构成矩阵W,其满足以下特点:首先,和稀疏编码一样,特征是稀疏的;其次,基是标准正交的(注意,在稀疏编码中,矩阵A用于将特征s映射到原始数据,而在独立成分分析中,矩阵W工作的方向相反,是将原始数据x映射到特征)。这样我们得到以下目标函数:
:<math>
J(W) = \lVert Wx \rVert_1
</math>
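
To make the objective concrete, here is a small illustration of ours (the matrix and data point are made up, not from the text) evaluating <math>J(W) = \lVert Wx \rVert_1</math> for one example:

<pre>
# Evaluate J(W) = ||W x||_1 for a single raw data point x. W maps raw data
# to features, so the L1 norm of W x is a sparsity penalty on the features.
import numpy as np

W = np.array([[0.6,  0.8],
              [0.8, -0.6]])        # an orthonormal 2x2 W (rows are a basis)
x = np.array([0.5, -2.0])          # one raw data point (illustrative)

J = np.abs(W @ x).sum()            # W x = [-1.3, 1.6], so J = 1.3 + 1.6
print(J)                           # 2.9
</pre>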
'''原文''':
This objective function is equivalent to the sparsity penalty on the features <math>s</math> in sparse coding, since <math>Wx</math> is precisely the features that represent the data. Adding in the orthonormality constraint gives us the full optimization problem for independent component analysis:

:<math>
\begin{array}{rcl}
    {\rm minimize} & \lVert Wx \rVert_1  \\
    {\rm s.t.}    & WW^T = I \\
\end{array}
</math>
'''译文''':
这个目标函数等价于在稀疏编码中施加在特征s上的稀疏性惩罚,因为Wx正是表达数据的特征。我们加入标准正交约束来求解独立成分分析的全优化问题:
:<math>
\begin{array}{rcl}
    {\rm minimize} & \lVert Wx \rVert_1  \\
    {\rm s.t.}    & WW^T = I \\
\end{array}
</math>
'''一审''':
由于Wx实际上是描述数据的特征,这个目标函数等价于在稀疏编码中在特征s上加上稀疏惩罚。加入标准正交性约束后,独立成分分析相当于求解如下优化问题:
:<math>
\begin{array}{rcl}
    {\rm minimize} & \lVert Wx \rVert_1  \\
    {\rm s.t.}    & WW^T = I \\
\end{array}
</math>
'''原文''':
As is usually the case in deep learning, this problem has no simple analytic solution, and to make matters worse, the orthonormality constraint makes it slightly more difficult to optimize for the objective using gradient descent - every iteration of gradient descent must be followed by a step that maps the new basis back to the space of orthonormal bases (hence enforcing the constraint).  
'''译文''':
与在deep learning中的通常情况一样,这个问题没有简单的解析解,而且情况变得更糟,正交规范化约束使得难以用梯度下降来优化求解目标——每次梯度下降迭代必须跟着一步将新的基映射回正交规范基空间中(因此来强制约束)。
'''一审''':
与在deep learning中的通常情况一样,这个问题没有简单的解析解。更糟糕的是,标准正交性约束使得用梯度下降方法求解该问题变得更加困难:每次梯度下降迭代之后,必须将新的基映射回标准正交基空间中(以此保证正交性约束)。
'''原文''':
In practice, optimizing for the objective function while enforcing the orthonormality constraint (as described in [[Independent Component Analysis#Orthonormal ICA | Orthonormal ICA]] section below) is feasible but slow. Hence, the use of orthonormal ICA is limited to situations where it is important to obtain an orthonormal basis ([[TODO]]: what situations) .
'''译文''':
在实践中,对目标函数优化的同时施加标准正交约束(正如在Orthonormal ICA 中描述的)是可行的,但是速度慢。因此,标准正交ICA的使用受限于必须得到一个标准正交基的条件。(TODO:  什么条件)
'''一审''':
在实践中,在优化目标函数的同时施加标准正交性约束(如下一节Orthonormal ICA中所述)是可行的,但是速度较慢。因此,标准正交ICA仅用于那些必须获得标准正交基的情况。(TODO: 什么情况)
== Orthonormal ICA 标准正交ICA ==

'''原文''':
The orthonormal ICA objective is:

:<math>
\begin{array}{rcl}
    {\rm minimize} & \lVert Wx \rVert_1 \\
    {\rm s.t.}    & WW^T = I \\
\end{array}
</math>

Observe that the constraint <math>WW^T = I</math> implies two other constraints.
'''译文''':
标准正交ICA的目标函数是:
:<math>
\begin{array}{rcl}
    {\rm minimize} & \lVert Wx \rVert_1 \\
    {\rm s.t.}    & WW^T = I \\
\end{array}
</math>
'''一审''':
标准正交ICA的目标函数是:
:<math>
\begin{array}{rcl}
    {\rm minimize} & \lVert Wx \rVert_1 \\
    {\rm s.t.}    & WW^T = I \\
\end{array}
</math>
'''原文''':

Firstly, since we are learning an orthonormal basis, the number of basis vectors we learn must be less than the dimension of the input. In particular, this means that we cannot learn over-complete bases as we usually do in [[Sparse Coding: Autoencoder Interpretation | sparse coding]].

Secondly, the data must be [[Whitening | ZCA whitened]] with no regularization (that is, with <math>\epsilon</math> set to 0). ([[TODO]] Why must this be so?)

Hence, before we even begin to optimize for the orthonormal ICA objective, we must ensure that our data has been '''whitened''', and that we are learning an '''under-complete''' basis.
'''译文''':
首先,因为我们正在学习一个标准正交基,所以我们学习的基向量的个数必须小于输入的维度。在实践中,这意味着不能象我们通常在稀疏编码中一样来学习过完备基。
第二,数据必须是无正则项的ZCA白化(即,ε设置为0)。(TODO为什么必须这样做?)
'''一审''':
第一,因为要学习到一组标准正交基,所以基向量的个数必须小于输入数据的维度。在实践中,这意味着不能像通常在稀疏编码中所做的那样来学习过完备基(over-complete bases)。
第二,数据必须经过无正则项ZCA白化(也即ε设为0)。(TODO为什么必须这样做?)
'''原文''':
Following that, to optimize for the objective, we can use gradient descent, interspersing gradient descent steps with projection steps to enforce the orthonormality constraint. Hence, the procedure will be as follows:
Repeat until done:
<ol>
<li><math>W \leftarrow W - \alpha \nabla_W \lVert Wx \rVert_1</math>
<li><math>W \leftarrow \operatorname{proj}_U W</math> where <math>U</math> is the space of matrices satisfying <math>WW^T = I</math>
</ol>

In practice, the learning rate <math>\alpha</math> is varied using a line-search algorithm to speed up the descent, and the projection step is achieved by setting <math>W \leftarrow (WW^T)^{-\frac{1}{2}} W</math>, which can actually be seen as ZCA whitening ([[TODO]] explain how it is like ZCA whitening).
'''译文''':
然后,为了优化目标,我们可以使用梯度下降法,在梯度下降的每一步中增加投影步骤,以满足标准正交约束。因此,这个过程如下:
重复直到完成:
'''一审''':
然后,为了优化目标,我们可以使用梯度下降法,在梯度下降的每一步中增加投影步骤,以满足标准正交约束。过程如下:
重复直到完成:
== Topographic ICA 拓扑ICA ==

'''原文''':
Just like [[Sparse Coding: Autoencoder Interpretation | sparse coding]], independent component analysis can be modified to give a topographic variant by adding a topographic cost term.
'''译文''':
就像稀疏编码一样,通过添加一个拓扑代价项,独立成分分析也可以修改为具有拓扑结构的变体。
'''一审''':
和稀疏编码类似,通过添加一个拓扑代价项,独立成分分析可以修改为具有拓扑结构的变体(topographic variant)。
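
The text does not spell out the topographic cost term here, so the following is only a hedged sketch of one common form (the grouping matrix <math>V</math>, which pools each feature with its neighbours, and the smoothing constant are assumptions of ours):

<pre>
# One common form of a topographic penalty: square the features W x, pool
# them over neighbouring groups (the rows of V), and sum the square roots.
import numpy as np

def topographic_cost(W, X, V, eps=1e-8):
    s = W @ X                           # features, one column per example
    pooled = V @ (s ** 2)               # group sums of squared features
    return np.sqrt(pooled + eps).sum()  # group-sparse (topographic) cost
</pre>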
