Visualizing a Trained Autoencoder

:【Original】:
:Having trained a (sparse) autoencoder, we would now like to visualize the function learned by the algorithm, to try to understand what it has learned. Consider the case of training an autoencoder on <math>\textstyle 10 \times 10</math> images, so that <math>\textstyle n = 100</math>. Each hidden unit <math>\textstyle i</math> computes a function of the input:

:【Initial translation】:
:Having obtained the trained (sparse) autoencoder, we can visualize the function learned by the algorithm in order to understand the result of learning. We take training an autoencoder on 10×10 images as an example; in this case n = 100. For each hidden unit i, the input values are substituted into the following equation:

:【First revision】:
:After obtaining an already trained (sparse) autoencoder, we would like to visualize the function obtained by the learning algorithm, so as to understand the result of learning. For the visualization, we illustrate with an autoencoder obtained by training on 10×10 images; in this example n = 100. In this autoencoder, each hidden unit i computes the following function of the input:

:【Second revision】:
:Once we have the trained (sparse) autoencoder, we would like to visualize the function learned by the algorithm in order to understand what it has learned. Consider the example of training an autoencoder on 10×10 images, with n = 100. In this autoencoder, each hidden unit i computes the following function of the input:

:【Third revision】:
:Having trained the (sparse) autoencoder, we also want to visualize the function it has learned, to figure out what exactly it has learned. We take training an autoencoder on 10×10 images (i.e. n = 100) as an example. In this autoencoder, each hidden unit i computes the following function of the input:

:<math>\begin{align}
a^{(2)}_i = f\left(\sum_{j=1}^{100} W^{(1)}_{ij} x_j + b^{(1)}_i \right).
\end{align}</math>

<!-- This is the activation function <math>\textstyle g(\cdot)</math> applied to an affine function of the input. -->
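
As a concrete illustration (not part of the original tutorial), this computation can be sketched in a few lines of NumPy; the names W1 and b1 for <math>\textstyle W^{(1)}</math> and <math>\textstyle b^{(1)}</math>, and the choice of the sigmoid as the activation f, are assumptions made only for this example:

<pre>
import numpy as np

def sigmoid(z):
    # A common choice for the activation f in a sparse autoencoder (an assumption here).
    return 1.0 / (1.0 + np.exp(-z))

def hidden_activations(W1, b1, x):
    # a2[i] = f( sum_j W1[i, j] * x[j] + b1[i] ), i.e. a^{(2)}_i above.
    # W1: (hidden_size, 100) weights, b1: (hidden_size,) biases,
    # x:  (100,) flattened 10x10 input image.
    return sigmoid(W1 @ x + b1)

# Toy usage with random stand-in parameters (not a trained autoencoder):
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(25, 100))   # 25 hidden units, n = 100 inputs
b1 = np.zeros(25)
x = rng.random(100)                           # a fake 10x10 image, flattened
a2 = hidden_activations(W1, b1, x)            # activation of each hidden unit
</pre>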

:【Original】:
:We will visualize the function computed by hidden unit <math>\textstyle i</math>---which depends on the parameters <math>\textstyle W^{(1)}_{ij}</math> (ignoring the bias term for now)---using a 2D image. In particular, we think of <math>\textstyle a^{(2)}_i</math> as some non-linear feature of the input <math>\textstyle x</math>. We ask: What input image <math>\textstyle x</math> would cause <math>\textstyle a^{(2)}_i</math> to be maximally activated? (Less formally, what is the feature that hidden unit <math>\textstyle i</math> is looking for?) For this question to have a non-trivial answer, we must impose some constraints on <math>\textstyle x</math>. If we suppose that the input is norm constrained by <math>\textstyle ||x||^2 = \sum_{i=1}^{100} x_i^2 \leq 1</math>, then one can show (try doing this yourself) that the input which maximally activates hidden unit <math>\textstyle i</math> is given by setting pixel <math>\textstyle x_j</math> (for all 100 pixels, <math>\textstyle j=1,\ldots,100</math>) to
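:<math>\begin{align}
x_j = \frac{W^{(1)}_{ij}}{\sqrt{\sum_{j=1}^{100} \left( W^{(1)}_{ij} \right)^2}}
\end{align}</math>

As a rough sketch of how this visualization could be produced (this code is not part of the original tutorial; the array name W1 for the weight matrix <math>\textstyle W^{(1)}</math>, with one hidden unit per row, is an assumption for illustration):

<pre>
import numpy as np

def max_activation_images(W1):
    # Row i of W1 holds W^{(1)}_{ij}; dividing it by its Euclidean norm gives
    # the norm-constrained input that maximally activates hidden unit i.
    norms = np.sqrt((W1 ** 2).sum(axis=1, keepdims=True))
    X = W1 / norms
    # Reshape each row back into a 10x10 image for display.
    return X.reshape(-1, 10, 10)

# e.g. images = max_activation_images(W1); images[i] shows the feature unit i responds to.
</pre>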

:【Initial translation】:
