Talk:UFLDL Tutorial

The first backprop assignment should be one doing classification with MNIST -- using squared error with one-hot label vectors.

As an extension to that assignment, they should then modify the loss/gradient to do cross-entropy / likelihood.
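
For concreteness, here is a minimal numpy sketch of the two variants. Everything in it (a single softmax/sigmoid output layer, the array shapes, the variable names) is an illustrative assumption rather than part of the assignment spec; it only shows how the loss and the gradient with respect to the pre-activation change between the two cost functions.

<pre>
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 784))              # 5 fake "MNIST" images with pixels in [0, 1]
labels = rng.integers(0, 10, size=5)
Y = np.eye(10)[labels]                # one-hot label vectors

W = rng.normal(scale=0.01, size=(784, 10))
b = np.zeros(10)
Z = X @ W + b                         # pre-activations of the output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Variant 1: squared error against the one-hot targets (sigmoid outputs).
A = sigmoid(Z)
sq_loss = 0.5 * np.sum((A - Y) ** 2) / len(X)
delta_sq = (A - Y) * A * (1 - A)      # dJ/dZ for squared error + sigmoid

# Variant 2: cross-entropy / likelihood (softmax outputs, same targets).
expZ = np.exp(Z - Z.max(axis=1, keepdims=True))
P = expZ / expZ.sum(axis=1, keepdims=True)
ce_loss = -np.sum(Y * np.log(P)) / len(X)
delta_ce = P - Y                      # dJ/dZ simplifies for softmax + cross-entropy

# The weight gradient keeps the same form; only the delta term changes.
grad_W_sq = X.T @ delta_sq / len(X)
grad_W_ce = X.T @ delta_ce / len(X)
</pre>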

== Linear Decoders (first translation) ==

Linear Decoders

Sparse Autoencoder Recap

In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer and an output layer. In our previous description of autoencoders (and of neural networks), every neuron in the neural network used the same activation function. In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This will result in a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.

Recall that each neuron (in the output layer) computed the following:

z(3) = W(2) a(2) + b(2),    a(3) = f(z(3))

where a(3) is the output. In the autoencoder, a(3) is our approximate reconstruction of the input x = a(1).

In the sparse autoencoder there are three layers: an input layer, a hidden layer and an output layer. In the earlier description of autoencoders (and of neural networks), every neuron in the network used the same activation function. In these notes we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This gives a model that is sometimes simpler to apply and more robust to variations in the parameters.

Each neuron (in the output layer) computes the following:

z(3) = W(2) a(2) + b(2),    a(3) = f(z(3))

Here a(3) is the output. In the autoencoder, a(3) is the approximate reconstruction of the input x = a(1).

Because we used a sigmoid activation function for f(z(3)), we needed to constrain or scale the inputs to be in the range [0,1], since the sigmoid function outputs numbers in the range [0,1]. While some datasets like MNIST fit well with this scaling of the output, this can sometimes be awkward to satisfy. For example, if one uses PCA whitening, the input is no longer constrained to [0,1] and it's not clear what the best way is to scale the data to ensure it fits into the constrained range.

Since we use a sigmoid activation function for f(z(3)), and the sigmoid outputs values in the range [0,1], we need to constrain or scale the inputs to lie in [0,1]. Some datasets, such as the MNIST handwritten digits, fit this scaling very well, but the constraint can sometimes be hard to satisfy. For example, if PCA whitening is used, the input is no longer restricted to [0,1]; the data could be rescaled to a specific range, but it is not clear that this is the best way to do it.
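
To see the problem concretely, here is a small numpy sketch (not from the tutorial; the data, sizes and epsilon value are made up for illustration) showing that PCA-whitened inputs land well outside [0,1]:

<pre>
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 64))                 # 100 fake "images", pixels already in [0, 1]

# PCA whitening: rotate onto the principal directions and rescale to unit variance.
Xc = X - X.mean(axis=0)
U, S, _ = np.linalg.svd(np.cov(Xc, rowvar=False))
epsilon = 1e-5                            # small regularizer
Xwhite = (Xc @ U) / np.sqrt(S + epsilon)

print(Xwhite.min(), Xwhite.max())         # well outside [0, 1]
</pre>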

One easy fix for this problem is to set a(3) = z(3). Formally, this is achieved by having the output nodes use an activation function that's the identity function f(z) = z, so that a(3) = f(z(3)) = z(3). This particular activation function is called the linear activation function (though perhaps "identity activation function" would have been a better name). Note however that in the hidden layer of the network, we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by (say) a(2) = σ(W(1) x + b(1)), where σ is the sigmoid function, x is the input, and W(1) and b(1) are the weight and bias terms for the hidden units. It is only in the output layer that we use the linear activation function.

A simple fix is to set a(3) = z(3). Formally, this is done by having the output nodes use an activation function satisfying f(z) = z, so that a(3) = f(z(3)) = z(3). This particular activation function is called the linear activation function (though "identity activation function" might be a better name). Note that in the hidden layer of the network we still use a sigmoid (or tanh) activation function, so the hidden unit activations are given by, say, a(2) = σ(W(1) x + b(1)), where σ is the sigmoid function, x is the input, and W(1) and b(1) are the weight and bias terms for the hidden units. Only in the output layer do we use the linear activation function.
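
As a quick illustration of this configuration, here is a minimal numpy forward pass with a sigmoid hidden layer and an identity output layer (the layer sizes, weights and variable names are illustrative assumptions, not from the text):

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 25                   # illustrative sizes
W1 = rng.normal(scale=0.01, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.01, size=(n_in, n_hidden))
b2 = np.zeros(n_in)

x = rng.normal(size=n_in)                 # e.g. a whitened input, not restricted to [0, 1]

a2 = sigmoid(W1 @ x + b1)                 # hidden layer: sigmoid activation
z3 = W2 @ a2 + b2
a3 = z3                                   # output layer: linear (identity) activation, f(z) = z
</pre>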

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a linear decoder. In this model, we have a(3) = z(3) = W(2) a(2) + b(2). Because the output a(3) is now a linear function of the hidden unit activations, by varying W(2), each output unit a(3) can be made to produce values greater than 1 or less than 0 as well. This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.

Since we have changed the activation function of the output units, the gradients of the output units also change. Recall that for each output unit, we had set the error terms as follows:

δ(3)_i = ∂/∂z(3)_i ( (1/2) ||y − a(3)||^2 ) = −(y_i − a(3)_i) · f'(z(3)_i)

An autoencoder in this configuration, with a sigmoid (or tanh) hidden layer and a linear output layer, is called a linear decoder. In this model we have a(3) = z(3) = W(2) a(2) + b(2). Because the output a(3) is now a linear function of the hidden unit activations, each output unit can be made to produce values greater than 1 or less than 0 by varying W(2). This lets us train the sparse autoencoder on real-valued inputs without pre-scaling every example to a specific range.

Having changed the activation function of the output units, the gradients of the output units change as well. Recall that for each output unit we set the error terms as follows:

δ(3)_i = ∂/∂z(3)_i ( (1/2) ||y − a(3)||^2 ) = −(y_i − a(3)_i) · f'(z(3)_i)

where y = x is the desired output, a(3) is the output of our autoencoder, and f is our activation function. Because in the output layer we now have f(z) = z, that implies f'(z) = 1 and thus the above now simplifies to:

δ(3)_i = −(y_i − a(3)_i)

Of course, when using backpropagation to compute the error terms for the hidden layer:

δ(2) = ( (W(2))^T δ(3) ) • f'(z(2))

(where • denotes the element-wise product). Because the hidden layer is using a sigmoid (or tanh) activation f, in the equation above f'(z(2)) should still be the derivative of the sigmoid (or tanh) function.

Here y = x is the desired output, a(3) is the output of the autoencoder, and f is the activation function. Because in the output layer f(z) = z, we have f'(z) = 1, so the derivation above simplifies to:

δ(3)_i = −(y_i − a(3)_i)

Of course, when using the backpropagation algorithm to compute the error terms for the hidden layer:

δ(2) = ( (W(2))^T δ(3) ) • f'(z(2))

Because the hidden layer uses a sigmoid (or tanh) activation function f, f'(z(2)) in the expression above is still the derivative of the sigmoid (or tanh) function.
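
Putting the pieces above together, here is a short numpy sketch of the forward pass and the two error terms for a linear decoder. It assumes a plain squared-error reconstruction cost and leaves out the sparsity penalty and weight decay; sizes and names are illustrative.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 25
W1 = rng.normal(scale=0.01, size=(n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.01, size=(n_in, n_hidden)); b2 = np.zeros(n_in)

x = rng.normal(size=n_in)                  # real-valued input; the target is y = x
y = x

# Forward pass: sigmoid hidden layer, linear output layer.
z2 = W1 @ x + b1
a2 = sigmoid(z2)
a3 = W2 @ a2 + b2                          # f(z) = z, so a3 = z3

# Error terms for the squared-error reconstruction cost.
delta3 = -(y - a3)                         # output layer: f'(z3) = 1, no extra factor
delta2 = (W2.T @ delta3) * a2 * (1 - a2)   # hidden layer: sigmoid derivative a2*(1-a2)

# Gradients of the reconstruction term with respect to the parameters.
grad_W2 = np.outer(delta3, a2)
grad_b2 = delta3
grad_W1 = np.outer(delta2, x)
grad_b1 = delta2
</pre>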

== Linear Decoder (second review) ==

Linear Decoders

Sparse Autoencoder Recap

(Note: in my personal opinion, for material this mathematical the translation reads a little colloquial. Based on my own understanding I give partial re-translations below; the style differs from yours. Please decide which to keep.)

In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer and an output layer. In our previous description of autoencoders (and of neural networks), every neuron in the neural network used the same activation function. In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This will result in a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.

Recall that each neuron (in the output layer) computed the following:

z(3) = W(2) a(2) + b(2),    a(3) = f(z(3))

where a(3) is the output. In the autoencoder, a(3) is our approximate reconstruction of the input x = a(1).

In the sparse autoencoder there are three layers: an input layer, a hidden layer and an output layer. In the earlier description of autoencoders (and of neural networks), every neuron in the network used the same activation function. In these notes we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This gives a model that is sometimes simpler to apply and more robust to variations in the parameters.

Each neuron (in the output layer) computes the following:

z(3) = W(2) a(2) + b(2),    a(3) = f(z(3))

Here a(3) is the output. In the autoencoder, a(3) is the approximate reconstruction of the input x = a(1).

A sparse autoencoder consists of 3 layers of neurons: an input layer, a hidden layer and an output layer.

In the earlier description of autoencoders and of neural networks, the neurons in the network all used the same activation function.

In these notes we modify the autoencoder so that some of the neurons use a different activation function. The resulting model is easier to apply, and it is also more robust to variations in the parameters.

Recall that the output-layer neurons compute the following:

z(3) = W(2) a(2) + b(2),    a(3) = f(z(3))

Because we used a sigmoid activation function for f(z(3)), we needed to constrain or scale the inputs to be in the range [0,1], since the sigmoid function outputs numbers in the range [0,1]. While some datasets like MNIST fit well with this scaling of the output, this can sometimes be awkward to satisfy. For example, if one uses PCA whitening, the input is no longer constrained to [0,1] and it's not clear what the best way is to scale the data to ensure it fits into the constrained range.

Since we use a sigmoid activation function for f(z(3)), and the sigmoid outputs values in the range [0,1], we need to constrain or scale the inputs to lie in [0,1]. Some datasets, such as the MNIST handwritten digits, fit this scaling very well, but the constraint can sometimes be hard to satisfy. For example, if PCA whitening is used, the input is no longer restricted to [0,1]; the data could be rescaled to a specific range, but it is not clear that this is the best way to do it.

The sigmoid activation function outputs values in [0,1], so when f(z(3)) uses it the inputs need to be constrained or scaled to lie in [0,1]. Some datasets, such as MNIST, can conveniently have their outputs scaled into [0,1], but on the input side the requirement is hard to satisfy. For example, PCA-whitened inputs do not fall in [0,1], and it is not clear whether there is a best way to scale such data into a specific range.

(Note on "While some datasets like MNIST fit well with this scaling of the output, this can sometimes be awkward to satisfy": in the translation of this sentence, what "this" refers to is still ambiguous.)

One easy fix for this problem is to set a(3) = z(3). Formally, this is achieved by having the output nodes use an activation function that's the identity function f(z) = z, so that a(3) = f(z(3)) = z(3). This particular activation function is called the linear activation function (though perhaps "identity activation function" would have been a better name). Note however that in the hidden layer of the network, we still use a sigmoid (or tanh) activation function, so that the hidden unit activations are given by (say) a(2) = σ(W(1) x + b(1)), where σ is the sigmoid function, x is the input, and W(1) and b(1) are the weight and bias terms for the hidden units. It is only in the output layer that we use the linear activation function.

A simple fix is to set a(3) = z(3). Formally, this is done by having the output nodes use an activation function satisfying f(z) = z, so that a(3) = f(z(3)) = z(3). This particular activation function is called the linear activation function (though "identity activation function" might be a better name). Note that in the hidden layer of the network we still use a sigmoid (or tanh) activation function, so the hidden unit activations are given by, say, a(2) = σ(W(1) x + b(1)), where σ is the sigmoid function, x is the input, and W(1) and b(1) are the weight and bias terms for the hidden units. Only in the output layer do we use the linear activation function.

Setting a(3) = z(3) solves the above problem in a very simple way. Formally, the output nodes use the identity function f(z) = z as their activation function, which resolves the issue. We call this particular activation function the linear activation function ("identity activation function" might be a better name).

The neurons in the hidden layer of the network, however, still use a sigmoid (or tanh) activation function, so the hidden unit activations are given by a(2) = σ(W(1) x + b(1)).

An autoencoder in this configuration--with a sigmoid (or tanh) hidden layer and a linear output layer--is called a linear decoder. In this model, we have a(3) = z(3) = W(2) a(2) + b(2). Because the output a(3) is now a linear function of the hidden unit activations, by varying W(2), each output unit a(3) can be made to produce values greater than 1 or less than 0 as well. This allows us to train the sparse autoencoder on real-valued inputs without needing to pre-scale every example to a specific range.

Since we have changed the activation function of the output units, the gradients of the output units also change. Recall that for each output unit, we had set the error terms as follows:

δ(3)_i = ∂/∂z(3)_i ( (1/2) ||y − a(3)||^2 ) = −(y_i − a(3)_i) · f'(z(3)_i)

An autoencoder in this configuration, with a sigmoid (or tanh) hidden layer and a linear output layer, is called a linear decoder. In this model we have a(3) = z(3) = W(2) a(2) + b(2). Because the output a(3) is now a linear function of the hidden unit activations, each output unit can be made to produce values greater than 1 or less than 0 by varying W(2). This lets us train the sparse autoencoder on real-valued inputs without pre-scaling every example to a specific range.

Having changed the activation function of the output units, the gradients of the output units change as well. Recall that for each output unit we set the error terms as follows:

δ(3)_i = ∂/∂z(3)_i ( (1/2) ||y − a(3)||^2 ) = −(y_i − a(3)_i) · f'(z(3)_i)

An autoencoder made up of a sigmoid (or tanh) hidden layer and a linear output layer is called a linear decoder.

In this linear decoder model, a(3) = z(3) = W(2) a(2) + b(2).

where y = x is the desired output, a(3) is the output of our autoencoder, and f is our activation function. Because in the output layer we now have f(z) = z, that implies f'(z) = 1 and thus the above now simplifies to:

δ(3)_i = −(y_i − a(3)_i)

Of course, when using backpropagation to compute the error terms for the hidden layer:

δ(2) = ( (W(2))^T δ(3) ) • f'(z(2))

Because the hidden layer is using a sigmoid (or tanh) activation f, in the equation above f'(z(2)) should still be the derivative of the sigmoid (or tanh) function.

Here y = x is the desired output, a(3) is the output of the autoencoder, and f is the activation function. Because in the output layer f(z) = z, we have f'(z) = 1, so the derivation above simplifies to:

δ(3)_i = −(y_i − a(3)_i)

Of course, when using the backpropagation algorithm to compute the error terms for the hidden layer:

δ(2) = ( (W(2))^T δ(3) ) • f'(z(2))

Because the hidden layer uses a sigmoid (or tanh) activation function f, f'(z(2)) in the expression above is still the derivative of the sigmoid (or tanh) function.
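
Since only the output-layer error term changes under the linear activation, a quick finite-difference gradient check is a cheap way to confirm the modified formulas. The following is only a sketch under assumed tiny sizes and a plain squared-error cost (no sparsity penalty or weight decay), with made-up variable names:

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hidden = 8, 5                      # tiny sizes so the check runs instantly
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b2 = np.zeros(n_in)
x = rng.normal(size=n_in)

def cost_and_grad_W2(W2):
    a2 = sigmoid(W1 @ x + b1)
    a3 = W2 @ a2 + b2                      # linear output layer, f(z) = z
    delta3 = -(x - a3)                     # simplified error term, since f'(z) = 1
    return 0.5 * np.sum((x - a3) ** 2), np.outer(delta3, a2)

cost, grad = cost_and_grad_W2(W2)

# Numerically estimate dJ/dW2 and compare with the analytic gradient.
eps = 1e-4
num_grad = np.zeros_like(W2)
for i in range(W2.shape[0]):
    for j in range(W2.shape[1]):
        Wp, Wm = W2.copy(), W2.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num_grad[i, j] = (cost_and_grad_W2(Wp)[0] - cost_and_grad_W2(Wm)[0]) / (2 * eps)

print(np.max(np.abs(num_grad - grad)))     # should be very close to zero
</pre>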
