Sparse Autoencoder Notation Summary

Here is a summary of the symbols used in our derivation of the sparse autoencoder:
{| class="wikitable"
|-
! Symbol !! Description
|-
| <math>\textstyle x</math>
| Input features for a training example, <math>\textstyle x \in \Re^{n}</math>.
|-
| <math>\textstyle y</math>
| Output/target values.  Here, <math>\textstyle y</math> can be vector valued.  In the case of an autoencoder, <math>\textstyle y=x</math>.
|-
| <math>\textstyle (x^{(i)}, y^{(i)})</math>
| The <math>\textstyle i</math>-th training example.
|-
| <math>\textstyle h_{W,b}(x)</math>
| Output of our hypothesis on input <math>\textstyle x</math>, using parameters <math>\textstyle W,b</math>.  This should be a vector of the same dimension as the target value <math>\textstyle y</math>.
|-
| <math>\textstyle W^{(l)}_{ij}</math>
| The parameter associated with the connection between unit <math>\textstyle j</math> in layer <math>\textstyle l</math>, and unit <math>\textstyle i</math> in layer <math>\textstyle l+1</math>.
|-
| <math>\textstyle b^{(l)}_{i}</math>
| The bias term associated with unit <math>\textstyle i</math> in layer <math>\textstyle l+1</math>.  Can also be thought of as the parameter associated with the connection between the bias unit in layer <math>\textstyle l</math> and unit <math>\textstyle i</math> in layer <math>\textstyle l+1</math>.
|-
| <math>\textstyle \theta</math>
| Our parameter vector.  It is useful to think of this as the result of taking the parameters <math>\textstyle W,b</math> and "unrolling" them into a long column vector.
|-
| <math>\textstyle a^{(l)}_i</math>
| Activation (output) of unit <math>\textstyle i</math> in layer <math>\textstyle l</math> of the network.  In addition, since layer <math>\textstyle L_1</math> is the input layer, we also have <math>\textstyle a^{(1)}_i = x_i</math>.
|-
| <math>\textstyle f(\cdot)</math>
| The activation function.  Throughout these notes, we used <math>\textstyle f(z) = \tanh(z)</math>.
|-
| <math>\textstyle z^{(l)}_i</math>
| Total weighted sum of inputs to unit <math>\textstyle i</math> in layer <math>\textstyle l</math>.  Thus, <math>\textstyle a^{(l)}_i = f(z^{(l)}_i)</math>.
|-
| <math>\textstyle \alpha</math>
| Learning rate parameter.
|-
| <math>\textstyle s_l</math>
| Number of units in layer <math>\textstyle l</math> (not counting the bias unit).
|-
| <math>\textstyle n_l</math>
| Number of layers in the network.  Layer <math>\textstyle L_1</math> is usually the input layer, and layer <math>\textstyle L_{n_l}</math> the output layer.
|-
| <math>\textstyle \lambda</math>
| Weight decay parameter.
|-
| <math>\textstyle \hat{x}</math>
| For an autoencoder, its output; i.e., its reconstruction of the input <math>\textstyle x</math>.  Same meaning as <math>\textstyle h_{W,b}(x)</math>.
|-
| <math>\textstyle \rho</math>
| Sparsity parameter, which specifies our desired level of sparsity.
|-
| <math>\textstyle \hat\rho_i</math>
| The average activation of hidden unit <math>\textstyle i</math> (in the sparse autoencoder).
|-
| <math>\textstyle \beta</math>
| Weight of the sparsity penalty term (in the sparse autoencoder objective).
|}
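
To make the notation concrete, here is a minimal NumPy sketch of the forward pass for a three-layer autoencoder. The layer sizes (<math>\textstyle n = 64</math> inputs, <math>\textstyle s_2 = 25</math> hidden units), the random initialization, and all variable names are illustrative assumptions, not something fixed by these notes:

<pre>
import numpy as np

# Activation function; these notes use f(z) = tanh(z).
def f(z):
    return np.tanh(z)

# Illustrative sizes: s1 = n input units, s2 hidden units, s3 = n output units.
n, s2 = 64, 25
rng = np.random.default_rng(0)

# W(l)_{ij} connects unit j in layer l to unit i in layer l+1;
# b(l)_i is the bias term feeding unit i in layer l+1.
W1 = rng.normal(scale=0.01, size=(s2, n))   # layer 1 -> layer 2
b1 = np.zeros(s2)
W2 = rng.normal(scale=0.01, size=(n, s2))   # layer 2 -> layer 3
b2 = np.zeros(n)

x = rng.uniform(-1, 1, size=n)  # one training example, x in R^n

a1 = x               # layer L1 is the input layer, so a(1) = x
z2 = W1 @ a1 + b1    # z(2)_i: total weighted sum of inputs to unit i in layer 2
a2 = f(z2)           # a(2)_i = f(z(2)_i)
z3 = W2 @ a2 + b2
x_hat = f(z3)        # h_{W,b}(x): the reconstruction of x (here y = x)

# theta: the parameters W, b "unrolled" into one long column vector.
theta = np.concatenate([W1.ravel(), b1, W2.ravel(), b2])
</pre>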
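
Continuing the sketch above, the sparse autoencoder objective combines the average reconstruction error with a weight decay term (weighted by <math>\textstyle \lambda</math>) and a sparsity penalty (weighted by <math>\textstyle \beta</math>) that compares <math>\textstyle \rho</math> against each hidden unit's average activation <math>\textstyle \hat\rho_i</math> via a KL divergence. One caveat: that KL form assumes hidden activations in <math>\textstyle (0,1)</math>, so this sketch swaps the table's <math>\textstyle \tanh</math> for a sigmoid hidden layer; the hyperparameter values are illustrative assumptions:

<pre>
def sigmoid(z):
    # The KL sparsity penalty needs activations in (0, 1), so this
    # sketch uses a sigmoid hidden layer instead of tanh.
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(X, W1, b1, W2, b2, lam=1e-4, rho=0.05, beta=3.0):
    """X holds m training examples as rows; lam, rho, beta are the
    hyperparameters lambda, rho, beta from the table above."""
    m = X.shape[0]
    A2 = sigmoid(X @ W1.T + b1)      # hidden activations, shape (m, s2)
    X_hat = sigmoid(A2 @ W2.T + b2)  # reconstructions h_{W,b}(x)

    # Average squared reconstruction error (for an autoencoder, y = x).
    J = (0.5 / m) * np.sum((X_hat - X) ** 2)

    # Weight decay term, weighted by lambda.
    J += 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

    # rho_hat_i: average activation of hidden unit i over the m examples.
    rho_hat = A2.mean(axis=0)
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    J += beta * np.sum(kl)           # sparsity penalty, weighted by beta
    return J

# Example call on a random batch, reusing the parameters from the sketch above.
X = rng.uniform(0, 1, size=(100, n))
print(sparse_autoencoder_cost(X, W1, b1, W2, b2))
</pre>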
{{Sparse_Autoencoder}}
