Data Preprocessing

For ICA-based models with orthogonalization, it is ''very'' important for the data to be as close to white (identity covariance) as possible. This is a side-effect of using orthogonalization to decorrelate the features learned (more details in [[Independent Component Analysis | ICA]]). Hence, in this case, you will want to use an <tt>epsilon</tt> that is as small as possible (e.g., <math>\epsilon = 10^{-6}</math>).
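
To make the role of <tt>epsilon</tt> concrete, here is a minimal NumPy sketch of ZCA whitening (the function name <tt>zca_whiten</tt> and the random data are illustrative, not from the original tutorial). With a very small <tt>epsilon</tt>, the whitened covariance stays close to the identity, which is what ICA-based models with orthogonalization require.

<pre>
# Minimal sketch of ZCA whitening (illustrative, not the tutorial's own code).
import numpy as np

def zca_whiten(X, epsilon=1e-6):
    """ZCA-whiten X (n_features x n_examples), assumed zero-mean per feature."""
    sigma = X @ X.T / X.shape[1]                 # covariance matrix
    U, S, _ = np.linalg.svd(sigma)               # eigenvectors U, eigenvalues S
    # Rescale each principal direction by 1/sqrt(eigenvalue + epsilon),
    # then rotate back into the original coordinates.
    return U @ np.diag(1.0 / np.sqrt(S + epsilon)) @ U.T @ X

X = np.random.randn(64, 10000)
X -= X.mean(axis=1, keepdims=True)               # zero-mean each feature
Xw = zca_whiten(X, epsilon=1e-6)                 # tiny epsilon: covariance ~ identity
print(np.allclose(Xw @ Xw.T / Xw.shape[1], np.eye(64), atol=1e-3))
</pre>
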
{{Quote|
Tip: In PCA whitening, one also has the option of performing dimension reduction while whitening the data. This is usually an excellent idea since it can greatly speed up the algorithms (less computation and fewer parameters). A simple rule of thumb for choosing how many principal components to retain is to keep enough components to retain 99% of the variance (more details at [[PCA#Number_of_components_to_retain | PCA]]).
}}
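
The 99% rule of thumb is easy to compute from the eigenvalues of the covariance matrix. Below is a small sketch (the helper name <tt>components_to_retain</tt> and the random data are assumptions for illustration): keep the smallest <math>k</math> whose top-<math>k</math> eigenvalues cover 99% of the total variance.

<pre>
# Sketch of the 99%-variance rule of thumb (illustrative helper).
import numpy as np

def components_to_retain(X, target=0.99):
    """X: n_features x n_examples, assumed zero-mean per feature."""
    sigma = X @ X.T / X.shape[1]
    S = np.linalg.svd(sigma, compute_uv=False)   # eigenvalues, descending
    frac = np.cumsum(S) / np.sum(S)              # cumulative variance ratio
    return int(np.searchsorted(frac, target) + 1)

X = np.random.randn(100, 5000)
X -= X.mean(axis=1, keepdims=True)
print(components_to_retain(X))                   # k that retains 99% of the variance
</pre>
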
{{quote|
Note: When working in a classification framework, one should compute the PCA/ZCA whitening matrices based only on the training set. The following parameters should be saved for use with the test set: (a) the average vector that was used to zero-mean the data, and (b) the whitening matrices. The test set should undergo the same preprocessing steps using these saved values.  }}
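
A sketch of that note in NumPy (names are illustrative): fit the mean vector and the whitening matrix on the training set only, then reuse both, unchanged, on the test set.

<pre>
# Fit preprocessing on the training set; apply the saved values to the test set.
import numpy as np

def fit_whitening(X_train, epsilon=0.1):
    mu = X_train.mean(axis=1, keepdims=True)       # (a) average vector
    Xc = X_train - mu
    sigma = Xc @ Xc.T / Xc.shape[1]
    U, S, _ = np.linalg.svd(sigma)
    W = np.diag(1.0 / np.sqrt(S + epsilon)) @ U.T  # (b) PCA whitening matrix
    return mu, W

def apply_whitening(X, mu, W):
    return W @ (X - mu)                            # same steps, saved values

X_train = np.random.randn(64, 8000)
X_test = np.random.randn(64, 2000)
mu, W = fit_whitening(X_train)
X_train_w = apply_whitening(X_train, mu, W)
X_test_w = apply_whitening(X_test, mu, W)          # no refitting on test data
</pre>
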
== Large Images ==

For large images, PCA/ZCA-based whitening methods are impractical as the covariance matrix is too large. For these cases, we defer to 1/f-whitening methods. (more details to come)
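
Since the tutorial defers the details, the following is only a rough sketch of the usual 1/f-whitening recipe for natural images (in the style of Olshausen and Field): because natural image spectra fall off roughly as 1/f, one flattens the spectrum by scaling each frequency component by <math>|f|</math>, combined with a low-pass cutoff. The cutoff fraction and shapes below are illustrative assumptions.

<pre>
# Rough sketch of 1/f whitening in the frequency domain (details assumed).
import numpy as np

def one_over_f_whiten(img, f0_frac=0.4):
    n = img.shape[0]                              # assume a square image
    fx = np.fft.fftfreq(n)
    f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
    f0 = f0_frac * np.abs(fx).max()               # low-pass cutoff frequency
    filt = f * np.exp(-((f / f0) ** 4))           # |f| ramp times low-pass
    return np.real(np.fft.ifft2(np.fft.fft2(img) * filt))

img = np.random.rand(512, 512)                    # stand-in for a large image
img_w = one_over_f_whiten(img - img.mean())
</pre>
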
== Standard Pipelines ==

In this section, we describe several "standard pipelines" that have worked well for some datasets:

=== Natural Grey-scale Images ===

Since grey-scale images have the stationarity property, we usually first remove the mean-component from each data example separately (remove DC). After this step, PCA/ZCA whitening is often employed with a value of <tt>epsilon</tt> set large enough to low-pass filter the data.
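
A sketch of this pipeline, assuming patches are stored as columns and taking <tt>epsilon = 0.1</tt> as an illustrative "large" value for the low-pass effect:

<pre>
# Grey-scale pipeline sketch: remove DC per example, then ZCA whiten.
import numpy as np

patches = np.random.rand(196, 10000)              # stand-in: 14x14 patches as columns
patches -= patches.mean(axis=0, keepdims=True)    # remove the DC component per example
sigma = patches @ patches.T / patches.shape[1]    # covariance (data ~ zero-mean)
U, S, _ = np.linalg.svd(sigma)
epsilon = 0.1                                     # large enough to low-pass filter
patches_zca = U @ np.diag(1.0 / np.sqrt(S + epsilon)) @ U.T @ patches
</pre>
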
=== Color Images ===

For color images, the stationarity property does not hold across color channels. Hence, we usually start by rescaling the data (making sure it is in <math>[0, 1]</math>) and then applying PCA/ZCA with a sufficiently large <tt>epsilon</tt>. Note that it is important to perform feature mean-normalization before computing the PCA transformation.

''Translator's note: "feature mean-normalization" is not used elsewhere in this document; it is read here as zeroing the mean of each feature, so that after the pixel values are rescaled to <math>[0, 1]</math> the data are preprocessed in the same way as grey-scale images.''
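
A sketch for color images under the assumptions above: rescale pixel values to <math>[0, 1]</math>, zero the mean of each feature (the reading of "feature mean-normalization" adopted in the note), then apply ZCA. The patch shape and <tt>epsilon</tt> are illustrative.

<pre>
# Color pipeline sketch: rescale to [0, 1], zero each feature's mean, ZCA whiten.
import numpy as np

patches = np.random.randint(0, 256, (192, 10000)).astype(float)  # 8x8x3 patches
patches /= 255.0                                  # rescale into [0, 1]
patches -= patches.mean(axis=1, keepdims=True)    # zero-mean each feature
sigma = patches @ patches.T / patches.shape[1]
U, S, _ = np.linalg.svd(sigma)
epsilon = 0.1                                     # sufficiently large epsilon
patches_zca = U @ np.diag(1.0 / np.sqrt(S + epsilon)) @ U.T @ patches
</pre>
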
=== Audio (MFCC/Spectrograms) ===

For audio data (MFCC and spectrograms), each dimension usually has a different scale (variance); the first component of MFCCs, for example, is the DC component and usually has a larger magnitude than the other components. This is especially so when one includes the temporal derivatives (a common practice in audio processing). As a result, the preprocessing usually starts with simple data standardization (zero mean and unit variance per data dimension), followed by PCA/ZCA whitening (with an appropriate <tt>epsilon</tt>).
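
A sketch of this pipeline: standardize each dimension first (so the large DC component of the MFCCs no longer dominates), then PCA/ZCA whiten. The 39-dimensional shape (13 MFCCs plus temporal derivatives), the synthetic scales, and <tt>epsilon</tt> are illustrative assumptions.

<pre>
# Audio pipeline sketch: per-dimension standardization, then ZCA whitening.
import numpy as np

mfcc = np.random.randn(39, 20000) * np.linspace(10, 0.1, 39)[:, None]
mu = mfcc.mean(axis=1, keepdims=True)
sd = mfcc.std(axis=1, keepdims=True)
mfcc_std = (mfcc - mu) / sd                       # zero mean, unit variance per dim
sigma = mfcc_std @ mfcc_std.T / mfcc_std.shape[1]
U, S, _ = np.linalg.svd(sigma)
epsilon = 0.01                                    # an "appropriate" epsilon
mfcc_zca = U @ np.diag(1.0 / np.sqrt(S + epsilon)) @ U.T @ mfcc_std
</pre>
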
=== MNIST Handwritten Digits ===

The MNIST dataset has pixel values in the range <math>[0, 255]</math>. We thus start with simple rescaling to shift the data into the range <math>[0, 1]</math>. In practice, removing the mean-value per example can also help feature learning. ''Note: While one could also elect to use PCA/ZCA whitening on MNIST if desired, this is not often done in practice.''
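
A sketch of this recipe (the random array is only a stand-in for the real images): rescale to <math>[0, 1]</math>, then remove the per-example mean.

<pre>
# MNIST recipe sketch: rescale to [0, 1], then remove the mean per example.
import numpy as np

images = np.random.randint(0, 256, (784, 10000)).astype(float)  # 28x28 digits as columns
images /= 255.0                                   # rescale into [0, 1]
images -= images.mean(axis=0, keepdims=True)      # remove mean-value per example
</pre>
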
