Data Preprocessing

== Overview ==
Data preprocessing plays a very important role in many deep learning algorithms. In practice, many methods work best after the data has been normalized and whitened. However, the exact parameters for data preprocessing are usually not immediately apparent unless one has much experience working with the algorithms. In this page, we hope to demystify some of the preprocessing methods and also provide tips (and a "standard pipeline") for preprocessing data.

{{quote |
Tip: When approaching a dataset, the first thing to do is to look at the data itself and observe its properties. While the techniques here apply generally, you might want to opt to do certain things differently given your dataset. For example, one standard preprocessing trick is to subtract the mean of each data point from itself (also known as remove DC, local mean subtraction, subtractive normalization). While this makes sense for data such as natural images, it is less obvious for data where stationarity does not hold.
}}

== Data Normalization ==

A standard first step to data preprocessing is data normalization. While there are a few possible approaches, this step is usually clear depending on the data. The common methods for feature normalization are:

* Simple Rescaling
* Per-example mean subtraction (a.k.a. remove DC)
* Feature Standardization (zero-mean and unit variance for each feature across the dataset)

=== Simple Rescaling ===

In simple rescaling, our goal is to rescale the data along each data dimension (possibly independently) so that the final data vectors lie in the range <math>[0, 1]</math> or <math>[-1, 1]</math> (depending on your dataset). This is useful for later processing as many ''default'' parameters (e.g., epsilon in PCA-whitening) treat the data as if it has been scaled to a reasonable range.

'''Example:''' When processing natural images, we often obtain pixel values in the range <math>[0, 255]</math>. It is a common operation to rescale these values to <math>[0, 1]</math> by dividing the data by 255.
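
As a concrete illustration, here is a minimal numpy sketch of this rescaling (the array name, shape, and synthetic data are illustrative, not from the original text):

<pre>
import numpy as np

# Hypothetical batch of 8-bit grey-scale images, one flattened image per row.
images = np.random.randint(0, 256, size=(100, 28 * 28), dtype=np.uint8)

# Rescale from [0, 255] to [0, 1] by dividing by the maximum pixel value.
images_01 = images.astype(np.float64) / 255.0

# If a later stage expects [-1, 1] instead, shift and scale accordingly.
images_pm1 = images_01 * 2.0 - 1.0
</pre>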

=== Per-example mean subtraction ===

If your data is ''stationary'' (i.e., the statistics for each data dimension follow the same distribution), then you might want to consider subtracting the mean-value for each example (computed per-example).

'''Example:''' In images, this normalization has the property of removing the average brightness (intensity) of the data point. In many cases, we are not interested in the illumination conditions of the image, but more so in the content; removing the average pixel value per data point makes sense here. '''Note:''' While this method is generally used for images, one might want to take more care when applying this to color images. In particular, the stationarity property does not generally apply across pixels in different color channels.
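
A minimal numpy sketch of per-example mean subtraction, assuming one example per row (names and shapes are illustrative):

<pre>
import numpy as np

# Hypothetical data matrix: one example (e.g., a flattened image) per row.
X = np.random.rand(100, 28 * 28)

# Subtract each example's own mean (its DC component), computed per row.
X_centered = X - X.mean(axis=1, keepdims=True)
</pre>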

=== Feature Standardization ===

Feature standardization refers to (independently) setting each dimension of the data to have zero-mean and unit-variance. This is the most common method for normalization and is generally used widely (e.g., when working with SVMs, feature standardization is often recommended as a preprocessing step). In practice, one achieves this by first computing the mean of each dimension (across the dataset) and subtracting it from each dimension. Next, each dimension is divided by its standard deviation.

'''Example:''' When working with audio data, it is common to use [http://en.wikipedia.org/wiki/Mel-frequency_cepstrum MFCCs] as the data representation. However, the first component (representing the DC) of the MFCC features often overshadows the other components. Thus, one method to restore balance to the components is to standardize the values in each component independently.
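
A minimal numpy sketch of feature standardization, assuming one example per row (names, shapes, and the guard constant are illustrative):

<pre>
import numpy as np

# Hypothetical matrix of MFCC-like features: one example per row.
X = np.random.rand(1000, 13)

# Mean and standard deviation of each feature, computed across the dataset.
mu = X.mean(axis=0)
sigma = X.std(axis=0)

# Standardize each dimension to zero mean and unit variance.
# (The small constant guards against division by zero on constant features.)
X_standardized = (X - mu) / (sigma + 1e-8)
</pre>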

== PCA/ZCA Whitening ==

After doing the simple normalizations, whitening is often the next preprocessing step employed that helps make our algorithms work better. In practice, many deep learning algorithms rely on whitening to learn good features.

In performing PCA/ZCA whitening, it is pertinent to first zero-mean the features (across the dataset) to ensure that <math> \frac{1}{m} \sum_i x^{(i)} = 0 </math>. Specifically, this should be done before computing the covariance matrix. (The only exception is when per-example mean subtraction has been performed and the data is stationary across dimensions/pixels.)

Next, one needs to select the value of <tt>epsilon</tt> to use when performing [[Whitening | PCA/ZCA whitening]] (recall that this was the regularization term that has the effect of ''low-pass filtering'' the data). It turns out that selecting this value can also play an important role for feature learning; we discuss two cases for selecting <tt>epsilon</tt>:

=== Reconstruction Based Models ===

In models based on reconstruction (including Autoencoders, Sparse Coding, RBMs, k-Means), it is often preferable to set <tt>epsilon</tt> to a value such that low-pass filtering is achieved. (Translator's note: the high-frequency components of data are usually noise, and the point of low-pass filtering is to suppress that noise while preserving the useful information; since PCA-style methods treat the low-variance directions as noise, the choice of <tt>epsilon</tt> below is tied to the eigenvalues.) One way to check this is to set a value for <tt>epsilon</tt>, run ZCA whitening, and thereafter visualize the data before and after whitening. If the value of <tt>epsilon</tt> is set too low, the data will look very noisy; conversely, if <tt>epsilon</tt> is set too high, you will see a "blurred" version of the original data. A good way to get a feel for the magnitude of <tt>epsilon</tt> to try is to plot the eigenvalues on a graph. As visible in the example graph below, you may get a "long tail" corresponding to the high-frequency noise components. You will want to choose <tt>epsilon</tt> such that most of the "long tail" is filtered out, i.e. choose <tt>epsilon</tt> such that it is greater than most of the small eigenvalues corresponding to the noise.

[[File:ZCA_Eigenvalues_Plot.png]]
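
One way to produce such an eigenvalue plot is sketched below in numpy/matplotlib (the data matrix here is synthetic and the names are illustrative; <tt>X</tt> is assumed zero-mean with one example per row):

<pre>
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical zero-mean data matrix: one example per row.
X = np.random.randn(10000, 64)

# Covariance matrix and its eigenvalues, sorted in descending order.
sigma = np.dot(X.T, X) / X.shape[0]
eigenvalues = np.linalg.eigvalsh(sigma)[::-1]

# A log scale makes the "long tail" of small, noise-dominated eigenvalues
# easy to spot; epsilon should exceed most of that tail.
plt.semilogy(eigenvalues)
plt.xlabel('component index')
plt.ylabel('eigenvalue')
plt.show()
</pre>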

In reconstruction based models, the loss function includes a term that penalizes reconstructions that are far from the original inputs. (Translator's note: with an autoencoder, for example, the input must survive encoding and decoding nearly intact.) Then, if <tt>epsilon</tt> is set too ''low'', the data will contain a lot of noise which the model will need to reconstruct well. As a result, it is very important for reconstruction based models to have data that has been low-pass filtered.

{{Quote|
Tip: If your data has been scaled reasonably (e.g., to <math>[0, 1]</math>), start with <math>epsilon = 0.01</math> or <math>epsilon = 0.1</math>.
}}

=== ICA-based Models (with orthogonalization) ===

For ICA-based models with orthogonalization, it is ''very'' important for the data to be as close to white (identity covariance) as possible. This is a side-effect of using orthogonalization to decorrelate the features learned (more details in [[Independent Component Analysis | ICA]]). Hence, in this case, you will want to use an <tt>epsilon</tt> that is as small as possible (e.g., <math>epsilon = 1e-6</math>).

{{Quote|
Tip: In PCA whitening, one also has the option of performing dimension reduction while whitening the data. This is usually an excellent idea since it can greatly speed up the algorithms (less computation and fewer parameters). A simple rule of thumb for choosing how many principal components to retain is to keep enough components to have 99% of the variance retained (more details at [[PCA#Number_of_components_to_retain | PCA]]).
}}
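
This rule of thumb is easy to apply once the eigenvalues are available (a numpy sketch, reusing eigenvalues computed as in the plotting example above; the synthetic values are illustrative):

<pre>
import numpy as np

# Eigenvalues of the covariance matrix, sorted in descending order.
eigenvalues = np.sort(np.random.rand(64))[::-1]

# Smallest k whose leading eigenvalues retain 99% of the total variance.
cumulative = np.cumsum(eigenvalues) / np.sum(eigenvalues)
k = int(np.searchsorted(cumulative, 0.99)) + 1
</pre>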

{{Quote|
Note: When working in a classification framework, one should compute the PCA/ZCA whitening matrices based only on the training set. The following parameters should be saved for use with the test set: (a) the average vector used to zero-mean the data, and (b) the whitening matrix. The test set should undergo the same preprocessing steps using these saved values. }}
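
A minimal numpy sketch of this fit-on-train, apply-to-test discipline (the function names <tt>fit_zca</tt>/<tt>apply_zca</tt> and the data are illustrative; the epsilon value follows the earlier tip):

<pre>
import numpy as np

def fit_zca(X_train, epsilon=0.1):
    """Estimate the mean vector and ZCA whitening matrix from training data only."""
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    sigma = np.dot(Xc.T, Xc) / Xc.shape[0]                    # covariance matrix
    U, S, _ = np.linalg.svd(sigma)                            # eigenvectors / eigenvalues
    W = U.dot(np.diag(1.0 / np.sqrt(S + epsilon))).dot(U.T)   # ZCA whitening matrix
    return mean, W

def apply_zca(X, mean, W):
    """Apply the saved parameters, e.g., to the test set."""
    return (X - mean).dot(W)

# Hypothetical train/test split: parameters are estimated on the training
# set and then reused unchanged on the test set.
X_train, X_test = np.random.rand(1000, 64), np.random.rand(200, 64)
mean, W = fit_zca(X_train, epsilon=0.1)
X_train_white = apply_zca(X_train, mean, W)
X_test_white = apply_zca(X_test, mean, W)
</pre>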

== Large Images ==

For large images, PCA/ZCA based whitening methods are impractical as the covariance matrix is too large. For these cases, we defer to 1/f-whitening methods (more details to come).

== Standard Pipelines ==

In this section, we describe several "standard pipelines" that have worked well for some datasets:

=== Natural Grey-scale Images ===

Since grey-scale images have the stationarity property, we usually first remove the mean-component from each data example separately (remove DC). After this step, PCA/ZCA whitening is often employed with a value of <tt>epsilon</tt> set large enough to low-pass filter the data.
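
Sketching this pipeline in numpy, reusing the <tt>fit_zca</tt>/<tt>apply_zca</tt> helpers from the whitening sketch above (assumed to be in scope; data and shapes are illustrative):

<pre>
import numpy as np

# Hypothetical batch of flattened grey-scale patches, one per row.
X = np.random.rand(10000, 144)

# Step 1: remove the DC component of each example (per-example mean).
X = X - X.mean(axis=1, keepdims=True)

# Step 2: ZCA whitening with a relatively large epsilon (low-pass filter),
# reusing the fit_zca/apply_zca sketch defined earlier.
mean, W = fit_zca(X, epsilon=0.1)
X_white = apply_zca(X, mean, W)
</pre>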

=== Color Images ===

For color images, the stationarity property does not hold across color channels. Hence, we usually start by rescaling the data (making sure it is in <math>[0, 1]</math>) and then applying PCA/ZCA whitening with a sufficiently large <tt>epsilon</tt>. Note that it is important to perform feature mean-normalization before computing the PCA transformation.

=== Audio (MFCC/Spectrograms) ===

For audio data (MFCC and spectrograms), each dimension usually has a different scale (variance); the first component of MFCCs, for example, is the DC component and usually has a larger magnitude than the other components. This is especially so when one includes the temporal derivatives (a common practice in audio processing). As a result, the preprocessing usually starts with simple data standardization (zero-mean, unit-variance per data dimension), followed by PCA/ZCA whitening (with an appropriate <tt>epsilon</tt>).

=== MNIST Handwritten Digits ===

The MNIST dataset has pixel values in the range <math>[0, 255]</math>. We thus start with simple rescaling to shift the data into the range <math>[0, 1]</math>. In practice, removing the mean-value per example can also help feature learning. ''Note: While one could also elect to use PCA/ZCA whitening on MNIST if desired, this is not often done in practice.''
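
A minimal end-to-end sketch of this recipe in numpy (the synthetic batch and its shape are illustrative):

<pre>
import numpy as np

# Hypothetical MNIST-like batch: 8-bit pixels, one flattened digit per row.
X = np.random.randint(0, 256, size=(60000, 784), dtype=np.uint8)

# Step 1: simple rescaling from [0, 255] to [0, 1].
X = X.astype(np.float64) / 255.0

# Step 2 (optional but often helpful): per-example mean subtraction.
X = X - X.mean(axis=1, keepdims=True)
</pre>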

== Chinese-English Glossary ==

:归一化 normalization
:白化 whitening
:直流分量 DC component
:局部均值消减 local mean subtraction
:消减归一化 subtractive normalization
:缩放 rescaling
:逐样本均值消减 per-example mean subtraction
:特征标准化 feature standardization
:平稳 stationary
:Mel倒频系数 MFCC
:零均值化 zero-mean
:低通滤波 low-pass filtering
:基于重构的模型 reconstruction based models
:自编码器 autoencoders
:稀疏编码 sparse coding
:受限Boltzman机 RBMs
:k-均值 k-Means
:长尾 long tail
:损失函数 loss function
:正交化 orthogonalization

== Chinese Translators ==

陈磊(lei.chen@operasolutions.com), 王文中(wangwenzhong@ymail.com), 王方(fangkey@gmail.com)
{{Languages|Data_Preprocessing|English}}
