Whitening

which is needed for some algorithms.  If we are training on images,
the raw input is redundant, since adjacent pixel values
are highly correlated.  The goal of whitening is to make the input less redundant; more formally,
our desiderata are that our learning algorithm sees a training input where (i) the features are less
correlated with each other, and (ii) the features all have the same variance.
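
To make these two desiderata concrete, here is a minimal numpy sketch of the PCA-based whitening developed below. It is not code from the tutorial; the data matrix, its dimensions, and all variable names are illustrative assumptions. Rotating the zero-mean data into the eigenvector basis of its covariance decorrelates the features, and rescaling each rotated feature by the square root of its eigenvalue gives it unit variance.

<pre>
import numpy as np

# Illustrative zero-mean data: one training example per column,
# one feature (e.g. a pixel) per row.
x = np.random.randn(64, 10000)
x -= x.mean(axis=1, keepdims=True)

sigma = x @ x.T / x.shape[1]      # covariance matrix of the input features
U, S, _ = np.linalg.svd(sigma)    # columns of U: eigenvectors; S: eigenvalues

x_rot = U.T @ x                            # (i) rotation -> uncorrelated features
x_white = x_rot / np.sqrt(S)[:, None]      # (ii) rescaling -> unit variance

# The covariance of the whitened data is (numerically) the identity matrix.
print(np.allclose(x_white @ x_white.T / x.shape[1], np.eye(64), atol=1e-6))
</pre>
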
Further,
the off-diagonal entries are zero; thus,
<math>\textstyle x_{{\rm rot},1}</math> and <math>\textstyle x_{{\rm rot},2}</math> are uncorrelated, satisfying one of our desiderata
for whitened data (that the features be less correlated).
To make each of our input features have unit variance, we can simply rescale

When implementing PCA whitening or ZCA whitening in practice, sometimes some
of the eigenvalues <math>\textstyle \lambda_i</math> will be numerically close to 0, and thus the scaling
step where we divide by <math>\sqrt{\lambda_i}</math> would involve dividing by a value close to zero; this
may cause the data to blow up (take on large values) or otherwise be numerically unstable.  In practice, we
therefore implement this scaling step using
a small amount of regularization, and add a small constant <math>\textstyle \epsilon</math>
to the eigenvalues before taking their square root and inverse:
<math>\begin{align}
x_{{\rm PCAwhite},i} = \frac{x_{{\rm rot},i}}{\sqrt{\lambda_i + \epsilon}}.
\end{align}</math>
When <math>\textstyle x</math> takes values around <math>\textstyle [-1,1]</math>, a value of <math>\textstyle \epsilon \approx 10^{-5}</math>
might be typical.
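
As an illustrative sketch of this regularized scaling step (the dimensions, variable names, and the particular value of <math>\textstyle \epsilon</math> are assumptions, not part of the tutorial):

<pre>
import numpy as np

epsilon = 1e-5                     # assumed regularization constant

# Same illustrative setup as in the earlier sketch.
x = np.random.randn(64, 10000)
x -= x.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(x @ x.T / x.shape[1])
x_rot = U.T @ x

# Regularized PCA whitening: add epsilon to the eigenvalues before taking
# the square root, so near-zero eigenvalues cannot blow the data up.
x_pca_white = x_rot / np.sqrt(S + epsilon)[:, None]
</pre>
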
For the case of images, adding <math>\textstyle \epsilon</math> here also has the effect of slightly smoothing (or low-pass

performed by ZCA.  This results in a less redundant representation of the input
image, which is then transmitted to your brain.
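
For reference, here is a small sketch of how ZCA-whitened data can be computed from the PCA-whitened data by rotating back with <math>\textstyle U</math>. This is again an illustrative example with assumed dimensions and an assumed <math>\textstyle \epsilon</math>, not code from the tutorial.

<pre>
import numpy as np

epsilon = 1e-5
x = np.random.randn(64, 10000)     # illustrative data, one example per column
x -= x.mean(axis=1, keepdims=True)

U, S, _ = np.linalg.svd(x @ x.T / x.shape[1])
x_pca_white = (U.T @ x) / np.sqrt(S + epsilon)[:, None]

# ZCA whitening rotates the PCA-whitened data back into the original
# coordinate system, keeping the result as close as possible to the raw
# input while its features remain whitened.
x_zca_white = U @ x_pca_white
</pre>
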
