Backpropagation Algorithm

To train our neural network, we can now repeatedly take steps of gradient descent to reduce our cost function <math>\textstyle J(W,b)</math>.
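As a concrete illustration, here is a minimal sketch of this training loop in Python with NumPy, not the tutorial's own code: it runs one-hidden-layer backpropagation with sigmoid activations and the squared-error cost, then takes gradient descent steps on <math>\textstyle W</math> and <math>\textstyle b</math>. The layer sizes, learning rate alpha, and toy data are assumptions for illustration, and weight decay is omitted.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 5))                # 3 input features, 5 examples (toy data)
y = rng.normal(size=(1, 5))                # 1 target value per example

W1 = rng.normal(scale=0.01, size=(4, 3))   # hidden layer of 4 units
b1 = np.zeros((4, 1))
W2 = rng.normal(scale=0.01, size=(1, 4))   # output layer of 1 unit
b2 = np.zeros((1, 1))
alpha = 0.1                                # learning rate (assumed)
m = x.shape[1]                             # number of training examples

for step in range(100):
    # Forward pass: compute the activations of each layer.
    a1 = sigmoid(W1 @ x + b1)
    a2 = sigmoid(W2 @ a1 + b2)

    # Backward pass: delta terms for the cost (1/2)||a2 - y||^2,
    # using sigmoid'(z) = a * (1 - a).
    d2 = (a2 - y) * a2 * (1 - a2)
    d1 = (W2.T @ d2) * a1 * (1 - a1)

    # One gradient descent step on the gradients averaged over examples.
    W2 -= alpha * (d2 @ a1.T) / m
    b2 -= alpha * d2.mean(axis=1, keepdims=True)
    W1 -= alpha * (d1 @ x.T) / m
    b1 -= alpha * d1.mean(axis=1, keepdims=True)

Each pass through the loop is one gradient descent step: the deltas give the partial derivatives of <math>\textstyle J(W,b)</math> with respect to each parameter, and repeating the update drives the cost down.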
