Backpropagation Algorithm
From Ufldl
To train our neural network, we can now repeatedly take steps of gradient descent to reduce our cost function <math>\textstyle J(W,b)</math>.
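The repeated gradient-descent update can be sketched as follows. This is a minimal illustration, not the tutorial's implementation: a scalar stand-in cost <math>\textstyle J(\theta) = (\theta - 3)^2</math> plays the role of <math>\textstyle J(W,b)</math>, and its analytic gradient stands in for the gradients that backpropagation would compute for the network's parameters.

```python
# Hedged sketch of repeated gradient descent on a cost J.
# Here J(theta) = (theta - 3)^2 is a toy stand-in; in the tutorial,
# theta would be the parameters (W, b) and the gradient would be
# computed by backpropagation.

def grad_J(theta):
    """Gradient of the stand-in cost J(theta) = (theta - 3)^2."""
    return 2.0 * (theta - 3.0)

def gradient_descent(theta0, alpha=0.1, steps=100):
    """Repeatedly apply the update theta := theta - alpha * dJ/dtheta."""
    theta = theta0
    for _ in range(steps):
        theta -= alpha * grad_J(theta)
    return theta

theta_star = gradient_descent(theta0=0.0)
print(theta_star)  # converges toward the minimizer theta = 3
```

Each iteration moves the parameters a small step (of size `alpha`, the learning rate) opposite the gradient, which is exactly the update rule applied to <math>\textstyle W</math> and <math>\textstyle b</math> in the tutorial.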
+ | |||
+ | |||
+ | {{Sparse_Autoencoder}} |