Backpropagation Algorithm

Translator: 王方, email: fangkey@gmail.com, Sina Weibo: @GuitarFang
Proofreader: 林锋, email: xlfg@yeah.net, Sina Weibo: @大黄蜂的思索
:[Original text]:
Suppose we have a fixed training set <math>\{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \}</math> of <math>m</math> training examples. We can train our neural network using batch gradient descent. In detail, for a single training example <math>(x,y)</math>, we define the cost function with respect to that single example to be:

<math>J(W,b; x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2.</math>
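To make the per-example cost concrete, here is a minimal sketch in Python/NumPy, not the tutorial's own code: it computes <math>J(W,b;x,y)</math> for a hypothetical one-hidden-layer network with sigmoid activations. The layer sizes, parameter names (W1, b1, W2, b2), and helper functions are illustrative assumptions, not part of the original text.

<pre>
import numpy as np

def sigmoid(z):
    # Elementwise logistic sigmoid activation.
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2, x):
    # h_{W,b}(x): forward pass through one hidden layer and an output layer.
    # (This hypothetical architecture is an assumption for illustration.)
    a1 = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ a1 + b2)

def cost_single(W1, b1, W2, b2, x, y):
    # Cost with respect to a single training example (x, y):
    # J(W, b; x, y) = (1/2) * ||h_{W,b}(x) - y||^2
    diff = forward(W1, b1, W2, b2, x) - y
    return 0.5 * np.dot(diff, diff)

# Toy usage: random parameters, one 2-dimensional example with a scalar target.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
x, y = np.array([0.5, -1.0]), np.array([1.0])
print(cost_single(W1, b1, W2, b2, x, y))
</pre>

Batch gradient descent would then average this single-example cost over all <math>m</math> training examples and update the parameters along the negative gradient of that average.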
