Neural Networks
Consider a supervised learning problem where we have access to labeled training examples <math>(x^{(i)}, y^{(i)})</math>. Neural networks give a way of defining a complex, non-linear form of hypotheses <math>h_{W,b}(x)</math>, with parameters <math>W,b</math> that we can fit to our data.

To describe neural networks, we will begin by describing the simplest possible neural network, one which comprises a single "neuron." We will use the following diagram to denote a single neuron:

[[Image:SingleNeuron.png|400px|center]]

This "neuron" is a computational unit that takes as input <math>x_1, x_2, x_3</math> (and a +1 intercept term), and outputs <math>h_{W,b}(x) = f(W^T x + b) = f\left(\textstyle\sum_{i=1}^3 W_i x_i + b\right)</math>, where <math>f : \Re \mapsto \Re</math> is called the '''activation function'''. In these notes, we will choose <math>f(\cdot)</math> to be the sigmoid function:

:<math>
f(z) = \frac{1}{1+\exp(-z)}.
</math>

Thus, our single neuron corresponds exactly to the input-output mapping defined by logistic regression.

Although these notes will use the sigmoid function, it is worth noting that another common choice for <math>f</math> is the hyperbolic tangent, or tanh, function:

:<math>
f(z) = \tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}.
</math>
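To make the single-neuron computation concrete, here is a minimal Python sketch (not part of the original notes) that evaluates <math>h_{W,b}(x) = f(W^T x + b)</math> for a three-input neuron and plots the sigmoid and tanh activations. The helper names <code>sigmoid</code> and <code>neuron_output</code>, the example values of <code>W</code>, <code>b</code>, and <code>x</code>, and the availability of NumPy and Matplotlib are all assumptions for illustration:

<source lang="python">
# Illustrative sketch only: names and example values are not from the notes.
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    """Sigmoid activation: f(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(W, b, x, f=sigmoid):
    """Single neuron: h_{W,b}(x) = f(W^T x + b)."""
    return f(np.dot(W, x) + b)

# Example: a neuron with three inputs, as in the diagram above.
W = np.array([0.5, -1.0, 2.0])  # hypothetical weights
b = 0.1                          # hypothetical intercept term
x = np.array([1.0, 2.0, 3.0])   # hypothetical input
print(neuron_output(W, b, x))           # with sigmoid activation
print(neuron_output(W, b, x, np.tanh))  # with tanh activation

# Plot the sigmoid and tanh activation functions.
z = np.linspace(-5, 5, 200)
plt.plot(z, sigmoid(z), label="sigmoid")
plt.plot(z, np.tanh(z), label="tanh")
plt.xlabel("z")
plt.ylabel("f(z)")
plt.legend()
plt.show()
</source>

The plots make the key difference visible: the sigmoid squashes its input into the range <math>(0,1)</math>, while tanh outputs values in <math>(-1,1)</math>.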