Return the fit, the weight vector 'w'.
Show the flaw by printing the error message.
the method where the error occurred
the error message
Minimize the error in the prediction by adjusting the weight vector 'w'. The error 'e' is simply the difference between the target value 'y' and the predicted value 'z'. Minimize 1/2 of the dot product of error with itself using gradient-descent. The gradient is '-x.t * (e * z * (_1 - z))', so move in the opposite direction of the gradient.
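To make the update rule concrete, here is a minimal sketch of one gradient-descent epoch written with plain Scala arrays rather than ScalaTion's vector and matrix classes; the object and method names and the learning rate 'eta' are assumptions for illustration, not part of the actual API.

    object TrainStepSketch {
      def sigmoid (t: Double): Double = 1.0 / (1.0 + math.exp (-t))

      // One gradient-descent epoch, updating weight vector w in place.
      // x: m-by-n inputs (x(i)(0) must be 1.0 for the bias), y: m target values,
      // eta: learning rate (an illustrative choice).
      def step (x: Array [Array [Double]], y: Array [Double],
                w: Array [Double], eta: Double = 0.5): Unit = {
        val m = x.length
        val n = w.length
        val z     = Array.tabulate (m) (i => sigmoid ((0 until n).map (j => x(i)(j) * w(j)).sum))
        val delta = Array.tabulate (m) (i => (y(i) - z(i)) * z(i) * (1.0 - z(i)))     // e * z * (1 - z)
        for (j <- 0 until n)                                                          // gradient = -x.t * delta,
          w(j) += eta * (0 until m).map (i => x(i)(j) * delta(i)).sum                 // so step in the +x.t * delta direction
      }
    }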
Given several new input vectors stored as rows in a matrix 'zi', predict all output/response values, returned as the vector 'zo'.
the matrix containing row vectors to use for prediction
Given a new input vector 'zi', predict the output/response value 'zo'.
the new input vector
Given a new discrete data vector z, predict the y-value of f(z).
the vector to use for prediction
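All of the prediction methods above reduce to applying the activation function to the dot product of the weight vector with the new input. A minimal sketch, under the same plain-array assumptions as the training sketch above (the names 'predict' and 'predictAll' are illustrative, not the real signatures):

    object PredictSketch {
      def sigmoid (t: Double): Double = 1.0 / (1.0 + math.exp (-t))

      // Predict the response for one input vector: zo = f (w dot zi), with zi(0) = 1.0 for the bias.
      def predict (zi: Array [Double], w: Array [Double]): Double =
        sigmoid ((zi zip w).map { case (a, b) => a * b }.sum)

      // Predict responses for several input vectors stored as the rows of matrix zi.
      def predictAll (zi: Array [Array [Double]], w: Array [Double]): Array [Double] =
        zi.map (row => predict (row, w))
    }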
Set the initial weight vector 'w' with values in (0, 1) before training.
the random number stream to use
Set the initial weight vector 'w' manually before training.
the initial weights for w
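A rough sketch of the two initialization options, using scala.util.Random in place of the library's random number stream; the function names here are hypothetical:

    object WeightInitSketch {
      import scala.util.Random

      // Random initial weights, one per input column (column 0 corresponds to the bias).
      def initWeights (n: Int, stream: Random = new Random (0)): Array [Double] =
        Array.fill (n) (stream.nextDouble ())

      // Manually supplied initial weights, e.g., to reproduce a known fit.
      def setWeights (w0: Array [Double]): Array [Double] = w0.clone ()
    }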
Given training data x and y, fit the weight vector w.
The 'Perceptron' class supports single-valued 2-layer (input and output) Neural-Networks. Given several input vectors and output values (training data), fit the weights 'w' connecting the layers, so that for a new input vector 'zi', the net can predict the output value 'zo', i.e., 'zi --> zo = f (w dot zi)'. Note, w0 is treated as the bias, so x0 must be 1.0.
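Putting the pieces together, a hypothetical end-to-end use of the sketches above might look as follows; the AND-gate training data, the number of epochs, and the initial weights are illustrative choices only.

    object PerceptronSketchTest {
      import TrainStepSketch.step
      import PredictSketch.predict

      def main (args: Array [String]): Unit = {
        // Training data for a logical AND gate: column 0 is fixed at 1.0 (the bias input x0).
        val x = Array (Array (1.0, 0.0, 0.0),
                       Array (1.0, 0.0, 1.0),
                       Array (1.0, 1.0, 0.0),
                       Array (1.0, 1.0, 1.0))
        val y = Array (0.0, 0.0, 0.0, 1.0)

        val w = Array.fill (x(0).length) (0.1)          // simple fixed initial weights
        for (_ <- 1 to 10000) step (x, y, w)            // repeat gradient-descent epochs

        val zi = Array (1.0, 1.0, 1.0)                  // new input vector with x0 = 1.0
        println (s"zi --> zo = ${predict (zi, w)}")     // predicted response should approach 1.0
      }
    }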