GradientDescent_Mo2

scalation.optimization.GradientDescent_Mo2

The GradientDescent_Mo2 class provides functions to optimize the parameters (weights and biases) of neural networks with various numbers of layers. This optimizer implements gradient descent with momentum.
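
In gradient descent with momentum, each step follows a running direction vector p that blends past gradients with the current gradient, damping oscillation across steep directions and accelerating travel along consistent ones. The following is a minimal sketch of one such step, assuming ScalaTion's VectorD; the exact update and the momentum coefficient (beta below) used by GradientDescent_Mo2 are defined in its source and may differ.

    import scalation.mathstat.VectorD

    // One momentum step (illustrative): blend the previous direction p with
    // the current gradient, then move against the blended direction.
    def momentumStep (x: VectorD, p: VectorD, gr: VectorD => VectorD,
                      alpha: Double, beta: Double): (VectorD, VectorD) =
        val p2 = p * beta + gr(x)          // accumulate exponentially weighted gradients
        val x2 = x - p2 * alpha            // take a step of size alpha along -p2
        (x2, p2)                           // return the new point and new direction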

Value parameters

f

the vector-to-scalar objective function

gr

the vector-to-gradient function

Attributes

Supertypes
trait StoppingRule
trait Minimize
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def solve(x0: VectorD, α: Double): FuncVec

Solve the Non-Linear Programming (NLP) problem by starting at x0 and iteratively moving down in the search space to a minimal point. Return the optimal point/vector x and its objective function value.

Value parameters

x0

the starting point

α

the initial step size
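
A hedged usage sketch (not part of the original Scaladoc): minimize a simple quadratic whose minimum lies at (3, -1). The constructor call assumes only the two documented parameters f and gr; any further constructor parameters are assumed to have defaults.

    import scalation.mathstat.VectorD
    import scalation.optimization.GradientDescent_Mo2

    // Objective f(x) = (x_0 - 3)^2 + (x_1 + 1)^2 and its analytic gradient.
    val f  = (x: VectorD) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
    val gr = (x: VectorD) => VectorD (2.0 * (x(0) - 3.0), 2.0 * (x(1) + 1.0))

    val opt = new GradientDescent_Mo2 (f, gr)
    val sol = opt.solve (VectorD (0.0, 0.0), 0.1)   // start at the origin, step size α = 0.1
    println (s"solve returned $sol")                // expect a point near (3, -1)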

Inherited methods

def getBest: FuncVec

Return the best solution found.

Attributes

Inherited from:
StoppingRule
def stopWhen(loss: Double, x: VectorD): FuncVec

Stop when the loss function (e.g., sse) has been increasing for too many steps. Signal a stopping condition by returning the best parameter vector found so far, else return null.

Value parameters

loss

the current value of the loss function (e.g., sum of squared errors)

x

the current value of the parameter vector

Attributes

Inherited from:
StoppingRule
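
For illustration, a sketch of how a descent loop might consult stopWhen. This is a hypothetical caller, not the body of solve: step, f, and maxIt are stand-in names, the code is assumed to run inside a class mixing in StoppingRule, and FuncVec is assumed to pair the objective value with the point.

    var x    = VectorD (0.0, 0.0)                  // current parameter vector
    var best: FuncVec = null                       // set once the rule fires
    var it   = 1
    while it <= maxIt && best == null do
        x    = step (x)                            // one descent step (hypothetical helper)
        best = stopWhen (f(x), x)                  // non-null => loss rose on too many steps
        it  += 1
    end while
    val result = if best == null then (f(x), x) else best   // assumed (value, point) order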