GradientDescent_Adam

scalation.optimization.GradientDescent_Adam

The GradientDescent_Adam class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. It implements the ADAptive Moment estimation (Adam) optimizer.

Value parameters

f

the vector-to-scalar (V2S) objective/loss function

grad

the vector-to-vector (V2V) gradient function, grad f

hparam

the hyper-parameters
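
The following is a minimal usage sketch (not taken from the ScalaTion source), minimizing the convex quadratic f(x) = (x_0 - 3)^2 + (x_1 + 1)^2. It assumes that f and grad may be supplied as plain VectorD => Double and VectorD => VectorD functions, that hparam has a default value, and that a step-size of 0.1 is acceptable; these details are illustrative, not authoritative.

import scalation.mathstat.VectorD
import scalation.optimization.GradientDescent_Adam

@main def adamSketch (): Unit =
    // vector-to-scalar (V2S) objective/loss function f
    val f: VectorD => Double = x =>
        (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)

    // vector-to-vector (V2V) gradient function, grad f
    val grad: VectorD => VectorD = x => VectorD (2.0 * (x(0) - 3.0), 2.0 * (x(1) + 1.0))

    val optimizer = new GradientDescent_Adam (f, grad)      // assumes hparam defaults
    val x0  = VectorD (0.0, 0.0)                            // starting point
    val opt = optimizer.solve (x0, 0.1)                     // optimum returned as a FuncVec
    println (s"solution = $opt")

For this quadratic, the iterates should converge toward x = (3, -1), where f attains its minimum value of 0.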

Attributes

Supertypes
trait StoppingRule
trait Minimize
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def solve(x0: VectorD, α: Double): FuncVec

Solve the Non-Linear Programming (NLP) problem by starting at x0 and iteratively moving down in the search space to a minimal point. Return the optimal point/vector x and its objective function value.

Value parameters

x0

the starting point

α

the step-size/learning rate
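
Each iteration of the Adam method maintains exponentially decaying estimates of the first and second moments of the gradient, bias-corrects them, and uses them to scale the step. The standalone sketch below shows one such update step following the standard published algorithm; it is not necessarily the exact internal code of solve. The name adamStep, the defaults β1 = 0.9, β2 = 0.999, eps = 1e-8, and the assumption that new VectorD (n) allocates a zero vector of dimension n are all illustrative.

import scalation.mathstat.VectorD

def adamStep (x: VectorD, g: VectorD, m: VectorD, v: VectorD, t: Int,
              α: Double = 0.001, β1: Double = 0.9, β2: Double = 0.999,
              eps: Double = 1e-8): (VectorD, VectorD, VectorD) =
    val mNew = new VectorD (x.dim)                      // updated first moment (mean of gradients)
    val vNew = new VectorD (x.dim)                      // updated second moment (uncentered variance)
    val xNew = new VectorD (x.dim)                      // updated parameter vector
    for i <- 0 until x.dim do
        mNew(i) = β1 * m(i) + (1.0 - β1) * g(i)
        vNew(i) = β2 * v(i) + (1.0 - β2) * g(i) * g(i)
        val mHat = mNew(i) / (1.0 - math.pow (β1, t))   // bias-corrected first moment
        val vHat = vNew(i) / (1.0 - math.pow (β2, t))   // bias-corrected second moment
        xNew(i)  = x(i) - α * mHat / (math.sqrt (vHat) + eps)
    (xNew, mNew, vNew)                                  // new point and updated moment estimates
end adamStep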

Inherited methods

Return the best solution found.

Attributes

Inherited from:
StoppingRule
def stopWhen(loss: Double, x: VectorD): FuncVec

Stop when too many steps have the loss function (e.g., sse) increasing. Signal a stopping condition by returning the best parameter vector, else null.

Value parameters

loss

the current value of the loss function (e.g., sum of squared errors)

x

the current value of the parameter vector

Attributes

Inherited from:
StoppingRule
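
The early-stopping behaviour described above can be pictured with the following standalone sketch (not the internal code of StoppingRule): the best (loss, parameter) pair seen so far is retained, and once the loss has risen for more than a fixed number of consecutive steps, that pair is returned to signal a stop; otherwise null is returned. The class name EarlyStopSketch and the patience threshold are assumed names used only for illustration.

import scalation.mathstat.VectorD

class EarlyStopSketch (patience: Int = 10):
    private var bestLoss = Double.MaxValue              // lowest loss seen so far
    private var bestX: VectorD = null                   // parameter vector at the lowest loss
    private var upSteps  = 0                            // consecutive steps with a rising loss

    def stopWhenSketch (loss: Double, x: VectorD): (Double, VectorD) =
        if loss < bestLoss then
            bestLoss = loss                             // new best loss
            bestX    = x                                // remember its parameter vector
            upSteps  = 0                                // reset the rising-loss counter
        else
            upSteps += 1                                // loss failed to improve
        if upSteps > patience then (bestLoss, bestX) else null
end EarlyStopSketch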