The GradientDescent_Adam class provides functions to optimize the parameters (weights and biases) of neural networks with various numbers of layers. It implements the ADAptive Moment estimation (Adam) algorithm.
Value parameters
f
the vector-to-scalar (V2S) objective/loss function
grad
the vector-to-vector (V2V) gradient function, grad f
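As a point of reference, the following is a minimal, self-contained sketch of the Adam update rule that such an optimizer applies at each iteration (Scala 3 syntax). The names beta1, beta2, eta, eps and maxIter are illustrative defaults for this sketch, not the class's actual fields or hyper-parameters.

    // Minimal sketch of the Adam update rule (illustrative; not the class's actual code).
    def adam (f: Array[Double] => Double, grad: Array[Double] => Array[Double],
              x0: Array[Double], maxIter: Int = 100): (Double, Array[Double]) =
        val (beta1, beta2) = (0.9, 0.999)             // decay rates for the moment estimates
        val (eta, eps)     = (0.001, 1e-8)            // step size and smoothing constant
        val x = x0.clone                              // current point, updated in place
        val m = Array.fill (x.length)(0.0)            // first moment (mean of gradients)
        val v = Array.fill (x.length)(0.0)            // second moment (uncentered variance)
        for t <- 1 to maxIter do
            val g = grad (x)                          // V2V gradient at the current point
            for j <- x.indices do
                m(j) = beta1 * m(j) + (1 - beta1) * g(j)
                v(j) = beta2 * v(j) + (1 - beta2) * g(j) * g(j)
                val mHat = m(j) / (1 - math.pow (beta1, t))   // bias-corrected first moment
                val vHat = v(j) / (1 - math.pow (beta2, t))   // bias-corrected second moment
                x(j)   -= eta * mHat / (math.sqrt (vHat) + eps)
        (f(x), x)                                     // objective value and optimal point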
Solve the Non-Linear Programming (NLP) problem by starting at x0 and iteratively moving down in the search space to a minimal point. Return the optimal point/vector x and its objective function value.
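For example, assuming a constructor taking f and grad (as documented above) and a solve method taking the starting point x0, a usage sketch for minimizing a simple quadratic might look like the following. The import paths, the apply-style construction, and the exact form of the returned result are assumptions about the library, not confirmed by this page.

    // Hypothetical usage sketch: minimize f(x) = (x(0) - 3)^2 + (x(1) + 1)^2.
    // Import paths and exact signatures are assumptions, not confirmed by this page.
    import scalation.mathstat.VectorD
    import scalation.optimization.GradientDescent_Adam

    val f    = (x: VectorD) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)   // V2S objective
    val grad = (x: VectorD) => VectorD (2.0 * (x(0) - 3.0), 2.0 * (x(1) + 1.0))            // V2V gradient

    val optimizer = GradientDescent_Adam (f, grad)
    val opt = optimizer.solve (VectorD (0.0, 0.0))     // start the search at x0 = (0, 0)
    println (s"optimal point and objective value: $opt")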