The GradientDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Gradient Descent algorithm. Given a function f and a starting point x0, the algorithm repeatedly computes the gradient at the current point and takes a step in the opposite (steepest descent) direction, iterating until it converges. By default, partial derivatives are estimated using difference quotients; the class assumes exact partial derivative functions are not available unless they are explicitly supplied via the setDerivatives method.
minimize f(x)
dir_k = -gradient(x)
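A minimal, self-contained sketch of the loop just described, not ScalaTion's actual implementation: the gradient is estimated with central difference quotients (the default approach noted above), and a fixed step size stands in for the line search. The FunctionV2S alias and all names below are assumptions for illustration.

```scala
object GradientDescentSketch:

    type FunctionV2S = Array[Double] => Double       // assumed vector-to-scalar alias

    /** Estimate the gradient of f at x using central difference quotients. */
    def gradient(f: FunctionV2S, x: Array[Double], h: Double = 1e-6): Array[Double] =
        Array.tabulate(x.length) { i =>
            val xp = x.clone; xp(i) += h
            val xm = x.clone; xm(i) -= h
            (f(xp) - f(xm)) / (2.0 * h)
        }

    /** Steepest descent: repeat x_{k+1} = x_k + step * dir_k with dir_k = -gradient(x_k)
     *  until the improvement in f falls below toler (or maxIter is reached). */
    def solve(f: FunctionV2S, x0: Array[Double], step: Double = 0.1,
              toler: Double = 1e-10, maxIter: Int = 100000): Array[Double] =
        var x  = x0.clone
        var it = 0
        var improving = true
        while improving && it < maxIter do
            val dir  = gradient(f, x).map(-_)                      // dir_k = -gradient(x)
            val xNew = x.zip(dir).map((xi, di) => xi + step * di)  // take the step
            improving = f(x) - f(xNew) > toler                     // stop on tiny improvement
            if improving then x = xNew
            it += 1
        x

@main def runGDSketch(): Unit =
    import GradientDescentSketch.*
    val f: FunctionV2S = x => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
    println(solve(f, Array(0.0, 0.0)).mkString("[", ", ", "]"))    // ≈ [3.0, -1.0]
```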
Value parameters
exactLS
whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search
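To illustrate the exact vs. inexact distinction, here is a backtracking (Armijo-style) inexact line search sketch; ScalaTion's GoldenLS and WolfeLS are separate classes whose APIs are not shown here, so everything below is an illustrative assumption. An exact search (e.g., golden-section) would instead locate the minimizer along the ray x + a*dir precisely.

```scala
// Inexact (Armijo/backtracking) line search: shrink the step until f decreases
// "sufficiently", rather than finding the exact minimizer along the ray.
def backtrackLS(f: Array[Double] => Double, x: Array[Double], dir: Array[Double],
                a0: Double = 1.0, shrink: Double = 0.5, c: Double = 1e-4): Double =
    val fx    = f(x)
    val slope = -dir.map(d => d * d).sum               // gradient·dir when dir = -gradient
    def at(a: Double) = x.zip(dir).map((xi, di) => xi + a * di)
    var a = a0
    while f(at(a)) > fx + c * a * slope && a > 1e-12 do a *= shrink
    a                                                  // accepted (inexact) step size
```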
Set the partial derivative functions. When available, these functions are both more efficient and more accurate than the default approach of estimating partial derivatives using difference quotients.
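A small sketch of why analytic partials pay off (names and types are illustrative assumptions, not ScalaTion's signatures): a forward difference quotient carries O(h) truncation error plus round-off, while an explicit partial derivative function evaluates exactly.

```scala
val f2: Array[Double] => Double = x => x(0) * x(0) + math.exp(x(1))

// analytic partial derivative functions, one per coordinate
val partials: Array[Array[Double] => Double] = Array(
    x => 2.0 * x(0),                                   // ∂f/∂x0
    x => math.exp(x(1)))                               // ∂f/∂x1

// forward difference quotient for the same partial, for comparison
def diffQuotient(f: Array[Double] => Double, x: Array[Double],
                 i: Int, h: Double = 1e-6): Double =
    val xp = x.clone; xp(i) += h
    (f(xp) - f(x)) / h                                 // O(h) truncation error

@main def comparePartials(): Unit =
    val x = Array(1.0, 2.0)
    println(partials(1)(x))                            // exact: e^2 = 7.38905609893065
    println(diffQuotient(f2, x, 1))                    // agrees to roughly 6 digits
```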
The objective function f plus a weighted penalty based on the constraint function g. Override for constrained optimization and ignore for unconstrained optimization.
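One plausible shape for such a penalized objective, assuming the convention that g(x) <= 0 means feasible (the quadratic penalty form and default weight below are illustrative assumptions, not ScalaTion's code):

```scala
// Penalize only constraint violation: max(0, g(x)) is zero on feasible points,
// so minimizing the penalized objective pushes iterates back toward feasibility.
def fObjSketch(f: Array[Double] => Double, g: Array[Double] => Double,
               weight: Double = 1000.0): Array[Double] => Double =
    x => f(x) + weight * math.pow(math.max(0.0, g(x)), 2)
```

With weight = 0 this reduces to plain f, matching the "ignore for unconstrained optimization" note.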
Solve the following Non-Linear Programming (NLP) problem: min { f(x) | g(x) <= 0 }. To use explicit functions for gradient, replace gradient (fg, x._1 + s) with gradientD (df, x._1 + s). This method uses multiple random restarts.
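A sketch of the multiple-random-restart idea: run a single-start minimizer (e.g., the solve sketch above) from several random points and keep the best local minimum found. The bounds, seed, and restart count below are assumptions.

```scala
import scala.util.Random

def solveWithRestarts(f: Array[Double] => Double,
                      solve1: (Array[Double] => Double, Array[Double]) => Array[Double],
                      dim: Int, restarts: Int = 10,
                      lo: Double = -5.0, hi: Double = 5.0): Array[Double] =
    val rng    = new Random(0)                          // fixed seed for repeatability
    val starts = Seq.fill(restarts)(Array.fill(dim)(lo + (hi - lo) * rng.nextDouble()))
    starts.map(x0 => solve1(f, x0)).minBy(f)            // keep the best local minimum
```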