BFGS
The BFGS class implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton algorithm for solving Non-Linear Programming (NLP) problems. BFGS determines a search direction by deflecting the steepest descent direction vector (the negative gradient) by multiplying it by a matrix that approximates the inverse Hessian. Note: this implementation may be set up to work with the matrix b (the approximate Hessian) or directly with the aHi matrix (the inverse of b).
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
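To make the deflection step concrete, here is a minimal, self-contained sketch of the BFGS inverse-Hessian update and the resulting search direction. It is illustrative only: it uses plain Scala arrays rather than the class's own matrix types, and every name in it is hypothetical.

```scala
// Illustrative sketch only (not this class's code): the BFGS inverse-Hessian
// update and the deflected search direction, using plain arrays.
object BFGSSketch:

  type Vec = Array[Double]
  type Mat = Array[Array[Double]]

  def dot (a: Vec, b: Vec): Double =
    var sum = 0.0
    for i <- a.indices do sum += a(i) * b(i)
    sum

  def matVec (m: Mat, v: Vec): Vec = m.map (row => dot (row, v))

  // H_{k+1} = (I - r s y^T) H_k (I - r y s^T) + r s s^T, with r = 1 / (y^T s),
  // where s = x_{k+1} - x_k and y = gradient_{k+1} - gradient_k
  def updateInvHessian (h: Mat, s: Vec, y: Vec): Mat =
    val n   = s.length
    val r   = 1.0 / dot (y, s)                       // requires curvature y^T s > 0
    val hy  = matVec (h, y)                          // H y (H is symmetric)
    val yhy = dot (y, hy)                            // y^T H y
    Array.tabulate (n, n) ((i, j) =>
      h(i)(j) - r * (s(i) * hy(j) + hy(i) * s(j)) +
        r * (1.0 + r * yhy) * s(i) * s(j))

  // deflect steepest descent: dir = -H * gradient
  def direction (h: Mat, grad: Vec): Vec = matVec (h, grad).map (x => -x)
```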
Value parameters
- exactLS – whether to use an exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) line search
- f – the objective function to be minimized
- g – the constraint function to be satisfied, if any
- ineq – whether the constraint is treated as an inequality (default) or an equality
Attributes
- Companion: object
Members list
Value members
Concrete methods
The objective function f plus a weighted penalty based on the constraint function g.
Value parameters
- x – the coordinate values of the current point
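For intuition, a plausible shape of such a penalized objective is sketched below, assuming a quadratic penalty with weight wp; the names fgSketch and wp are hypothetical, not fields of this class.

```scala
// Hypothetical sketch of a penalized objective for g(x) <= 0; the weight wp
// and the quadratic penalty form are assumptions, not this class's code.
val wp = 1000.0                                      // assumed penalty weight

def fgSketch (f: Array[Double] => Double, g: Array[Double] => Double)
             (x: Array[Double]): Double =
  val viol = math.max (0.0, g (x))                   // only g(x) > 0 violates g(x) <= 0
  f (x) + wp * viol * viol                           // objective plus weighted penalty
```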
Perform an exact GoldenSectionLS or inexact WolfeLS line search. Search in direction dir, returning the distance z to move in that direction. Default to …
Value parameters
- dir – the direction to move in
- step – the initial step size
- x – the current point
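For reference, the golden-section idea can be sketched compactly as below; this is illustrative only, not the GoldenSectionLS implementation, and the bracket [0, zMax] is an assumption.

```scala
// Illustrative golden-section line search: minimize phi(z) = f(x + z * dir)
// over an assumed bracket [0, zMax]; not ScalaTion's GoldenSectionLS.
def goldenLS (phi: Double => Double, zMax: Double, eps: Double = 1e-8): Double =
  val gr = (math.sqrt (5.0) - 1.0) / 2.0             // inverse golden ratio, about 0.618
  var a = 0.0
  var b = zMax
  while b - a > eps do
    val c = b - gr * (b - a)
    val d = a + gr * (b - a)
    if phi (c) < phi (d) then b = d                  // minimum lies in [a, d]
    else a = c                                       // minimum lies in [c, b]
  (a + b) / 2.0                                      // the distance z to move along dir
```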
Set the partial derivative functions. If these functions are available, they are more efficient and more accurate than estimating the values using difference quotients (the default approach).
Value parameters
- grad – the gradient as explicit functions for the partials
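To illustrate why explicit partials help, the sketch below contrasts a forward-difference quotient with exact partials for a toy function; the function, step size, and names are assumptions.

```scala
// Sketch: difference-quotient estimate vs. explicit partials for the toy
// function f(x) = x0^2 + 2 x1^2 (function, names, and step size are assumed).
val f = (x: Array[Double]) => x(0) * x(0) + 2.0 * x(1) * x(1)

def numPartial (x: Array[Double], i: Int, h: Double = 1e-6): Double =
  val xh = x.clone
  xh(i) += h
  (f (xh) - f (x)) / h                               // forward difference quotient

// explicit partials: exact, and cheaper than repeated function evaluations
val gradF = Array [Array[Double] => Double] (x => 2.0 * x(0), x => 4.0 * x(1))
```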
Use the Gradient Descent algorithm rather than the default BFGS algorithm.
Solve the following Non-Linear Programming (NLP) problem using BFGS: min { f(x) | g(x) <= 0 }. It computes numerical gradients. To use explicit functions for the gradient, add the 'grad' parameter to replace '∇ (fg)(xn)'.
Value parameters
- step_ – the initial step size
- toler – the tolerance
- x0 – the starting point
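A hedged usage sketch follows; the constructor call, the solve signature, and its return value are inferred from the parameter lists above and may not match the actual API exactly.

```scala
// Hedged usage sketch: signatures and the return value of solve are
// assumptions inferred from the parameter lists documented above.
val f  = (x: VectorD) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
val op = new BFGS (f)                                // unconstrained: no g supplied
val x0 = VectorD (0.0, 0.0)                          // the starting point
val xf = op.solve (x0)                               // numerical-gradient solve from x0
println (s"local minimum expected near (3, -1); solve returned $xf")
```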
Solve for an optimum by finding a local optimum close to the starting point/guess 'x0'. This version uses explicit functions for the gradient (partial derivatives).
Value parameters
- grad – the gradient as explicit functions for the partials
- step_ – the initial step size
- toler – the tolerance
- x0 – the starting point/guess
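Continuing the hedged sketch above, explicit partials might be supplied as follows; the gradient representation as an array of functions and the overload accepting it are assumptions.

```scala
// Hedged continuation: an assumed overload of solve taking explicit partials.
val gradF = Array [VectorD => Double] (x => 2.0 * (x(0) - 3.0),   // df/dx0
                                       x => 2.0 * (x(1) + 1.0))   // df/dx1
val xg = op.solve (x0, gradF)                        // gradient-based solve (assumed)
```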
Solve for an optimum by finding a local optimum close to the starting point/guess 'x0'. It computes numerical gradients. To use explicit functions for the gradient, add the 'grad' parameter to replace '∇ (fg)(xn)'. This version uses one of the line search algorithm implementations developed for L-BFGS (except for BacktrackingOrthantWise, which is currently NOT supported). More details can be found in LBFGSLineSearchAlg.
Value parameters
- lSearchAlg – the LBFGSLineSearchAlg representing the line search algorithm chosen for the optimization; cannot be BacktrackingOrthantWise, which is not currently supported
- lSearchPrms – the LBFGSLineSearchPrms representing parameters that control line search execution
- step_ – the initial step size
- toler – the tolerance
- x0 – the starting point/guess
Solve for an optimum by finding a local optimum close to the starting point/guess 'x0'. This version uses explicit functions for the gradient (partial derivatives) and one of the line search algorithm implementations developed for L-BFGS (except for BacktrackingOrthantWise, which is currently NOT supported). More details can be found in LBFGSLineSearchAlg.
Value parameters
- grad – the gradient as explicit functions for the partials
- lSearchAlg – the LBFGSLineSearchAlg representing the line search algorithm chosen for the optimization; cannot be BacktrackingOrthantWise, which is not currently supported
- lSearchPrms – the LBFGSLineSearchPrms representing parameters that control line search execution
- step_ – the initial step size
- toler – the tolerance
- x0 – the starting point/guess
Inherited methods
Adds a new multidimensional point to the path being monitored.
Value parameters
- x – the data point to be added to the path being monitored
Attributes
- Inherited from: PathMonitor
Clears the current path being monitored.
Returns a deep copy of the data path being monitored.
Attributes
- Returns: ArrayBuffer[VectorD], a deep copy of the data path being monitored
- Inherited from: PathMonitor
Solve the following Non-Linear Programming (NLP) problem: min { f(x) | g(x) <= 0 }. To use explicit functions for the gradient, replace gradient (fg, x._1 + s) with gradientD (df, x._1 + s). This method uses multiple random restarts.
Value parameters
- n – the dimensionality of the search space
- step_ – the initial step size
- toler – the tolerance
Attributes
- Inherited from: Minimizer
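The restart strategy itself is straightforward; below is a self-contained sketch of the idea, with all names and the sampling range being illustrative assumptions rather than the Minimizer trait's code.

```scala
// Illustrative sketch of multiple random restarts: run a local solver from
// several random starting points and keep the best result (names assumed).
import scala.util.Random

def multiRestart (n: Int, solveFrom: Array[Double] => (Array[Double], Double),
                  restarts: Int = 10, seed: Long = 0L): (Array[Double], Double) =
  val rng = new Random (seed)
  (1 to restarts)
    .map { _ =>
      val x0 = Array.fill (n)(rng.between (-5.0, 5.0))   // random starting point
      solveFrom (x0)                                     // (local optimum, f value)
    }
    .minBy (_._2)                                        // best (lowest) objective
```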