class QuasiNewton extends Minimizer with Error
The QuasiNewton class implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS)
Quasi-Newton algorithm for solving Non-Linear Programming (NLP) problems.
BFGS determines a search direction by deflecting the steepest descent direction
vector (opposite the gradient) by multiplying it by a matrix that approximates
the inverse Hessian. Note that this implementation may be set up to work with the
matrix 'b' (approximate Hessian) or directly with the 'binv' matrix (the inverse of 'b').

minimize f(x) subject to g(x) <= 0   [ optionally g(x) == 0 ]
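As a quick orientation (a textbook sketch of the BFGS step, not a line-by-line account of this class), each iteration deflects the negative gradient by the current inverse-Hessian approximation 'binv' and then performs a line search along the resulting direction:

```latex
% Textbook BFGS iteration (sketch): x_k is the current point, B_k^{-1} is the
% inverse-Hessian approximation held in 'binv', and alpha_k comes from the line search.
\[
  d_k = -\, B_k^{-1} \nabla f(x_k), \qquad
  x_{k+1} = x_k + \alpha_k\, d_k
\]
```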
Linear Supertypes
- Error
- Minimizer
- AnyRef
- Any
Instance Constructors
- new QuasiNewton(f: FunctionV2S, g: FunctionV2S = null, ineq: Boolean = true, exactLS: Boolean = false)
- f
the objective function to be minimized
- g
the constraint function to be satisfied, if any
- ineq
whether the constraint is treated as inequality (default) or equality
- exactLS
whether to use exact (e.g., 'GoldenLS') or inexact (e.g., 'WolfeLS') Line Search
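A minimal usage sketch for this constructor and 'solve' (a hypothetical example; the package names below assume a ScalaTion 1.x layout with 'scalation.linalgebra.VectorD' and 'scalation.minima.QuasiNewton', so adjust the imports to your version):

```scala
import scalation.linalgebra.VectorD
import scalation.minima.QuasiNewton

object QuasiNewtonSketch extends App
{
    // unconstrained objective: f(x) = (x0 - 3)^2 + (x1 + 1)^2, minimum at (3, -1)
    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)

    val optimizer = new QuasiNewton (f)          // no constraint, inexact (Wolfe) line search
    val x0   = VectorD (0.0, 0.0)                // starting point
    val xmin = optimizer.solve (x0)              // run BFGS from x0
    println (s"xmin = $xmin, f(xmin) = ${f(xmin)}")
}
```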
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- val EPSILON: Double
- Attributes
- protected
- Definition Classes
- Minimizer
- val MAX_ITER: Int
- Attributes
- protected
- Definition Classes
- Minimizer
- val STEP: Double
- Attributes
- protected
- Definition Classes
- Minimizer
- val TOL: Double
- Attributes
- protected
- Definition Classes
- Minimizer
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native() @HotSpotIntrinsicCandidate()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def fg(x: VectorD): Double
The objective function f plus a weighted penalty based on the constraint function g.
- x
the coordinate values of the current point
- Definition Classes
- QuasiNewton → Minimizer
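The exact weighting used by 'fg' is not spelled out on this page; the sketch below shows one common quadratic-penalty formulation for an inequality constraint, as an illustration only (all names and the weight are hypothetical):

```scala
import scalation.linalgebra.VectorD

object PenaltySketch extends App
{
    val f: VectorD => Double = x => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
    val g: VectorD => Double = x => x(0) + x(1) - 1.0          // feasible when g(x) <= 0
    val weight = 1000.0                                        // hypothetical penalty weight

    // add a penalty only when the inequality constraint is violated (g(x) > 0)
    def fgSketch (x: VectorD): Double = f(x) + weight * math.pow (math.max (0.0, g(x)), 2.0)

    println (fgSketch (VectorD (2.0, 2.0)))                    // infeasible point: f(x) plus penalty
    println (fgSketch (VectorD (0.0, 0.0)))                    // feasible point: just f(x)
}
```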
- final def flaw(method: String, message: String): Unit
- Definition Classes
- Error
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @HotSpotIntrinsicCandidate()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @HotSpotIntrinsicCandidate()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def lineSearch(x: VectorD, dir: VectorD, step: Double = STEP): Double
Perform an exact 'GoldenSectionLS' or inexact 'WolfeLS' Line Search. Search in direction 'dir', returning the distance 'z' to move in that direction. Defaults to the inexact ('WolfeLS') Line Search.
- x
the current point
- dir
the direction to move in
- step
the initial step size
- Definition Classes
- QuasiNewton → Minimizer
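Continuing the hypothetical sketch above (same 'optimizer'), 'lineSearch' can also be called directly; it returns the distance to move along the given direction:

```scala
val x   = VectorD (0.0, 0.0)                     // current point
val dir = VectorD (1.0, 0.0)                     // search along the x0 axis
val z   = optimizer.lineSearch (x, dir)          // distance to move in direction dir
println (s"step length z = $z")
```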
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @HotSpotIntrinsicCandidate()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @HotSpotIntrinsicCandidate()
- def setDerivatives(partials: Array[FunctionV2S]): Unit
Set the partial derivative functions. If these functions are available, they are more efficient and more accurate than estimating the values using difference quotients (the default approach).
- partials
the array of partial derivative functions
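Continuing the hypothetical sketch above, exact partials for f(x) = (x0 - 3)^2 + (x1 + 1)^2 might be supplied like this (note that, per the 'solve' description below, the solver may need to be switched from 'gradient' to 'gradientD' to actually use them):

```scala
// exact partial derivatives of f, in the same order as the coordinates
optimizer.setDerivatives (Array (
    (x: VectorD) => 2.0 * (x(0) - 3.0),          // df/dx0
    (x: VectorD) => 2.0 * (x(1) + 1.0)))         // df/dx1
```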
- def setSteepest(): Unit
Use the Gradient Descent algorithm rather than the default BFGS algorithm.
- def solve(x0: VectorD, step_: Double = STEP, toler: Double = TOL): VectorD
Solve the following Non-Linear Programming (NLP) problem using BFGS: min { f(x) | g(x) <= 0 }. To use explicit functions for the gradient, replace 'gradient (fg, x._1 + s)' with 'gradientD (df, x._1 + s)'.
- x0
the starting point
- step_
the initial step size
- toler
the tolerance
- Definition Classes
- QuasiNewton → Minimizer
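A hypothetical constrained variant of the earlier sketch, min { f(x) | g(x) <= 0 } with g(x) = x0 + x1 - 1 (the unconstrained minimizer (3, -1) violates this constraint, so the constraint is active):

```scala
import scalation.linalgebra.VectorD
import scalation.minima.QuasiNewton

object ConstrainedSketch extends App
{
    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
    def g (x: VectorD): Double = x(0) + x(1) - 1.0            // require x0 + x1 <= 1

    val optimizer = new QuasiNewton (f, g)                    // ineq = true by default
    val xmin = optimizer.solve (VectorD (0.0, 0.0))           // penalized BFGS from the origin
    println (s"xmin = $xmin, g(xmin) = ${g(xmin)}")           // g(xmin) should be near or below 0
}
```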
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- def updateBinv(s: VectorD, y: VectorD): Unit
Update the 'binv' matrix, which is used to deflect -gradient to a better search direction than steepest descent (-gradient). Compute the 'binv' matrix directly using the Sherman–Morrison formula.
- s
the step vector (next point - current point)
- y
the difference in the gradients (next - current)
- See also
http://en.wikipedia.org/wiki/BFGS_method
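For reference, the standard BFGS inverse-Hessian update (the form given on the linked Wikipedia page, with 's' the step vector and 'y' the gradient difference); whether the code applies exactly this algebraic rearrangement is not shown here:

```latex
\[
  B_{k+1}^{-1} = B_k^{-1}
    + \frac{\bigl(s_k^{\top} y_k + y_k^{\top} B_k^{-1} y_k\bigr)\, s_k s_k^{\top}}{\bigl(s_k^{\top} y_k\bigr)^{2}}
    - \frac{B_k^{-1} y_k s_k^{\top} + s_k y_k^{\top} B_k^{-1}}{s_k^{\top} y_k}
\]
```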
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated