class ConjugateGradient extends Minimizer with Error

The ConjugateGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.

    dir_k = -gradient (x) + beta * dir_{k-1}

The problem solved is

    minimize f(x) subject to g(x) <= 0    [ optionally g(x) == 0 ]
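A minimal usage sketch, not taken from the source: it assumes ScalaTion's scalation.minima and scalation.linalgebra package layout, a VectorD dimension constructor, and a made-up quadratic objective.

    import scalation.linalgebra.VectorD
    import scalation.minima.ConjugateGradient

    // Hypothetical example: minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2 + 1
    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0) + 1.0

    val optimizer = new ConjugateGradient (f)      // unconstrained, exact line search by default
    val x0  = new VectorD (2)                      // starting point (0, 0), assumed zero-vector constructor
    val opt = optimizer.solve (x0)                 // run PR-CG from x0
    println ("solution x = " + opt + ", f(x) = " + f (opt))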
Linear Supertypes

- Error, Minimizer, AnyRef, Any
Instance Constructors

- new ConjugateGradient(f: FunctionV2S, g: FunctionV2S = null, ineq: Boolean = true, exactLS: Boolean = true)
  - f: the objective function to be minimized
  - g: the constraint function to be satisfied, if any
  - ineq: whether the constraint function must satisfy an inequality (g(x) <= 0) or an equality (g(x) == 0)
  - exactLS: whether to use an exact (e.g., GoldenSectionLS) or inexact (e.g., WolfeLS) line search
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- val EPSILON: Double
  - Attributes: protected
  - Definition Classes: Minimizer
- val MAX_ITER: Int
  - Attributes: protected
  - Definition Classes: Minimizer
- val STEP: Double
  - Attributes: protected
  - Definition Classes: Minimizer
- val TOL: Double
  - Attributes: protected
  - Definition Classes: Minimizer
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def beta(gr1: VectorD, gr2: VectorD): Double
  Compute the beta function using the Polak-Ribiere (PR) technique. The function determines how much of the prior direction is mixed in with -gradient.
  - gr1: the gradient at the current point
  - gr2: the gradient at the next point
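For reference, the standard Polak-Ribiere weighting can be sketched as below; whether the implementation floors the value at zero (the PR+ variant) is an assumption, not stated on this page.

    // Polak-Ribiere weighting: beta = gr2 . (gr2 - gr1) / (gr1 . gr1)
    def betaPR (gr1: VectorD, gr2: VectorD): Double =
        (gr2 dot (gr2 - gr1)) / (gr1 dot gr1)      // some variants use max (0, beta), i.e., PR+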
- def clone(): AnyRef
  - Attributes: protected[java.lang]
  - Definition Classes: AnyRef
  - Annotations: @native() @throws( ... )
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def fg(x: VectorD): Double
  The objective function f plus a weighted penalty based on the constraint function g.
  - x: the coordinate values of the current point
  - Definition Classes: ConjugateGradient → Minimizer
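The page does not give the penalty form, so the following is only a plausible sketch of a weighted-penalty formulation; the WEIGHT constant and the squared violation term are assumptions, not the class's actual implementation.

    // Hypothetical penalty form: add a weighted term whenever the constraint g is violated
    def fgSketch (f: VectorD => Double, g: VectorD => Double, ineq: Boolean, x: VectorD): Double =
    {
        val WEIGHT = 1000.0                                // hypothetical penalty weight
        val viol   = if (g == null) 0.0
                     else if (ineq) math.max (g (x), 0.0)  // inequality g(x) <= 0
                     else math.abs (g (x))                 // equality   g(x) == 0
        f (x) + WEIGHT * viol * viol
    } // fgSketch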
- def finalize(): Unit
  - Attributes: protected[java.lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def flaw(method: String, message: String): Unit
  - Definition Classes: Error
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- def lineSearch(x: VectorD, dir: VectorD, step: Double = STEP): Double
  Perform an exact 'GoldenSectionLS' or inexact 'WolfeLS' line search. Search in direction 'dir', returning the distance 'z' to move in that direction.
  - x: the current point
  - dir: the direction to move in
  - step: the initial step size
  - Definition Classes: ConjugateGradient → Minimizer
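A hedged sketch of how one step of an iteration might call lineSearch, reusing 'optimizer' and 'x0' from the usage sketch above; the 'grad' helper and the VectorD operators used here are assumptions, not part of this page's API.

    // Hypothetical gradient of f(x) = (x0 - 3)^2 + (x1 + 1)^2 + 1
    def grad (x: VectorD): VectorD = VectorD (2.0 * (x(0) - 3.0), 2.0 * (x(1) + 1.0))

    val dir  = grad (x0) * -1.0                   // steepest descent direction at x0
    val z    = optimizer.lineSearch (x0, dir)     // distance to move along dir
    val xNew = x0 + dir * z                       // next point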
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def setDerivatives(partials: Array[FunctionV2S]): Unit
  Set the partial derivative functions. If these functions are available, they are more efficient and more accurate than estimating the values using difference quotients (the default approach).
  - partials: the array of partial derivative functions
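Continuing the hypothetical quadratic objective from the usage sketch above, analytic partials could be supplied as follows; this assumes FunctionV2S is an alias for VectorD => Double.

    // Partial derivatives of f(x) = (x0 - 3)^2 + (x1 + 1)^2 + 1
    val df_dx0 = (x: VectorD) => 2.0 * (x(0) - 3.0)
    val df_dx1 = (x: VectorD) => 2.0 * (x(1) + 1.0)
    optimizer.setDerivatives (Array (df_dx0, df_dx1))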
- def solve(x0: VectorD, step: Double = STEP, toler: Double = EPSILON): VectorD
  Solve the Non-Linear Programming (NLP) problem using the PR-CG algorithm. To use explicit functions for the gradient, replace 'gradient (fg, x)' with 'gradientD (df, x)'.
  - x0: the starting point
  - step: the initial step size
  - toler: the tolerance
  - Definition Classes: ConjugateGradient → Minimizer
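A constrained call can be sketched as follows, reusing 'f' from the usage sketch above; the constraint g and the chosen step/toler values are made up for illustration.

    // Hypothetical constraint: x0 + x1 <= 1   (ineq = true by default)
    def g (x: VectorD): Double = x(0) + x(1) - 1.0

    val constrained = new ConjugateGradient (f, g)
    val xc = constrained.solve (new VectorD (2), step = 0.5, toler = 1E-6)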
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: AnyRef → Any
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @throws( ... )