scalation.minima

ConjGradient


class ConjGradient extends Minimizer with Error

Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.

dir_k = -gradient (x_k) + beta * dir_{k-1}

minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
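The loop below sketches the method end to end. It is a self-contained illustration in plain Scala over Array[Double], not scalation code: grad, lineSearch and beta here are simplified stand-ins for the members documented further down (a central-difference gradient, an Armijo backtracking search, and the Polak-Ribiere weight).

    object PRCGSketch extends App {
      type Vec = Array[Double]

      def dot (a: Vec, b: Vec): Double = a.zip (b).map { case (p, q) => p * q }.sum
      def axpy (a: Vec, s: Double, b: Vec): Vec = a.zip (b).map { case (p, q) => p + s * q }  // a + s*b

      // central-difference estimate of the gradient of f at x
      def grad (f: Vec => Double, x: Vec, h: Double = 1e-6): Vec =
        x.indices.map { i =>
          val xp = x.clone; val xm = x.clone
          xp(i) += h; xm(i) -= h
          (f (xp) - f (xm)) / (2.0 * h)
        }.toArray

      // Armijo backtracking: a crude inexact line search along dir
      def lineSearch (f: Vec => Double, x: Vec, g: Vec, dir: Vec): Double = {
        val slope = dot (g, dir)                      // directional derivative at x
        var z = 1.0
        while (f (axpy (x, z, dir)) > f (x) + 1e-4 * z * slope && z > 1e-12) z *= 0.5
        z
      }

      // Polak-Ribiere weight: how much of the prior direction to mix in
      def beta (g1: Vec, g2: Vec): Double = dot (g2, axpy (g2, -1.0, g1)) / dot (g1, g1)

      def minimize (f: Vec => Double, x0: Vec, tol: Double = 1e-6, maxIter: Int = 200): Vec = {
        var x   = x0
        var g1  = grad (f, x)
        var dir = g1.map (-_)                         // first direction: steepest descent
        var k   = 0
        while (math.sqrt (dot (g1, g1)) > tol && k < maxIter) {
          val z  = lineSearch (f, x, g1, dir)         // distance z to move along dir
          x      = axpy (x, z, dir)                   // x_{k+1} = x_k + z * dir_k
          val g2 = grad (f, x)
          dir    = axpy (g2.map (-_), beta (g1, g2), dir)  // dir_k = -gradient + beta * dir_{k-1}
          g1     = g2
          k     += 1
        }
        x
      }

      val f = (x: Vec) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) - 4.0) * (x(1) - 4.0) + 1.0
      println (minimize (f, Array (0.0, 0.0)).mkString ("(", ", ", ")"))  // approx. (3, 4)
    }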

Linear Supertypes
Error, Minimizer, AnyRef, Any

Instance Constructors

  1. new ConjGradient(f: FunctionV2S, g: FunctionV2S = null, ineq: Boolean = true, exactLS: Boolean = true)

    f

    the objective function to be minimized

    g

    the constraint function to be satisfied, if any

    ineq

whether the constraint is an inequality (g(x) <= 0) or an equality (g(x) == 0)

    exactLS

whether to use an exact (e.g., GoldenSectionLS) or inexact (e.g., WolfeLS) line search
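    For instance, an unconstrained problem can be set up and solved as below. This is a hedged sketch: the import paths are assumed and vary across ScalaTion releases, and FunctionV2S is taken to be the vector-to-scalar type VectorD => Double.

        import scalation.linalgebra.VectorD         // assumed location of VectorD
        import scalation.minima.ConjGradient

        object ConjGradientExample extends App {
          // f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1, minimized at (3, 4)
          def f (x: VectorD): Double =
            (x(0) - 3.0) * (x(0) - 3.0) + (x(1) - 4.0) * (x(1) - 4.0) + 1.0

          val optimizer = new ConjGradient (f)        // no g given, so unconstrained
          val x = optimizer.solve (new VectorD (2))   // start the search at the zero vector
          println ("solution x = " + x + ", f(x) = " + f (x))
        }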

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val EPSILON: Double

    Attributes
    protected
    Definition Classes
    Minimizer
  5. val MAX_ITER: Int

    Attributes
    protected
    Definition Classes
    Minimizer
  6. val STEP: Double

    Attributes
    protected
    Definition Classes
    Minimizer
  7. val TOL: Double

    Attributes
    protected
    Definition Classes
    Minimizer
  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. def beta(gr1: VectorD, gr2: VectorD): Double

    Compute the beta function using the Polak-Ribiere (PR) technique. The function determines how much of the prior direction is mixed in with -gradient.

    gr1

    the gradient at the current point

    gr2

    the gradient at the next point
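    Under the usual Polak-Ribiere definition (assumed here; some variants clamp the result at zero to force a steepest-descent restart), the weight computed from these two gradients is

        beta = (gr2 dot (gr2 - gr1)) / (gr1 dot gr1)

    so beta vanishes when successive gradients are nearly identical, reducing the update to pure steepest descent.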

  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  12. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  13. def fg(x: VectorD): Double

    The objective function f plus a weighted penalty based on the constraint function g.

    x

    the coordinate values of the current point

    Definition Classes
    ConjGradient → Minimizer
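    A minimal sketch of such a merit function, assuming a quadratic penalty with a hypothetical weight w (this page does not document the exact weighting scalation applies); for an equality constraint the penalty would apply to any nonzero g(x):

        // f(x) plus w * g(x)^2 whenever the constraint g(x) <= 0 is violated
        def fgSketch [X] (f: X => Double, g: X => Double, w: Double)(x: X): Double = {
          val v = g (x)
          f (x) + (if (v > 0.0) w * v * v else 0.0)
        }
        // e.g., fgSketch ((x: Double) => x * x, (x: Double) => 1.0 - x, 100.0)(0.5)
        // adds a penalty of 100 * 0.5^2 since g(0.5) = 0.5 > 0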
  14. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  15. def flaw(method: String, message: String): Unit

    Show the flaw by printing the error message.

    method

    the method where the error occurred

    message

    the error message

    Definition Classes
    Error
  16. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  17. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  18. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  19. def lineSearch(x: VectorD, dir: VectorD, step: Double = STEP): Double

    Perform an exact (GoldenSectionLS) or inexact (WolfeLS) line search. Search in direction 'dir', returning the distance 'z' to move in that direction.

    x

    the current point

    dir

    the direction to move in

    step

    the initial step size

    Definition Classes
    ConjGradient → Minimizer
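    For the exact case, a compact golden-section search over the step length is sketched below. It illustrates the idea only, is not scalation's GoldenSectionLS, and assumes the objective is unimodal on [0, zMax].

        // minimize phi(z) = f(x + z * dir) for z in [0, zMax] by golden-section search
        def goldenLS (phi: Double => Double, zMax: Double, tol: Double = 1e-8): Double = {
          val r = (math.sqrt (5.0) - 1.0) / 2.0      // golden ratio conjugate, about 0.618
          var a = 0.0; var b = zMax
          while (b - a > tol) {
            val c = b - r * (b - a)                  // interior probe points, c < d
            val d = a + r * (b - a)
            if (phi (c) < phi (d)) b = d else a = c  // keep the bracket containing the min
          }
          (a + b) / 2.0
        }
        // e.g., goldenLS (z => (z - 2.0) * (z - 2.0), 5.0) returns approximately 2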
  20. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  21. final def notify(): Unit

    Definition Classes
    AnyRef
  22. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  23. def setDerivatives(partials: Array[FunctionV2S]): Unit

    Set the partial derivative functions. If these functions are available, they are more efficient and more accurate than estimating the values using difference quotients (the default approach).

    partials

    the array of partial derivative functions
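    For example, exact partials for the objective f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1 used earlier could be supplied as follows (a sketch assuming FunctionV2S = VectorD => Double, reusing the optimizer value from the constructor example above):

        val df = Array [FunctionV2S] ((x: VectorD) => 2.0 * (x(0) - 3.0),
                                      (x: VectorD) => 2.0 * (x(1) - 4.0))
        optimizer.setDerivatives (df)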

  24. def solve(x0: VectorD, step: Double = STEP, toler: Double = EPSILON): VectorD

    Solve the Non-Linear Programming (NLP) problem using the PR-CG algorithm. To use explicit functions for the gradient, replace 'gradient (fg, x)' with 'gradientD (df, x)'.

    x0

    the starting point

    step

    the initial step size

    toler

    the tolerance

    Definition Classes
    ConjGradient → Minimizer
  25. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  26. def toString(): String

    Definition Classes
    AnyRef → Any
  27. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  28. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  29. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
