
scalation.maxima

ConjGradient

class ConjGradient extends Error

The ConjGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.

dir_k = -gradient (x) + beta * dir_{k-1}

maximize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
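
For illustration, the direction update can be sketched in plain Scala as below (a sketch only, not the ScalaTion source; the max (0, beta) clamp is the common PR+ variant and is an assumption here):

    // Sketch of one PR-CG direction update (not the ScalaTion source).
    object PRCGSketch extends App {
      type Vec = Array[Double]

      def dot (a: Vec, b: Vec): Double = a.indices.map (i => a(i) * b(i)).sum

      // dir_k = -gradient (x_k) + beta_k * dir_{k-1}
      def nextDir (gr1: Vec, gr2: Vec, dirPrev: Vec): Vec = {
        val dg   = gr2.indices.map (i => gr2(i) - gr1(i)).toArray     // gr2 - gr1
        val beta = math.max (0.0, dot (gr2, dg) / dot (gr1, gr1))     // PR+ clamp (assumption)
        gr2.indices.map (i => -gr2(i) + beta * dirPrev(i)).toArray
      }

      // gradients of f(x) = -(x0-1)^2 - (x1-2)^2 at two consecutive points
      val gr1 = Array (-2.0, -4.0)                                    // at x = (2, 4)
      val gr2 = Array (-1.0, -2.0)                                    // at x = (1.5, 3)
      println (nextDir (gr1, gr2, Array (2.0, 4.0)).mkString (", "))
    }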

Linear Supertypes
Error, AnyRef, Any

Instance Constructors

  1. new ConjGradient(f: FunctionV2S, g: FunctionV2S = null, ineq: Boolean = true)

    f

    the objective function to be maximized

    g

    the constraint function to be satisfied, if any

    ineq

    whether the constraint function must satisfy inequality or equality
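
A minimal usage sketch (assuming the ScalaTion 1.x layout, where VectorD lives in scalation.linalgebra; adjust imports for your version):

    import scalation.linalgebra.VectorD
    import scalation.maxima.ConjGradient

    object ConjGradientUse extends App {
      // maximize f(x) = -(x0 - 2)^2 - (x1 - 3)^2, unconstrained
      def f (x: VectorD): Double = -(x(0) - 2.0) * (x(0) - 2.0) - (x(1) - 3.0) * (x(1) - 3.0)

      val optimizer = new ConjGradient (f)            // no constraint function g
      val xOpt = optimizer.solve (new VectorD (2))    // start at the zero vector
      println ("optimal x = " + xOpt)                 // expect roughly (2, 3)
    }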

Type Members

  1. type Pair = (VectorD, VectorD)

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def beta(gr1: VectorD, gr2: VectorD): Double

Compute the beta function using the Polak-Ribiere (PR) technique. The function determines how much of the prior direction is mixed in with -gradient.

    gr1

    the gradient at the current point

    gr2

    the gradient at the next point
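
For reference, the standard Polak-Ribiere formula is beta = gr2 . (gr2 - gr1) / (gr1 . gr1). For example, with gr1 = (-2, -4) and gr2 = (-1, -2): gr2 - gr1 = (1, 2), so beta = ((-1)(1) + (-2)(2)) / ((-2)^2 + (-4)^2) = -5/20 = -0.25 (some implementations clamp negative values to zero, falling back to steepest descent).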

  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native() @HotSpotIntrinsicCandidate()
  7. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  8. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  9. def fg(x: VectorD): Double

The objective function f re-scaled by a weighted penalty, if constrained.

    x

    the coordinate values of the current point
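
One plausible formulation (an assumption; the actual ScalaTion weighting may differ):

    // Sketch of a penalty-adjusted objective for maximization (weighting is an assumption).
    val WEIGHT = 1000.0                                // hypothetical penalty weight

    def fgSketch (f: Array[Double] => Double, g: Array[Double] => Double,
                  ineq: Boolean) (x: Array[Double]): Double = {
      val fx = f(x)
      if (g == null) fx                                // unconstrained
      else {
        val gx = g(x)
        if (ineq && gx <= 0.0) fx                      // inequality satisfied: no penalty
        else fx - WEIGHT * gx * gx                     // violated (or equality): subtract penalty
      }
    }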

  10. final def flaw(method: String, message: String): Unit
    Definition Classes
    Error
  11. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. def lineSearch(x: VectorD, dir: VectorD): Double

Perform an inexact (e.g., 'WolfeLS') or exact (e.g., 'GoldenSectionLS') line search in the direction 'dir', returning the distance 'z' to move in that direction.

    x

    the current point

    dir

    the direction to move in
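
For intuition, a bare-bones golden section search maximizing phi(z) = f(x + dir * z) over [0, zMax] could look like this (a sketch only; ScalaTion's 'GoldenSectionLS' and 'WolfeLS' are more careful):

    // Golden section search maximizing phi on [a0, b0]; a sketch that re-evaluates phi freely.
    def goldenMax (phi: Double => Double, a0: Double, b0: Double, tol: Double = 1e-6): Double = {
      val r = (math.sqrt (5.0) - 1.0) / 2.0            // golden ratio conjugate ~ 0.618
      var (a, b) = (a0, b0)
      var c = b - r * (b - a)                          // left interior point
      var d = a + r * (b - a)                          // right interior point
      while (b - a > tol) {
        if (phi (c) > phi (d)) { b = d; d = c; c = b - r * (b - a) }
        else                   { a = c; c = d; d = a + r * (b - a) }
      }
      (a + b) / 2.0                                    // the distance z to move along dir
    }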

  15. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  16. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  17. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  18. def setDerivatives(partials: Array[FunctionV2S]): Unit

Set the partial derivative functions. If these functions are available, they are more efficient and more accurate than estimating the values using difference quotients (the default approach).

    partials

    the array of partial derivative functions
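
For example, exact partials for f(x) = -(x0 - 2)^2 - (x1 - 3)^2 can be supplied as below (a sketch assuming ScalaTion 1.x, where FunctionV2S is the alias VectorD => Double):

    import scalation.linalgebra.VectorD
    import scalation.maxima.ConjGradient

    object PartialsDemo extends App {
      def f (x: VectorD): Double = -(x(0) - 2.0) * (x(0) - 2.0) - (x(1) - 3.0) * (x(1) - 3.0)

      val partials = Array [VectorD => Double] (
        (x: VectorD) => -2.0 * (x(0) - 2.0),           // df/dx0
        (x: VectorD) => -2.0 * (x(1) - 3.0))           // df/dx1

      val optimizer = new ConjGradient (f)
      optimizer.setDerivatives (partials)              // use exact derivatives from here on
      println (optimizer.solve (new VectorD (2)))      // expect roughly (2, 3)
    }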

  19. def setSteepest(): Unit

    Use the Steepest-Descent algorithm rather than the default PR-CG algorithm.

  20. def solve(x0: VectorD): VectorD

Solve the following Non-Linear Programming (NLP) problem using PR-CG: max { f(x) | g(x) <= 0 }. To use explicit functions for gradient, replace 'gradient (fg, x._1 + s)' with 'gradientD (df, x._1 + s)'.

    x0

    the starting point
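
The difference-quotient default mentioned above can be sketched as follows (an illustration, not ScalaTion's 'gradient' routine, which may use a different scheme):

    // Central-difference gradient estimate; a sketch, not ScalaTion's 'gradient'.
    def gradEst (f: Array[Double] => Double, x: Array[Double], h: Double = 1e-6): Array[Double] =
      x.indices.map { i =>
        val xp = x.clone; xp(i) += h                   // x + h * e_i
        val xm = x.clone; xm(i) -= h                   // x - h * e_i
        (f(xp) - f(xm)) / (2.0 * h)                    // central difference quotient
      }.toArray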

  21. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  22. def toString(): String
    Definition Classes
    AnyRef → Any
  23. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  24. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  25. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated
