ConjugateGradient_NoLS

scalation.optimization.ConjugateGradient_NoLS

The ConjugateGradient_NoLS class implements the Polak-Ribiere Conjugate Gradient (PR-CG) algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines the search direction as a weighted combination of the steepest descent direction (the negative gradient) and the previous search direction. The weight is given by the beta function, which in this implementation uses the Polak-Ribiere formula.

dir_k = -∇f(x_k) + beta_k * dir_k-1

min f(x)    where f: R^n -> R

This version does not use a line search algorithm (hence the _NoLS suffix).
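The direction update above can be sketched in a few lines of plain Scala. This is a minimal illustration of the PR-CG recurrence with a fixed step size, not the class's actual implementation: the helper names (dot, axpy, grad), the central-difference gradient, and the use of Array[Double] in place of scalation's VectorD are all assumptions introduced for this sketch.

    // Minimal sketch of the PR-CG recurrence with a fixed step alpha (no line search).
    // Plain Array[Double] stands in for scalation's VectorD.
    object PRCG_Sketch:

        type Vec = Array[Double]

        def dot (a: Vec, b: Vec): Double = a.indices.map (i => a(i) * b(i)).sum
        def axpy (c: Double, a: Vec, b: Vec): Vec =                // c * a + b
            a.indices.toArray.map (i => c * a(i) + b(i))

        // numerical gradient via central differences (a stand-in for ∇(f, x))
        def grad (f: Vec => Double, x: Vec, h: Double = 1e-6): Vec =
            x.indices.toArray.map { i =>
                val xp = x.clone; xp(i) += h
                val xm = x.clone; xm(i) -= h
                (f(xp) - f(xm)) / (2 * h)
            }

        def solve (f: Vec => Double, x0: Vec, alpha: Double = 0.01, maxIt: Int = 200): Vec =
            var x   = x0
            var g   = grad (f, x)
            var dir = g.map (-_)                                   // start with steepest descent
            for _ <- 1 to maxIt do
                x = axpy (alpha, dir, x)                           // fixed-step update, no line search
                val gNew = grad (f, x)
                // Polak-Ribiere beta: gNew . (gNew - g) / (g . g)
                val beta = dot (gNew, axpy (-1.0, g, gNew)) / dot (g, g)
                dir = axpy (beta, dir, gNew.map (-_))              // dir_k = -∇f(x_k) + beta_k * dir_k-1
                g = gNew
            x
    end PRCG_Sketch

Note the design choice that motivates PR-CG: reusing the previous direction lets successive steps avoid the zig-zagging of plain gradient descent, while the beta weight keeps the new direction approximately conjugate to the old one.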

Value parameters

f

the objective function to be minimized

Attributes

See also

ConjugateGradient for a version that uses a line search.

Supertypes
trait Minimize
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def solve(x0: VectorD, α: Double): FuncVec

Solve the Non-Linear Programming (NLP) problem using the PR-CG algorithm. To use explicit functions for the gradient, replace ∇(f, x) with gr(x).

Value parameters

x0

the starting point/guess

α

the current learning rate

Attributes
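A hypothetical usage of solve, assuming the class can be constructed directly from the objective and that the returned FuncVec pairs the best objective value with the point that achieves it (both assumptions for this sketch):

    import scalation.mathstat.VectorD
    import scalation.optimization.ConjugateGradient_NoLS

    // objective: f(x) = (x_0 - 3)^2 + (x_1 + 1)^2, with minimum at (3, -1)
    def f (x: VectorD): Double = (x(0) - 3) * (x(0) - 3) + (x(1) + 1) * (x(1) + 1)

    val optimizer = new ConjugateGradient_NoLS (f)
    val opt = optimizer.solve (VectorD (0.0, 0.0), 0.05)   // start at origin, learning rate 0.05
    println (s"(f(x*), x*) = $opt")                        // expect a point near (3, -1)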

def solve2(x0: VectorD, grad: FunctionV2V, α: Double): FuncVec

Solve for an optimum by finding a local optimum close to the starting point/guess 'x0'. This version uses explicit functions for the gradient (partial derivatives).

Value parameters

grad

the gradient as explicit functions for partials

x0

the starting point/guess

α

the current learning rate

Attributes
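Continuing the hypothetical example above, solve2 takes the gradient explicitly, assuming FunctionV2V is scalation's alias for a VectorD => VectorD map:

    // explicit gradient of f: ∇f(x) = (2(x_0 - 3), 2(x_1 + 1))
    def gr (x: VectorD): VectorD = VectorD (2 * (x(0) - 3), 2 * (x(1) + 1))

    val opt2 = optimizer.solve2 (VectorD (0.0, 0.0), gr, 0.05)
    println (s"(f(x*), x*) = $opt2")

Supplying the gradient explicitly avoids the 2n extra function evaluations per iteration that a central-difference approximation of ∇(f, x) would cost.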