CoordinateDescent

scalation.optimization.CoordinateDescent
class CoordinateDescent(f: FunctionV2S, exactLS: Boolean) extends Minimizer

The CoordinateDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Coordinate Descent algorithm. Given a function f and a starting point x0, the algorithm cycles through the coordinate directions, taking a line-search step along each in turn, and iterates until it converges.

min f(x)

where dir_k = the kth coordinate direction
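To make the cyclic update concrete, here is a minimal, self-contained Scala sketch of the algorithm. It is not the ScalaTion implementation (which operates on VectorD and delegates to GoldenSectionLS or WolfeLS); the simple halving 1-D search below is an illustrative stand-in for those line searches.

```scala
object CoordinateDescentSketch:
  type Vec = Array[Double]

  /** Minimize f starting from x0 by cycling through the coordinates,
   *  doing a 1-D search along each coordinate direction in turn. */
  def solve(f: Vec => Double, x0: Vec, step: Double = 1.0,
            toler: Double = 1e-8, maxIter: Int = 1000): Vec =
    val x = x0.clone
    var fOld = f(x)
    var it = 0
    var done = false
    while it < maxIter && !done do
      for k <- x.indices do                          // k-th coordinate direction
        x(k) = search1D(t => { val y = x.clone; y(k) = t; f(y) }, x(k), step)
      val fNew = f(x)
      done = math.abs(fOld - fNew) < toler           // stop when f stops improving
      fOld = fNew
      it += 1
    x

  /** Crude 1-D minimizer: walk downhill at step size s, then halve s
   *  (a stand-in for an exact or inexact line search). */
  private def search1D(h: Double => Double, t0: Double, step: Double): Double =
    var best = t0; var hBest = h(t0); var s = step
    while s > 1e-12 do
      var improved = true
      while improved do
        improved = false
        for cand <- Seq(best - s, best + s) do
          val v = h(cand)
          if v < hBest then { best = cand; hBest = v; improved = true }
      s /= 2.0
    best
```

For example, minimizing f(x) = (x_0 - 3)^2 + (x_1 + 1)^2 from (0, 0) reaches (3, -1) in a single sweep, since each coordinate sub-problem is solved exactly.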

Value parameters

exactLS

whether to use exact (e.g., GoldenSectionLS) or inexact (e.g., WolfeLS) line search

f

the vector-to-scalar objective function

Attributes

Supertypes
trait Minimizer
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def lineSearch(x: VectorD, dir: VectorD, step: Double): Double

Perform an exact GoldenSectionLS or inexact WolfeLS line search. Search in direction dir, returning the distance z to move in that direction.

Value parameters

dir

the direction to move in

step

the initial step size

x

the current point

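The exact variant can be pictured with a self-contained golden-section sketch. The bracketing interval and stopping rule here are illustrative assumptions; the actual GoldenSectionLS has a different signature and is driven from the current point and step size.

```scala
/** Golden-section search sketch: shrink the bracket [lo, hi] around the
 *  minimizer of a unimodal h until its width falls below toler. */
def goldenLS(h: Double => Double, lo0: Double, hi0: Double,
             toler: Double = 1e-8): Double =
  val phi = (math.sqrt(5.0) - 1.0) / 2.0           // golden ratio conjugate ~0.618
  var (lo, hi) = (lo0, hi0)
  var c = hi - phi * (hi - lo)                     // interior probe points c < d
  var d = lo + phi * (hi - lo)
  while hi - lo > toler do
    if h(c) < h(d) then { hi = d; d = c; c = hi - phi * (hi - lo) }
    else               { lo = c; c = d; d = lo + phi * (hi - lo) }
  (lo + hi) / 2.0                                  // midpoint of the final bracket
```

Each iteration discards a fixed fraction (about 38%) of the bracket, so convergence is linear regardless of how flat h is near the minimum.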
def solve(x0: VectorD, step: Double, toler: Double): FuncVec

Solve the Non-Linear Programming (NLP) problem using the Coordinate Descent algorithm.

Value parameters

step

the initial step size

toler

the tolerance

x0

the starting point

Inherited methods

def fg(x: VectorD): Double

The objective function f plus a weighted penalty based on the constraint function g. Override for constrained optimization and ignore for unconstrained optimization.

Value parameters

x

the coordinate values of the current point

Attributes

Inherited from:
Minimizer
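The penalty combination can be sketched as follows. The weight w and the squared-hinge penalty form are assumptions for illustration; Minimizer's actual fg may weight or shape the constraint violation differently.

```scala
/** Sketch: objective f plus a weighted quadratic penalty for violating
 *  the constraint g(x) <= 0.  Weight w and penalty form are illustrative. */
def fgSketch(f: Array[Double] => Double, g: Array[Double] => Double, w: Double)
            (x: Array[Double]): Double =
  val gx = g(x)
  f(x) + (if gx > 0.0 then w * gx * gx else 0.0)   // penalize only infeasible points
```

Feasible points (g(x) <= 0) are charged nothing, so an unconstrained minimizer applied to fg is steered back toward the feasible region as w grows.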
def resolve(n: Int, step_: Double, toler: Double): FuncVec

Solve the following Non-Linear Programming (NLP) problem: min { f(x) | g(x) <= 0 }. To use explicit functions for gradient, replace gradient (fg, x._1 + s) with gradientD (df, x._1 + s). This method uses multiple random restarts.

Value parameters

n

the dimensionality of the search space

step_

the initial step size

toler

the tolerance

Attributes

Inherited from:
Minimizer
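The multi-restart strategy can be sketched as below. The restart count, the sampling range, and the plain solve-function parameter are assumptions for illustration, not the actual resolve implementation.

```scala
import scala.util.Random

/** Sketch of multiple random restarts: run a solver from several random
 *  starting points in [-5, 5)^n and keep the (f value, point) pair with
 *  the lowest objective value. */
def resolveSketch(n: Int, solve: Array[Double] => (Double, Array[Double]),
                  restarts: Int = 10, seed: Long = 0L): (Double, Array[Double]) =
  val rng = new Random(seed)
  (1 to restarts).map { _ =>
    val x0 = Array.fill(n)(rng.nextDouble() * 10.0 - 5.0)  // random start
    solve(x0)
  }.minBy(_._1)                                            // best of all restarts
```

Restarts guard against the descent stalling in a poor local minimum: each run is independent, so the best of the batch is at least as good as any single run.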