Minimizer

scalation.optimization.Minimizer
See the Minimizer companion object
trait Minimizer

The Minimizer trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:

minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

where f is the objective function to be minimized and g is the constraint function to be satisfied, if any.

Classes mixing in this trait must implement (1) a function fg that rolls the constraints into the objective function as penalties for constraint violation, (2) a one-dimensional Line Search (LS) algorithm lineSearch, and (3) an iterative method solve that searches for improved solutions: x-vectors with lower objective function values f(x).
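The pattern above can be sketched in plain Scala. The class below is an illustration only, not ScalaTion's actual API: Vector[Double] stands in for VectorD, a (Double, Vector[Double]) pair stands in for FuncVec, and all helper names are invented for this sketch.

```scala
// Sketch of the Minimizer pattern: fg (objective + penalty), lineSearch,
// and an iterative solve. Plain Scala stand-ins, illustrative names.
class SimpleMinimizer(f: Vector[Double] => Double) {

  // fg: the objective plus any constraint penalty (unconstrained here)
  def fg(x: Vector[Double]): Double = f(x)

  // numerical gradient via central differences
  private def grad(x: Vector[Double], h: Double = 1e-6): Vector[Double] =
    x.indices.toVector.map { i =>
      (fg(x.updated(i, x(i) + h)) - fg(x.updated(i, x(i) - h))) / (2 * h)
    }

  // lineSearch: crude backtracking - halve z until f(x + z * dir) improves
  def lineSearch(x: Vector[Double], dir: Vector[Double], step: Double): Double = {
    var z  = step
    val fx = fg(x)
    def move(z: Double) = x.zip(dir).map { case (xi, di) => xi + z * di }
    while (z > 1e-12 && fg(move(z)) >= fx) z /= 2
    z
  }

  // solve: repeated steepest-descent steps from x0; return (f(x), x)
  def solve(x0: Vector[Double], step: Double = 1.0): (Double, Vector[Double]) = {
    var x = x0
    for (_ <- 1 to 100) {
      val dir = grad(x).map(v => -v)                       // descent direction
      val z   = lineSearch(x, dir, step)
      x = x.zip(dir).map { case (xi, di) => xi + z * di }
    }
    (fg(x), x)
  }
}
```

For example, minimizing f(x) = (x0 - 3)^2 + (x1 + 1)^2 from (0, 0) converges to the point (3, -1).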

Attributes

Companion
object
Supertypes
class Object
trait Matchable
class Any
Known subtypes
class BFGS
class LBFGS_B
class GridSearch
class SPSA

Members list

Value members

Abstract methods

def lineSearch(x: VectorD, dir: VectorD, step: Double): Double

Perform an exact (e.g., GoldenSectionLS) or inexact (e.g., WolfeLS) line search. Search in direction dir, returning the distance z to move in that direction.

Value parameters

dir

the direction to move in

step

the initial step size

x

the current point
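The GoldenSectionLS mentioned above suggests one concrete way to implement an exact lineSearch. Below is a self-contained sketch of golden-section search over phi(z) = f(x + z * dir); it is an illustration in plain Scala, not ScalaTion's actual GoldenSectionLS.

```scala
// Golden-section search: shrink the bracket [a, b] by the golden ratio
// until it is narrower than toler, then return its midpoint as the
// distance z to move along the search direction.
def goldenLineSearch(phi: Double => Double, zMax: Double, toler: Double = 1e-8): Double = {
  val g = (math.sqrt(5) - 1) / 2              // golden ratio conjugate ~ 0.618
  var a = 0.0
  var b = zMax
  var c = b - g * (b - a)                     // interior probe points
  var d = a + g * (b - a)
  while (b - a > toler) {
    if (phi(c) < phi(d)) { b = d; d = c; c = b - g * (b - a) }
    else                 { a = c; c = d; d = a + g * (b - a) }
  }
  (a + b) / 2
}
```

For a unimodal phi such as phi(z) = (z - 2)^2 on [0, 5], the search converges to z = 2.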

def solve(x0: VectorD, step: Double, toler: Double): FuncVec

Solve the Non-Linear Programming (NLP) problem by starting at x0 and iteratively moving down in the search space to a minimal point. Return the optimal point/vector x and its objective function value.

Value parameters

step

the initial step size

toler

the tolerance

x0

the starting point
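The contract of solve - start at x0, iteratively move downhill, shrink the step, and return the best (f(x), x) pair - can be illustrated with a tiny coordinate-descent loop. This is a plain-Scala sketch with invented names, not ScalaTion's implementation.

```scala
// Minimal illustration of the solve contract: from x0, repeatedly try
// moving each coordinate by +/- s, keep any improving move, and halve
// the step when stuck, until s drops below the tolerance.
def solveSketch(f: Vector[Double] => Double,
                x0: Vector[Double],
                step: Double = 0.5,
                toler: Double = 1e-6): (Double, Vector[Double]) = {
  var x = x0
  var s = step
  while (s > toler) {
    val candidates =
      for (i <- x.indices; d <- Seq(s, -s)) yield x.updated(i, x(i) + d)
    val best = candidates.minBy(f)
    if (f(best) < f(x)) x = best else s /= 2   // shrink step when stuck
  }
  (f(x), x)                                    // FuncVec-style result pair
}
```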

Concrete methods

def fg(x: VectorD): Double

The objective function f plus a weighted penalty based on the constraint function g. Override for constrained optimization and ignore for unconstrained optimization.

Value parameters

x

the coordinate values of the current point
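The penalty idea behind fg can be sketched as follows: when the constraint g(x) <= 0 is violated, add a weighted squared penalty to the objective. Plain-Scala illustration; the functions and the WEIGHT constant are invented for this sketch, not ScalaTion's actual values.

```scala
// Roll the constraint g(x) <= 0 into the objective as a penalty.
val WEIGHT = 1000.0                                            // penalty weight

def f(x: Vector[Double]): Double = x.map(xi => xi * xi).sum    // objective
def g(x: Vector[Double]): Double = 1.0 - x.sum                 // wants x.sum >= 1

// fg adds a penalty only when the constraint is violated (g(x) > 0)
def fg(x: Vector[Double]): Double = {
  val viol = math.max(g(x), 0.0)
  f(x) + WEIGHT * viol * viol
}
```

At the feasible point (1, 0) the penalty is zero, so fg equals f; at the infeasible point (0, 0) the full weighted penalty is added.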

def resolve(n: Int, step_: Double, toler: Double): FuncVec

Solve the following Non-Linear Programming (NLP) problem: min { f(x) | g(x) <= 0 }. To use explicit functions for the gradient, replace gradient (fg, x._1 + s) with gradientD (df, x._1 + s). This method uses multiple random restarts.

Value parameters

n

the dimensionality of the search space

step_

the initial step size

toler

the tolerance
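The multiple-random-restart strategy of resolve can be sketched as: draw several random starting points in the n-dimensional search space, run a solve from each, and keep the result with the lowest objective value. Plain-Scala illustration with invented names; not ScalaTion's actual resolve.

```scala
import scala.util.Random

// Run the given solve from several random starts in [-1, 1]^n and
// return the best (f, x) pair found across all restarts.
def resolveSketch(n: Int,
                  solve: Vector[Double] => (Double, Vector[Double]),
                  restarts: Int = 5,
                  seed: Long = 42L): (Double, Vector[Double]) = {
  val rng = new Random(seed)
  val results = for (_ <- 1 to restarts) yield {
    val x0 = Vector.fill(n)(rng.nextDouble() * 2 - 1)    // random start point
    solve(x0)
  }
  results.minBy(_._1)                                     // lowest f wins
}
```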
