GradientDescent

scalation.optimization.GradientDescent
class GradientDescent(f: FunctionV2S, exactLS: Boolean) extends Minimizer

The GradientDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Gradient Descent algorithm. Given an objective function f and a starting point x0, the algorithm repeatedly computes the gradient at the current point and steps in the opposite (steepest descent) direction, iterating until convergence. By default, partial derivatives are estimated numerically using difference quotients; exact partial derivative functions may be supplied via the setDerivatives method.

minimize f(x)

dir_k = -gradient f(x_k)
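
For example, the following minimal sketch sets up the optimizer for a simple quadratic objective (assuming ScalaTion 2.x, where FunctionV2S aliases VectorD => Double; the sample objective and all value choices are illustrative). The minimization call itself is shown under solve below.

    import scalation.mathstat.VectorD
    import scalation.optimization.GradientDescent

    // sample objective: f(x) = (x_0 - 3)^2 + (x_1 + 1)^2, minimized at (3, -1)
    def f(x: VectorD): Double =
        (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)

    val optimizer = new GradientDescent(f, exactLS = false)   // use inexact (Wolfe) line search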

Value parameters

exactLS

whether to use exact (e.g., GoldenSectionLS) or inexact (e.g., WolfeLS) Line Search

f

the vector-to-scalar objective function

Attributes

Supertypes
trait Minimizer
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def lineSearch(x: VectorD, dir: VectorD, step: Double): Double

Perform an exact GoldenSectionLS or inexact WolfeLS line search. Search in direction 'dir', returning the distance 'z' to move in that direction.

Value parameters

dir

the direction to move in

step

the initial step size

x

the current point

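For example, one manual descent step for the sample objective above might look like the following sketch (the hand-computed gradient is specific to that f; in practice solve performs these steps internally):

    val x   = VectorD(0.0, 0.0)                   // current point
    val dir = VectorD(6.0, -2.0)                  // -gradient of f at x, computed by hand
    val z   = optimizer.lineSearch(x, dir, 1.0)   // distance to move along dir
    val x2  = x + dir * z                         // the next iterate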

def setDerivatives(partials: FunctionV2V): Unit

Set the partial derivative functions. If these functions are available, they are more efficient and more accurate than estimating the values using difference quotients (the default approach).

Value parameters

partials

the vector of partial derivatives

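For the sample objective above, the exact partials can be supplied as follows (a sketch; FunctionV2V is assumed to be ScalaTion's VectorD => VectorD alias):

    // gradient of f(x) = (x_0 - 3)^2 + (x_1 + 1)^2
    def grad(x: VectorD): VectorD = VectorD(2.0 * (x(0) - 3.0), 2.0 * (x(1) + 1.0))

    optimizer.setDerivatives(grad)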

def solve(x0: VectorD, step: Double, toler: Double): FuncVec

Solve the Non-Linear Programming (NLP) problem using the Gradient Descent algorithm.

Value parameters

step

the initial step size

toler

the tolerance

x0

the starting point

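Continuing the sketch from the class overview, and assuming FuncVec is a (function value, point) pair; the step and tolerance values below are illustrative:

    val x0 = VectorD(0.0, 0.0)                          // starting point
    val (fMin, xMin) = optimizer.solve(x0, 1.0, 1e-6)   // initial step 1.0, tolerance 1e-6
    println(s"minimum f = $fMin at x = $xMin")          // expect x near (3, -1)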

Inherited methods

def fg(x: VectorD): Double

The objective function f plus a weighted penalty based on the constraint function g. Override for constrained optimization and ignore for unconstrained optimization.

Value parameters

x

the coordinate values of the current point

Attributes

Inherited from:
Minimizer
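
A hypothetical sketch of a subclass that overrides fg to penalize violations of a constraint g(x) <= 0 (the quadratic penalty and its weight are illustrative choices, not ScalaTion's built-in scheme):

    class ConstrainedGD(f: VectorD => Double, g: VectorD => Double)
          extends GradientDescent(f, exactLS = false):

        override def fg(x: VectorD): Double =
            val viol = math.max(g(x), 0.0)   // amount by which g(x) <= 0 is violated
            f(x) + 1000.0 * viol * viol      // objective plus weighted quadratic penalty

    end ConstrainedGD
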
def resolve(n: Int, step_: Double, toler: Double): FuncVec

Solve the following Non-Linear Programming (NLP) problem: min { f(x) | g(x) <= 0 }. To use explicit functions for gradient, replace gradient (fg, x._1 + s) with gradientD (df, x._1 + s). This method uses multiple random restarts.

Value parameters

n

the dimensionality of the search space

step_

the initial step size

toler

the tolerance

Attributes

Inherited from:
Minimizer
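
Continuing the sketch: resolve restarts the search from random starting points in an n-dimensional space and keeps the best result found (argument values below are illustrative):

    val (fBest, xBest) = optimizer.resolve(2, 1.0, 1e-6)   // dimension n = 2, step 1.0, toler 1e-6
    println(s"best of the random restarts: f = $fBest at x = $xBest")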