SPSA

scalation.optimization.SPSA
class SPSA(f: FunctionV2S, max_iter: Int, checkCon: Boolean, lower: VectorD, upper: VectorD, debug_: Boolean) extends Minimizer, BoundsConstraint, MonitorEpochs

The SPSA class implements the Simultaneous Perturbation Stochastic Approximation algorithm, a stochastic optimizer that forms a rough gradient estimate from just two function evaluations per iteration by perturbing all coordinates simultaneously with a random {-1, 1} vector.

Value parameters

checkCon

whether to check the bounds constraints

debug_

whether to run in debug mode (enables tracing)

f

the vector-to-scalar objective function to be minimized (its gradient is approximated stochastically)

lower

the lower bounds vector

max_iter

the maximum number of iterations

upper

the upper bounds vector

Attributes

Supertypes
trait Minimizer
class Object
trait Matchable
class Any
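The essence of the algorithm can be sketched standalone. The sketch below is an illustrative re-implementation with plain arrays, not the ScalaTion class: each iteration perturbs all coordinates at once by a random {-1, 1} vector, estimates the gradient from only two function evaluations, and moves a distance a_k against that estimate (the gain constants follow Spall's commonly cited defaults).

```scala
import scala.util.Random

object SpsaSketch {
  // Minimize f starting from x0 using the basic SPSA iteration.
  def minimize(f: Array[Double] => Double, x0: Array[Double],
               maxIter: Int = 200,
               a: Double = 0.16, c: Double = 0.1,       // gain numerators (illustrative)
               alpha: Double = 0.602, gamma: Double = 0.101,
               bigA: Double = 10.0, seed: Long = 42L): Array[Double] = {
    val rng = new Random(seed)
    val x   = x0.clone()
    for (k <- 1 to maxIter) {
      val ak = a / math.pow(bigA + k, alpha)            // step-size gain a_k
      val ck = c / math.pow(k, gamma)                   // perturbation gain c_k
      // simultaneous random perturbation: one {-1, 1} draw per coordinate
      val delta = Array.fill(x.length)(if (rng.nextBoolean()) 1.0 else -1.0)
      val xp = Array.tabulate(x.length)(i => x(i) + ck * delta(i))
      val xm = Array.tabulate(x.length)(i => x(i) - ck * delta(i))
      val diff = f(xp) - f(xm)                          // only two evaluations of f
      for (i <- x.indices) x(i) -= ak * diff / (2.0 * ck * delta(i))
    }
    x
  }
}
```

Because only two evaluations of f are needed per iteration regardless of dimension, SPSA is attractive when f is expensive or noisy and exact gradients are unavailable.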

Members list


Value members

Concrete methods

def bernoulliVec(n: Int, p: Double, stream: Int): VectorD

Return a random vector of {-1, 1} values.

Value parameters

n

the size of the vector

p

the probability of 1

stream

the random number stream

Attributes
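A standalone analogue of bernoulliVec might look like the following (illustrative, not the ScalaTion implementation; parameter names mirror the ones above, with stream used to seed a plain scala.util.Random rather than ScalaTion's random streams):

```scala
import scala.util.Random

object BernoulliSketch {
  // Return an n-vector whose entries are 1 with probability p, else -1;
  // stream seeds the generator so results are reproducible.
  def bernoulliVec(n: Int, p: Double = 0.5, stream: Int = 0): Array[Double] = {
    val rng = new Random(stream)
    Array.fill(n)(if (rng.nextDouble() < p) 1.0 else -1.0)
  }
}
```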

def lineSearch(x: VectorD, dir: VectorD, step: Double): Double

This method is not supported by SPSA.

Attributes

def reset(params: VectorD): Unit

Reset the parameters.

Value parameters

params

the given starting parameters, as a VectorD

Attributes

def solve(x0: VectorD, step: Double, toler: Double): FuncVec

Solve for an optimal point by iteratively moving a distance a_k in the direction of the negative gradient estimate (-ghat).

Value parameters

step

the initial step size for the iterations

toler

the tolerance

x0

the initial point

Attributes
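The distances a_k moved per step, together with the perturbation sizes c_k, are typically taken from decaying gain sequences. A standalone sketch with Spall's commonly recommended exponents (the constants here are illustrative, not ScalaTion's defaults):

```scala
object GainsSketch {
  val a     = 0.16    // numerator for the step-size gain
  val bigA  = 10.0    // stability constant (often ~10% of max iterations)
  val c     = 0.1     // numerator for the perturbation gain
  val alpha = 0.602   // step-size decay exponent
  val gamma = 0.101   // perturbation decay exponent

  def ak(k: Int): Double = a / math.pow(bigA + k + 1, alpha)  // distance moved at step k
  def ck(k: Int): Double = c / math.pow(k + 1, gamma)         // perturbation size at step k
}
```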

Inherited methods

def constrain(x: VectorD): Unit

Constrain the current point x so that lower <= x <= upper by bouncing back from any violated constraint.

Value parameters

x

the current point

Attributes

Inherited from:
BoundsConstraint
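The bounce-back idea can be sketched as follows (an illustrative standalone version operating on arrays; it assumes the violation is smaller than the width of the interval):

```scala
object BounceSketch {
  // Reflect each out-of-bounds coordinate of x back into [lower(i), upper(i)]
  // by the amount of the violation (in place).
  def constrain(x: Array[Double], lower: Array[Double], upper: Array[Double]): Unit = {
    for (i <- x.indices) {
      if (x(i) < lower(i)) x(i) = lower(i) + (lower(i) - x(i))
      if (x(i) > upper(i)) x(i) = upper(i) - (x(i) - upper(i))
    }
  }
}
```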
def fg(x: VectorD): Double

The objective function f plus a weighted penalty based on the constraint function g. Override for constrained optimization and ignore for unconstrained optimization.

Value parameters

x

the coordinate values of the current point

Attributes

Inherited from:
Minimizer
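A penalized objective of this kind can be sketched as f(x) plus a weight times the squared violation of a constraint g(x) <= 0 (a standalone illustration; the weight and names are assumptions, not ScalaTion's):

```scala
object PenaltySketch {
  val weight = 100.0   // illustrative penalty weight

  // Objective f plus a quadratic penalty on violation of g(x) <= 0.
  def fg(f: Array[Double] => Double, g: Array[Double] => Double)(x: Array[Double]): Double = {
    val violation = math.max(0.0, g(x))
    f(x) + weight * violation * violation
  }
}
```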
def lossPerEpoch(): ArrayBuffer[Double]

Return the loss value recorded for each epoch.

Attributes

Inherited from:
MonitorEpochs
def plotLoss(): Unit

Plot the loss recorded for each epoch.

Attributes

Inherited from:
MonitorEpochs
def resolve(n: Int, step_: Double, toler: Double): FuncVec

Solve the following Non-Linear Programming (NLP) problem: min { f(x) | g(x) <= 0 }. To use explicit functions for gradient, replace gradient (fg, x._1 + s) with gradientD (df, x._1 + s). This method uses multiple random restarts.

Value parameters

n

the dimensionality of the search space

step_

the initial step size

toler

the tolerance

Attributes

Inherited from:
Minimizer
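The multiple-random-restart strategy can be sketched generically: run a local solver from several random starting points in [0, 1]^n and keep the best result (solveFrom stands in for a call like solve; all names below are illustrative, not the ScalaTion API):

```scala
import scala.util.Random

object RestartSketch {
  // Run solveFrom (a local solver) from `restarts` random points in [0, 1]^n
  // and return the (f-value, point) pair with the lowest objective.
  def resolve(f: Array[Double] => Double,
              solveFrom: Array[Double] => Array[Double],
              n: Int, restarts: Int = 5, seed: Long = 0L): (Double, Array[Double]) = {
    val rng = new Random(seed)
    val candidates = for (_ <- 1 to restarts) yield {
      val x = solveFrom(Array.fill(n)(rng.nextDouble()))
      (f(x), x)
    }
    candidates.minBy(_._1)
  }
}
```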

Inherited fields

protected val epochLoss: ArrayBuffer[Double]

Attributes

Inherited from:
MonitorEpochs