scalation.optimization

Members list

Type members

Classlikes

trait BoundsConstraint(lower: VectorD, upper: VectorD)

The BoundsConstraint trait provides a mechanism for bouncing back at constraint boundaries.

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
class SPSA
class ConjugateGradient(f: FunctionV2S, g: FunctionV2S, ineq: Boolean, exactLS: Boolean) extends Minimizer

The ConjugateGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which for this implementation uses the Polak-Ribiere technique.

dir_k = - grad (x) + beta * dir_k-1

minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

Value parameters

exactLS

whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search

f

the objective function to be minimized

g

the constraint function to be satisfied, if any

ineq

whether the constraint function must satisfy inequality or equality

Attributes

Supertypes
trait Minimizer
class Object
trait Matchable
class Any
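
The following usage sketch is illustrative only (it is not taken from the ScalaTion sources): it assumes FunctionV2S abbreviates VectorD => Double, that VectorD resides in scalation.mathstat, that the constructor arguments other than f have defaults, and that solve (the iterative method required by the Minimizer trait) accepts a starting vector and returns a (f(x), x) pair.

    import scalation.mathstat.VectorD                              // assumed package for VectorD

    // objective from conjugateGradientTest: f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1
    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) - 4.0) * (x(1) - 4.0) + 1.0

    val cg  = new ConjugateGradient (f)                            // g, ineq, exactLS assumed to default
    val opt = cg.solve (VectorD (0.0, 0.0))                        // assumed: start at x0, return best (f(x), x)
    println (s"PR-CG optimum = $opt")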

The ConjugateGradient_NoLS class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which for this implementation uses the Polak-Ribiere technique.

dir_k = - grad (x) + beta * dir_k-1

min f(x)    where f: R^n -> R

This version does not use a line search algorithm (_NoLS)

Value parameters

f

the objective function to be minimized

Attributes

See also

ConjugateGradient for one that uses line search.

Supertypes
trait Minimize
class Object
trait Matchable
class Any
class CoordinateDescent(f: FunctionV2S, exactLS: Boolean) extends Minimizer

The CoordinateDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Coordinate Descent algorithm. Given a function f and a starting point x0, the algorithm picks coordinate directions (cyclically) and takes steps in those directions. The algorithm iterates until it converges.

dir_k = kth coordinate direction

min f(x)

Value parameters

exactLS

whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search

f

the vector-to-scalar objective function

Attributes

Supertypes
trait Minimizer
class Object
trait Matchable
class Any
class GoldenSectionLS(f: FunctionS2S, τ: Double) extends LineSearch

The GoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but it can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see goldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new scalar function 'g(y) = f(x0 + direction * y)' (see goldenSectionLSTest2).

Value parameters

f

the scalar objective function to minimize

τ

the tolerance for breaking the iterations

Attributes

Supertypes
trait LineSearch
class Object
trait Matchable
class Any
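
A hedged sketch of the vector-to-scalar wrapping described above (assuming FunctionS2S abbreviates Double => Double, that VectorD resides in scalation.mathstat, and that the tolerance τ has a default):

    import scalation.mathstat.VectorD                              // assumed package for VectorD

    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) - 4.0) * (x(1) - 4.0) + 1.0
    val x0  = VectorD (0.0, 0.0)                                   // current point
    val dir = VectorD (1.0, 1.0)                                   // search direction
    def g (y: Double): Double = f (x0 + dir * y)                   // scalar slice of f along dir (goldenSectionLSTest2 idea)

    val ls = new GoldenSectionLS (g)                               // tolerance τ assumed to default
    // the search would then run from 0 to a guessed xmax, e.g. ls.search (5.0) (method name assumed)
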
class GradientDescent(f: FunctionV2S, exactLS: Boolean) extends Minimizer

The GradientDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Gradient Descent algorithm. Given a function f and a starting point x0, the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The class assumes that partial derivative functions are not available unless explicitly given via the setDerivatives method.

dir_k = -gradient (x)

minimize f(x)

Value parameters

exactLS

whether to use exact (e.g., GoldenLS) or inexact (e.g., WolfeLS) Line Search

f

the vector-to-scalar objective function

Attributes

Supertypes
trait Minimizer
class Object
trait Matchable
class Any

The GradientDescent_Adam class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements an ADAptive Moment estimation (Adam) Optimizer.

Value parameters

f

the vector-to-scalar (V2S) objective/loss function

grad

the vector-to-vector (V2V) gradient function, grad f

hparam

the hyper-parameters

Attributes

See also
Supertypes
trait StoppingRule
trait Minimize
class Object
trait Matchable
class Any
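
For reference, the standard Adam update that this optimizer's name refers to maintains exponential moving averages of the gradient and its element-wise square (the exact hyper-parameter names in hparam are not shown on this page):

    m_k   = beta1 * m_k-1 + (1 - beta1) * grad f(x_k)
    v_k   = beta2 * v_k-1 + (1 - beta2) * grad f(x_k)^2            (element-wise square)
    m^_k  = m_k / (1 - beta1^k),  v^_k = v_k / (1 - beta2^k)       (bias correction)
    x_k+1 = x_k - eta * m^_k / (sqrt (v^_k) + epsilon)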

The GradientDescent_Mo class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Gradient Descent with Momentum Optimizer.

Value parameters

f

the vector-to-scalar (V2S) objective/loss function

grad

the vector-to-vector (V2V) gradient function ∇f

hparam

the hyper-parameters

Attributes

See also
Supertypes
trait StoppingRule
trait Minimize
class Object
trait Matchable
class Any
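
For reference, the classical momentum update that this optimizer's name refers to takes the form (hyper-parameter names are not taken from this page):

    v_k   = rho * v_k-1 - eta * grad f(x_k)                        (velocity with momentum coefficient rho)
    x_k+1 = x_k + v_k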

The GradientDescent_Mo2 class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Gradient Descent with Momentum Optimizer.

Value parameters

f

the vector-to-scalar objective function

gr

the vector-to-gradient function

Attributes

See also
Supertypes
trait StoppingRule
trait Minimize
class Object
trait Matchable
class Any

The GradientDescent_NoLS class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Gradient Descent with No Line Search Optimizer.

Value parameters

f

the vector-to-scalar (V2S) objective/loss function

grad

the vector-to-vector (V2V) gradient function, grad f

hparam

the hyper-parameters

Attributes

See also
Supertypes
trait StoppingRule
trait Minimize
class Object
trait Matchable
class Any
object GridSearch

The GridSearch companion object specifies default minimums and maximums for the grid's coordinate axes.

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
GridSearch.type
class GridSearch(f: FunctionV2S, n: Int, g: FunctionV2S, nSteps: Int) extends Minimizer

The GridSearch class performs grid search over an n-dimensional space to find a minimal objective value for f(x).

Value parameters

f

the objective function to be minimized

g

the constraint function to be satisfied, if any

n

the number of dimensions in search space

nSteps

the number of steps each axis is divided into to form the grid

Attributes

Companion
object
Supertypes
trait Minimizer
class Object
trait Matchable
class Any
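
A hedged usage sketch (assuming VectorD and FunctionV2S come from scalation.mathstat, that g and nSteps have defaults, and that solve from the Minimizer trait takes a starting vector):

    import scalation.mathstat.VectorD                              // assumed package for VectorD

    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) - 4.0) * (x(1) - 4.0) + 1.0
    val gs  = new GridSearch (f, 2)                                // n = 2 dimensions; g and nSteps assumed to default
    val opt = gs.solve (VectorD (0.0, 0.0))                        // assumed: starting vector required by Minimizer.solve
    println (s"grid search optimum = $opt")
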
class GridSearchLS(f: FunctionS2S) extends LineSearch

The GridSearchLS class performs a line search on f(x) to find a minimal value for f. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from x1 (often 0) to xmax. A guess for xmax must be given. It works on scalar functions (see gridSearchLSTest). If starting with a vector function f(x), simply define a new function g(y) = x0 + direction * y (see gridSearchLSTest2).

Value parameters

f

the scalar objective function to minimize

Attributes

Supertypes
trait LineSearch
class Object
trait Matchable
class Any
class IntegerTabuSearch(f: VectorI => Double, g: VectorI => Double, maxStep: Int)

The IntegerTabuSearch class performs tabu search to find minima of functions defined on integer vector domains Z^n. Tabu search will not re-visit points already deemed sub-optimal.

minimize f(x) subject to g(x) <= 0, x in Z^n

Value parameters

f

the objective function to be minimized (f maps an integer vector to a double)

g

the constraint function to be satisfied, if any

maxStep

the maximum/starting step size (make larger for larger domains)

Attributes

Supertypes
class Object
trait Matchable
class Any
object LassoAdmm

The LassoAdmm object performs LASSO regression using the Alternating Direction Method of Multipliers (ADMM). Minimize the following objective function to find an optimal solution for x.

argmin_x (1/2)||Ax − b||_2^2 + λ||x||_1

A = data matrix
b = response vector
λ = weighting on the l_1 penalty
x = solution (coefficient vector)

Attributes

See also

euler.stat.yale.edu/~tba3/stat612/lectures/lec23/lecture23.pdf

Supertypes
class Object
trait Matchable
class Any
Self type
LassoAdmm.type
trait LineSearch

The LineSearch trait specifies the basic methods that Line Search (LS) algorithms (classes extending this trait) must implement. Line search is for one-dimensional optimization problems. The algorithms perform a line search to find an 'x'-value that minimizes a function 'f' that is passed into the implementing class.

x* = argmin f(x)

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
object Minimize

The Minimize object defines the hyper-parameters for extending optimizers.

Attributes

Companion
trait
Supertypes
class Object
trait Matchable
class Any
Self type
Minimize.type
trait Minimize

The Minimize trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:

minimize f(x)

where f is the objective/loss function to be minimized

Attributes

Companion
object
Supertypes
class Object
trait Matchable
class Any
Known subtypes
trait Minimizer

The Minimizer trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:

minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

where f is the objective function to be minimized and g is the constraint function to be satisfied, if any

Classes mixing in this trait must implement a function fg that rolls the constraints into the objective functions as penalties for constraint violation, a one-dimensional Line Search (LS) algorithm lineSearch and an iterative method (solve) that searches for improved solutions x-vectors with lower objective function values f(x).

Attributes

Companion
object
Supertypes
class Object
trait Matchable
class Any
Known subtypes
class BFGS
class LBFGS_B
class GridSearch
class SPSA
Show all
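
As one common illustration of the penalty idea described above (not necessarily the exact form ScalaTion uses), an inequality constraint g(x) <= 0 can be folded into the objective as

    fg(x) = f(x) + w * max (0, g(x))^2          (w = a large penalty weight)

with w * g(x)^2 used instead when the constraint is an equality g(x) == 0.
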
object Minimizer

The Minimizer object provides multiple testing functions.

Attributes

Companion
trait
Supertypes
class Object
trait Matchable
class Any
Self type
Minimizer.type

The MonitorEpochs trait is used to monitor the loss function over the epochs.

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
class SPSA
class NelderMeadSimplex(f: FunctionV2S, n: Int) extends Minimize

The NelderMeadSimplex solves Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function f and its dimension n, the algorithm moves a simplex defined by n + 1 points in order to find an optimal solution. The algorithm is derivative-free.

minimize f(x)

The algorithm requires between 1 and n+2 function evaluations per iteration

Value parameters

f

the vector-to-scalar objective function

n

the dimension of the search space

Attributes

Supertypes
trait Minimize
class Object
trait Matchable
class Any
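
A hedged usage sketch (assuming VectorD and FunctionV2S come from scalation.mathstat and that the solver exposes a solve method taking a starting vector, as the Minimize/Minimizer pattern suggests):

    import scalation.mathstat.VectorD                              // assumed package for VectorD

    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) - 4.0) * (x(1) - 4.0) + 1.0
    val nms = new NelderMeadSimplex (f, 2)                         // n = 2 => simplex of n + 1 = 3 points
    val opt = nms.solve (VectorD (0.0, 0.0))                       // method name and signature assumed
    println (s"Nelder-Mead optimum = $opt")
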
class NelderMeadSimplex2(f: FunctionV2S, n: Int, checkCon: Boolean, lower: VectorD, upper: VectorD) extends Minimizer, BoundsConstraint, MonitorEpochs

The NelderMeadSimplex2 solves Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function f and its dimension n, the algorithm moves a simplex defined by n + 1 points in order to find an optimal solution. The algorithm is derivative-free.

minimize f(x)

Value parameters

f

the vector-to-scalar objective function

n

the dimension of the search space

Attributes

Supertypes
trait Minimizer
class Object
trait Matchable
class Any
Show all
class NewtonRaphson(f: FunctionS2S) extends Minimize

The NewtonRaphson class is used to find roots (zeros) for a one-dimensional (scalar) function 'f'. The solve method finds zeros for function 'f', while the optimize method finds local optima using the same logic, but applied to first and second derivatives.

Value parameters

f

the scalar function to find roots/optima of

Attributes

Supertypes
trait Minimize
class Object
trait Matchable
class Any
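
In standard Newton-Raphson form, the two uses described above correspond to the iterations

    x_k+1 = x_k - f(x_k) / f'(x_k)              (root finding, the solve method)
    x_k+1 = x_k - f'(x_k) / f''(x_k)            (optimization, the optimize method: same logic applied to f' and f'')
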
class Newton_NoLS(f: FunctionV2S, useLS: Boolean) extends Minimize

The Newton_NoLS class is used to find optima for functions of vectors. The solve method finds local optima using the Newton method that deflects the gradient using the inverse Hessian.

min f(x)    where f: R^n -> R

Value parameters

f

the vector to scalar function to find optima of

useLS

whether to use Line Search (LS)

Attributes

See also

Newton for one that uses a different line search.

Supertypes
trait Minimize
class Object
trait Matchable
class Any
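
The Newton step described above (the gradient deflected by the inverse Hessian) is

    x_k+1 = x_k - H(x_k)^-1 * grad f(x_k)

optionally scaled by a step size from line search when useLS is true.
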
trait PathMonitor

The PathMonitor trait specifies the logic needed to monitor a single path taken in a multidimensional graph.

Classes mixing in this trait should call the clearPath method before beginning to monitor a path and then should call the add2Path method whenever a new data point is produced in the path being monitored. After that, a call to the getPath method will return a deep copy of the path that was monitored throughout the calculations.

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
class BFGS
object DM_LBFGS
object LBFGS
class SPSA(f: FunctionV2S, max_iter: Int, checkCon: Boolean, lower: VectorD, upper: VectorD, debug_: Boolean) extends Minimizer, BoundsConstraint, MonitorEpochs

The SPSA class implements the Simultaneous Perturbation Stochastic Approximation algorithm for rough approximation of gradients.

Value parameters

checkCon

whether to check bounds constraints

debug_

whether to run in debug mode (enables tracing)

f

the vector to scalar function whose approximate gradient is sought

lower

the lower bounds vector

max_iter

the maximum number of iterations

upper

the upper bounds vector

Attributes

See also
Supertypes
trait Minimizer
class Object
trait Matchable
class Any
Show all
trait StoppingRule(upLimit: Int)

The StoppingRule trait provides stopping rules for early termination in iterative optimization algorithms.

Value parameters

upLimit

the number of upward (loss-increasing) steps allowed

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
class TabuSearch(f: VectorD => Double, g: VectorD => Double, maxStep: Double)

The TabuSearch class performs tabu search to find minima of functions defined on real (double) vector domains R^n. Tabu search will not re-visit points already deemed sub-optimal.

minimize f(x) subject to g(x) <= 0, x in R^n

Value parameters

f

the objective function to be minimized (f maps a double vector to a double)

g

the constraint function to be satisfied, if any

maxStep

the maximum/starting step size (make larger for larger domains)

Attributes

Supertypes
class Object
trait Matchable
class Any
class WolfeConditions(f: FunctionV2S, var g: FunctionV2V, c1: Double, c2: Double)

The WolfeConditions class specifies conditions for inexact line search algorithms to find an acceptable/near-minimal point along a given search direction p that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value of x satisfying the two Wolfe conditions.

f(x) <= f(0) + c1 * f'(0) * x      Wolfe condition 1 (Armijo condition)
f'(x) >= c2 * f'(0)                Wolfe condition 2 (Weak version, more robust)
|f'(x)| <= c2 * |f'(0)|            Wolfe condition 2 (Strong version)

Note: c1 and c2 defaults below intended for Quasi Newton methods such as BFGS or L-BFGS

Value parameters

c1

constant for sufficient decrease (Wolfe condition 1: .0001 to .001)

c2

constant for curvature/slope constraint (Wolfe condition 2: .9 to .8)

f

the objective/loss function to minimize (vector-to-scalar)

g

the gradient of the objective/loss function (vector-to-vector)

Attributes

Supertypes
class Object
trait Matchable
class Any
class WolfeLS(f: FunctionS2S, c1: Double, c2: Double) extends LineSearch

The WolfeLS class performs an inexact line search on f to find a point x that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value of x satisfying the two Wolfe conditions.

f(x) <= f(0) + c1 * f'(0) * x      Wolfe condition 1 (Armijo condition)
|f'(x)| <= |c2 * f'(0)|            Wolfe condition 2 (Strong version)
f'(x) >= c2 * f'(0)                Wolfe condition 2 (Weak version, more robust)

It works on scalar functions (@see wolfeLSTest). If starting with a vector function f(x), simply define a new scalar function fl(a) = f(x0 + direction * a) (@see wolfeLSTest2).

Value parameters

c1

constant for sufficient decrease (Wolfe condition 1)

c2

constant for curvature/slope constraint (Wolfe condition 2)

f

the scalar objective function to minimize

Attributes

Supertypes
trait LineSearch
class Object
trait Matchable
class Any
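
A hedged sketch of the vector-to-scalar wrapping mentioned above (assuming FunctionS2S abbreviates Double => Double, that VectorD resides in scalation.mathstat, and that c1 and c2 have defaults):

    import scalation.mathstat.VectorD                              // assumed package for VectorD

    def f (x: VectorD): Double = (x(0) - 3.0) * (x(0) - 3.0) + (x(1) - 4.0) * (x(1) - 4.0) + 1.0
    val x0  = VectorD (0.0, 0.0)                                   // current point
    val dir = VectorD (1.0, 1.0)                                   // search direction
    def fl (a: Double): Double = f (x0 + dir * a)                  // scalar slice of f along dir (wolfeLSTest2 idea)
    val wls = new WolfeLS (fl)                                     // c1 and c2 assumed to default
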
class WolfeLS2(f: FunctionV2S, var g: FunctionV2V, c1: Double, c2: Double)

The WolfeLS2 class performs an inexact line search on f to find a point x that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value of x satisfying the two Wolfe conditions.

f(x) <= f(0) + c1 * f'(0) * x      Wolfe condition 1 (Armijo condition)
|f'(x)| <= |c2 * f'(0)|            Wolfe condition 2 (Strong version)
f'(x) >= c2 * f'(0)                Wolfe condition 2 (Weak version, more robust)

It uses bisection (or interpolative search) to find an approximate local minimal point. Currently, the strong version is not supported. Note: the c1 and c2 defaults below are intended for Quasi-Newton methods such as BFGS or L-BFGS.

Value parameters

c1

constant for sufficient decrease (Wolfe condition 1: .0001 to .001)

c2

constant for curvature/slope constraint (Wolfe condition 2: .9 to .8)

f

the objective/loss function to minimize

g

the gradient of the objective/loss function

Attributes

Supertypes
class Object
trait Matchable
class Any
class WolfeLS3(f: FunctionV2S, var g: FunctionV2V, c1: Double, c2: Double, c3: Double, eg: Double)

The WolfeLS3 class performs an inexact line search on f to find a point x that exhibits (1) SDC: sufficient decrease (f(x) enough less than f(0)) and (2) CC: the slope at x is less steep than the slope at 0. That is, the line search looks for a value of x satisfying the two Wolfe conditions.

f(x) <= f(0) + c1 * f'(0) * x      Wolfe condition 1 (Armijo condition)
|f'(x)| <= |c2 * f'(0)|            Wolfe condition 2 (Strong version)
f'(x) >= c2 * f'(0)                Wolfe condition 2 (Weak version, more robust)

It uses bisection (or interpolative search) to find an approximate local minimal point. Currently, the strong version is not supported. Note: the c1 and c2 defaults below are intended for Quasi-Newton methods such as BFGS or L-BFGS.

Value parameters

c1

constant for sufficient decrease (Wolfe condition 1: .0001 to .001)

c2

constant for curvature/slope constraint (Wolfe condition 2: .9 to .8)

c3

constant for noise control condition

eg

estimate of gradient noise

f

the objective/loss function to minimize

g

the gradient of the objective/loss function

Attributes

Supertypes
class Object
trait Matchable
class Any
final class conjugateGradientTest

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any
final class coordinateDescentTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class goldenSectionLSTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class goldenSectionLSTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gradientDescentTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gradientDescentTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gradientDescentTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gradientDescentTest4

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gridSearchLSTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gridSearchLSTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gridSearchTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class gridSearchTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class hungarianTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class hungarianTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class integerTabuSearchTest

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any
final class lassoAdmmTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class lassoAdmmTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class lassoAdmmTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class nLPTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class nLPTest2

Attributes

Supertypes
class Object
trait Matchable
class Any

Attributes

Supertypes
class Object
trait Matchable
class Any
final class nelderMeadSimplexTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class newtonRaphsonTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class newtonRaphsonTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class newtonRaphsonTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class newton_NoLSTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class newton_NoLSTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class newton_NoLSTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class sPSATest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class tabuSearchTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class tabuSearchTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeConditionsTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS2Test

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS2Test2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS2Test3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS2Test4

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS3Test

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS3Test2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS3Test3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLS3Test4

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLSTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLSTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class wolfeLSTest3

Attributes

Supertypes
class Object
trait Matchable
class Any

Types

type FuncVec = (Double, VectorD)

Type definition: Tuple of the functional value f(x) and the point/vector x

Attributes

Value members

Concrete methods

inline def better(cand: FuncVec, best: FuncVec): FuncVec

Return the better solution, the one with smaller functional value.

Value parameters

best

the best solution found so far

cand

the candidate solution (functional value f and vector x)

Attributes

def blown(cand: FuncVec): Boolean

Check whether the candidate solution has blown up.

Value parameters

cand

the candidate solution (functional value f and vector x)

Attributes

def conjugateGradientTest(): Unit

The conjugateGradientTest main function is used to test the ConjugateGradient class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.conjugateGradientTest

Attributes

The conjugateGradientTest2 main function is used to test the ConjugateGradient class. f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.conjugateGradientTest2

Attributes

The conjugateGradientTest3 main function is used to test the ConjugateGradient class. f(x) = 1/x_0 + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.conjugateGradientTest3

Attributes

The conjugateGradient_NoLSTest main function is used to test the ConjugateGradient_NoLS class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.conjugateGradient_NoLSTest

Attributes

The conjugateGradient_NoLSTest2 main function is used to test the ConjugateGradient_NoLS class. f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.conjugateGradient_NoLSTest2

Attributes

The conjugateGradient_NoLSTest3 main function is used to test the ConjugateGradient_NoLS class. f(x) = 1/x_0 + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.conjugateGradient_NoLSTest3

Attributes

def coordinateDescentTest(): Unit

The coordinateDescentTest main function is used to test the CoordinateDescent class.

runMain scalation.optimization.coordinateDescentTest

Attributes

def fastsThresh(v: VectorD, th: Double): VectorD

Return the fast soft thresholding vector function.

Value parameters

th

the threshold (theta)

v

the vector to threshold

Attributes

def goldenSectionLSTest(): Unit

The goldenSectionLSTest main function is used to test the GoldenSectionLS class on scalar functions.

runMain scalation.optimization.goldenSectionLSTest

Attributes

def goldenSectionLSTest2(): Unit

The goldenSectionLSTest2 main function is used to test the GoldenSectionLS class on vector functions.

runMain scalation.optimization.goldenSectionLSTest2

Attributes

def gradientDescentTest(): Unit

The gradientDescentTest main function is used to test the GradientDescent class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gradientDescentTest

Attributes

def gradientDescentTest2(): Unit

The gradientDescentTest2 main function is used to test the GradientDescent class. f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gradientDescentTest2

Attributes

def gradientDescentTest3(): Unit

The gradientDescentTest3 main function is used to test the GradientDescent class. f(x) = 1/x(0) + x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gradientDescentTest3

Attributes

def gradientDescentTest4(): Unit

The gradientDescentTest4 main function is used to test the GradientDescent class. f(x) = x_0/4 + 5x_0^2 + x_0^4 - 9x_0^2 x_1 + 3x_1^2 + 2x_1^4

Attributes

See also

math.fullerton.edu/mathews/n2003/gradientsearch/GradientSearchMod/Links/GradientSearchMod_lnk_5.html

runMain scalation.optimization.gradientDescentTest4

The gradientDescent_AdamTest main function is used to test the GradientDescent_Adam class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gradientDescent_AdamTest

Attributes

The gradientDescent_Mo2Test main function is used to test the GradientDescent_Mo2 class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gradientDescent_Mo2Test

Attributes

The gradientDescent_MoTest main function is used to test the GradientDescent_Mo class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gradientDescent_MoTest

Attributes

The gradientDescent_NoLSTest main function is used to test the GradientDescent_NoLS class. f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gradientDescent_NoLSTest

Attributes

def gridSearchLSTest(): Unit

The gridSearchLSTest main function is used to test the GridSearchLS class on scalar functions.

runMain scalation.optimization.gridSearchLSTest

Attributes

def gridSearchLSTest2(): Unit

The gridSearchLSTest2 main function is used to test the GridSearchLS class on vector functions.

runMain scalation.optimization.gridSearchLSTest2

Attributes

def gridSearchTest(): Unit

The gridSearchTest main function is used to test the GridSearch class on f(x): f(x) = (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gridSearchTest

Attributes

def gridSearchTest2(): Unit

The gridSearchTest2 main function is used to test the GridSearch class on f(x): f(x) = x_0^4 + (x_0 - 3)^2 + (x_1 - 4)^2 + 1

runMain scalation.optimization.gridSearchTest2

Attributes

def hungarian(cost: MatrixD): (VectorI, VectorD)

The hungarian method is an O(n^3) [ O(J^2W) ] implementation of the Hungarian algorithm (or Kuhn-Munkres algorithm) for assigning jobs to workers. Given J jobs and W workers, find a minimal cost assignment of JOBS to WORKERS such that each worker is assigned to at most one job and each job has one worker assigned. It solves the minimum-weighted bipartite graph matching problem.

minimize sum_j { cost(j, w) }

Value parameters

cost

the cost matrix: cost(j, w) = cost of assigning job j to worker w

Attributes
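
A hedged usage sketch (assuming MatrixD resides in scalation.mathstat and supports the (rows, cols) plus values factory; the cost values are illustrative only):

    import scalation.mathstat.MatrixD                              // assumed package for MatrixD

    // cost(j, w) = cost of assigning job j to worker w (made-up values)
    val cost = MatrixD ((3, 3), 4.0, 2.0, 8.0,
                                4.0, 3.0, 7.0,
                                3.0, 1.0, 6.0)                     // factory form assumed
    val job_cost = hungarian (cost)                                // (assignment vector, accumulating costs)
    showAssignments (job_cost, cost)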

def hungarianTest(): Unit

The hungarianTest main function tests the hungarian method.

Attributes

See also

http://people.whitman.edu/~hundledr/courses/M339S20/M339/Ch07_5.pdf
Minimal total cost = 51

runMain scalation.optimization.hungarianTest

def hungarianTest2(): Unit

The hungarianTest2 main function tests the hungarian method.

Attributes

See also

https://d13mk4zmvuctmz.cloudfront.net/assets/main/study-material/notes/electrical-engineering_engineering_operations-research_assignment-problems_notes.pdf
Solution:
job 4 assigned to worker 0 with cost (j = 4, w = 0) = 1.0
job 1 assigned to worker 1 with cost (j = 1, w = 1) = 5.0
job 0 assigned to worker 2 with cost (j = 0, w = 2) = 3.0
job 2 assigned to worker 3 with cost (j = 2, w = 3) = 2.0
job 3 assigned to worker 4 with cost (j = 3, w = 4) = 9.0
job -1 assigned to worker 5 with cost (j = -1, w = 5) = NA (worker 5 is unassigned)
Minimal total cost = 20

runMain scalation.optimization.hungarianTest2

def integerTabuSearchTest(): Unit

The integerTabuSearchTest main method is used to test the IntegerTabuSearch class (unconstrained).

runMain scalation.optimization.integerTabuSearchTest

Attributes

The integerTabuSearchTest2 main method is used to test the IntegerTabuSearch class (constrained).

runMain scalation.optimization.integerTabuSearchTest2

Attributes

def lassoAdmmTest(): Unit

The lassoAdmmTest main function tests the LassoAdmm object using the following regression equation. y = b dot x = b_0 + b_1x_1 + b_2x_2.

Attributes

See also

statmaster.sdu.dk/courses/st111/module03/index.html

runMain scalation.optimization.lassoAdmmTest

def lassoAdmmTest2(): Unit

The lassoAdmmTest2 main function tests the LassoAdmm object using the following regression equation. y = b dot x = b_0 + b_1x_1 + b_2x_2.

Attributes

See also

www.cs.jhu.edu/~svitlana/papers/non_refereed/optimization_1.pdf

runMain scalation.optimization.lassoAdmmTest2

def lassoAdmmTest3(): Unit

The lassoAdmmTest3 main function tests the LassoAdmm object's use of soft-thresholding.

runMain scalation.optimization.lassoAdmmTest3

Attributes

def nLPTest(): Unit

The nLPTest main function is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:
- Gradient Descent with Golden Section Line Search
- Polak-Ribiere Conjugate Gradient with Golden Section Line Search
- Gradient Descent with Wolfe Line Search (option in BFGS)
- Broyden–Fletcher–Goldfarb–Shanno (BFGS) with Wolfe Line Search
- Limited Memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) with Wolfe Line Search
- Limited Memory Broyden–Fletcher–Goldfarb–Shanno Bounded (LBFGS_B) with Wolfe Line Search
- Nelder-Mead Simplex
- Coordinate Descent
- Grid Search

runMain scalation.optimization.nLPTest

Attributes

def nLPTest2(): Unit

The nLPTest2 main function is used to test several Non-Linear Programming (NLP) algorithms on constrained problems. FIX

runMain scalation.optimization.nLPTest2

Attributes

The nelderMeadSimplex2Test main function is used to test the NelderMeadSimplex2 class.

runMain scalation.optimization.nelderMeadSimplex2Test

Attributes

def nelderMeadSimplexTest(): Unit

The nelderMeadSimplexTest main function is used to test the NelderMeadSimplex class.

runMain scalation.optimization.nelderMeadSimplexTest

Attributes

def newtonRaphsonTest(): Unit

The newtonRaphsonTest main function is used to test the NewtonRaphson class. This test passes in a function for the derivative to find a root.

runMain scalation.optimization.newtonRaphsonTest

Attributes

def newtonRaphsonTest2(): Unit

The newtonRaphsonTest2 main function is used to test the NewtonRaphson class. This test numerically approximates the derivative to find a root.

runMain scalation.optimization.newtonRaphsonTest2

Attributes

def newtonRaphsonTest3(): Unit

The newtonRaphsonTest3 main function is used to test the NewtonRaphson class. This test numerically approximates the derivatives to find minima.

runMain scalation.optimization.newtonRaphsonTest3

Attributes

def newton_NoLSTest(): Unit

The newton_NoLSTest main function is used to test the Newton_NoLS class. This test numerically approximates the first derivative (gradient) and the second derivative (Hessian) to find minima.

runMain scalation.optimization.newton_NoLSTest

Attributes

def newton_NoLSTest2(): Unit

The newton_NoLSTest2 main function is used to test the Newton_NoLS class. This test functionally evaluates the first derivative (gradient) and uses the Jacobian to numerically compute the second derivative (Hessian) from the gradient to find minima.

runMain scalation.optimization.newton_NoLSTest2

Attributes

def newton_NoLSTest3(): Unit

The newton_NoLSTest3 main function is used to test the Newton_NoLS class. This test uses the Rosenbrock function.

runMain scalation.optimization.newton_NoLSTest3

Attributes

def sPSATest(): Unit

The sPSATest main function tests the SPSA class.

runMain scalation.optimization.sPSATest

Attributes

def showAssignments(job_cost: (VectorI, VectorD), cost: MatrixD): Unit

Show the assignments of jobs to workers and the accumulating costs.

Value parameters

cost

the cost matrix: cost(j, w) = cost of assigning job j to worker w

job_cost

the (job, accumulated cost) tuple

Attributes

def softThresh(x: Double, th: Double): Double

Return the soft thresholding scalar function.

Value parameters

th

the threshold (theta)

x

the scalar to threshold

Attributes
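
The soft-thresholding operator is commonly defined as follows (the exact formula used by this method is not shown on this page):

    softThresh (x, th) = sign (x) * max (|x| - th, 0)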

def tabuSearchTest(): Unit

The tabuSearchTest main method is used to test the TabuSearch class (unconstrained).

runMain scalation.optimization.tabuSearchTest

Attributes

def tabuSearchTest2(): Unit

The tabuSearchTest2 main method is used to test the TabuSearch class (constrained).

runMain scalation.optimization.tabuSearchTest2

Attributes

def wolfeConditionsTest(): Unit

The wolfeConditionsTest main function is used to test the WolfeConditions class.

runMain scalation.optimization.wolfeConditionsTest

Attributes

def wolfeLS2Test(): Unit

The wolfeLS2Test main function is used to test the WolfeLS2 class on scalar functions.

runMain scalation.optimization.wolfeLS2Test

Attributes

def wolfeLS2Test2(): Unit

The wolfeLS2Test2 main function is used to test the WolfeLS2 class on scalar functions.

runMain scalation.optimization.wolfeLS2Test2

Attributes

def wolfeLS2Test3(): Unit

The wolfeLS2Test3 main function is used to test the WolfeLS2 class on vector functions. This test uses the Rosenbrock function.

Attributes

See also

https://mikl.dk/post/2019-wolfe-conditions/

runMain scalation.optimization.wolfeLS2Test3

def wolfeLS2Test4(): Unit

The wolfeLS2Test4 main function is used to test the WolfeLS2 class on scalar functions.

runMain scalation.optimization.wolfeLS2Test4

Attributes

def wolfeLS3Test(): Unit

The wolfeLS3Test main function is used to test the WolfeLS3 class on scalar functions.

runMain scalation.optimization.wolfeLS3Test

Attributes

def wolfeLS3Test2(): Unit

The wolfeLS3Test2 main function is used to test the WolfeLS3 class on scalar functions.

runMain scalation.optimization.wolfeLS3Test2

Attributes

def wolfeLS3Test3(): Unit

The wolfeLS3Test3 main function is used to test the WolfeLS3 class on vector functions. This test uses the Rosenbrock function.

Attributes

See also

https://mikl.dk/post/2019-wolfe-conditions/

runMain scalation.optimization.wolfeLS3Test3

def wolfeLS3Test4(): Unit

The wolfeLS3Test4 main function is used to test the WolfeLS3 class on scalar functions.

runMain scalation.optimization.wolfeLS3Test4

Attributes

def wolfeLSTest(): Unit

The wolfeLSTest main function is used to test the WolfeLS class on scalar functions.

runMain scalation.optimization.wolfeLSTest

Attributes

def wolfeLSTest2(): Unit

The wolfeLSTest2 main function is used to test the WolfeLS class on vector functions.

runMain scalation.optimization.wolfeLSTest2

Attributes

def wolfeLSTest3(): Unit

The wolfeLSTest3 main function is used to test the WolfeLS2 class on vector functions. This test uses the Rosenbrock function.

Attributes

See also

https://mikl.dk/post/2019-wolfe-conditions/

runMain scalation.optimization.wolfeLSTest3

Concrete fields

val G_RATIO: Double

the golden ratio (1.618033988749895)

Attributes

val G_SECTION: Double

the golden section number (0.6180339887498949)

Attributes
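
These two constants are related by

    G_RATIO   = (1 + sqrt (5)) / 2 = 1.618033988749895
    G_SECTION = G_RATIO - 1 = 1 / G_RATIO = 0.6180339887498949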