package minima
The minima package contains classes, traits and objects for optimization to find minima.
Type Members
- class Brent extends AnyRef
The Brent class is used to find roots (zeros) for a one-dimensional (scalar) function 'g'. Depending on the FunctionSelector, it can find zeros for derivatives or finite differences, which may indicate optima for function 'g'. The code is directly translated from the following:
- See also math.haifa.ac.il/ronn/NA/NAprogs/brent.java
- class CheckLP extends Error
The CheckLP class checks the solution to Linear Programming (LP) problems. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', determine whether the values for the solution/decision vector 'x' minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
Check the feasibility and optimality of the solution.
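To make the check concrete, here is a minimal sketch in plain Scala. It uses bare arrays and an assumed tolerance EPS rather than the ScalaTion vector/matrix types, so it illustrates the idea, not the class's actual API:

    object CheckLPSketch extends App
    {
        val EPS = 1e-9                                         // feasibility tolerance (assumed)

        def dot (u: Array [Double], v: Array [Double]): Double =
            (for (i <- u.indices) yield u(i) * v(i)).sum

        // check x >= 0 and a x <= b; if feasible, return the objective value c x
        def check (a: Array [Array [Double]], b: Array [Double],
                   c: Array [Double], x: Array [Double]): Option [Double] =
            if (x.forall (_ >= -EPS) && a.indices.forall (i => dot (a(i), x) <= b(i) + EPS))
                Some (dot (c, x))
            else None

        val a = Array (Array (1.0, 1.0), Array (2.0, 1.0))     // constraint matrix
        val b = Array (4.0, 6.0)                               // limit/RHS vector
        val c = Array (-3.0, -2.0)                             // cost vector
        println (check (a, b, c, Array (2.0, 2.0)))            // Some(-10.0): feasible
        println (check (a, b, c, Array (5.0, 0.0)))            // None: infeasible
    }   // CheckLPSketch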
- class ConjugateGradient extends Minimizer with Error
The ConjugateGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which for this implementation uses the Polak-Ribiere technique.
dir_k = -gradient (x) + beta * dir_k-1
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
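A minimal sketch of the Polak-Ribiere direction update on bare arrays. The 'max (0, .)' restart guard is a common variant and an assumption here, not necessarily what the class does:

    object PRCGSketch extends App
    {
        type Vec = Array [Double]

        def dot (u: Vec, v: Vec): Double = (for (i <- u.indices) yield u(i) * v(i)).sum

        // Polak-Ribiere weight from the previous gradient g0 and current gradient g1
        def betaPR (g0: Vec, g1: Vec): Double =
        {
            val d = (for (i <- g1.indices) yield g1(i) - g0(i)).toArray
            math.max (0.0, dot (g1, d) / dot (g0, g0))         // clamp at 0 => restart
        }   // betaPR

        // dir_k = -gradient (x) + beta * dir_k-1
        def nextDir (g0: Vec, g1: Vec, dirPrev: Vec): Vec =
        {
            val b = betaPR (g0, g1)
            (for (i <- g1.indices) yield -g1(i) + b * dirPrev(i)).toArray
        }   // nextDir

        val g0 = Array (4.0, 2.0); val g1 = Array (1.0, -1.0)
        println (nextDir (g0, g1, Array (-4.0, -2.0)).mkString (", "))
    }   // PRCGSketch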
- class CoordinateDescent extends Minimizer with Error
The CoordinateDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Coordinate Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm picks coordinate directions (cyclically) and takes steps in those directions. The algorithm iterates until it converges.
dir_k = kth coordinate direction
minimize f(x)
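A minimal sketch of cyclic coordinate descent, with a crude fixed-step probe standing in for a proper line search (the step size and iteration limit are illustrative assumptions):

    object CoordinateDescentSketch extends App
    {
        // cycle through the coordinate directions, probing +step and -step in each
        def minimize (f: Array [Double] => Double, x0: Array [Double],
                      step: Double = 0.01, maxIt: Int = 10000): Array [Double] =
        {
            val x = x0.clone
            for (_ <- 1 to maxIt; j <- x.indices) {
                val f0 = f (x)
                x(j) += step                                   // probe forward
                if (f (x) >= f0) {
                    x(j) -= 2.0 * step                         // probe backward
                    if (f (x) >= f0) x(j) += step              // no improvement: revert
                }
            }
            x
        }   // minimize

        val f = (x: Array [Double]) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
        println (minimize (f, Array (0.0, 0.0)).mkString (", "))   // near (3, -1)
    }   // CoordinateDescentSketch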
- class DualSimplex extends MinimizerLP
The DualSimplex class solves Linear Programming (LP) problems using a tableau based Dual Simplex Algorithm. It is particularly useful when re-optimizing after a constraint has been added. The algorithm starts with an infeasible super-optimal solution and moves toward (primal) feasibility and optimality. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
Creates an 'MM-by-NN' simplex tableau with
-- [0..M-1, 0..N-1]   = a (constraint matrix)
-- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
-- [0..M-1, NN-1]     = b (limit/RHS vector)
-- [M, 0..NN-2]       = c (cost vector)
- class GeneticAlgorithm extends AnyRef
The GeneticAlgorithm class performs local search to find minima of functions defined on integer vector domains (Z^n).
minimize f(x) subject to g(x) <= 0, x in Z^n
- class GoldenSectionLS extends LineSearch
The GoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but it can be made larger during the expansion phase that occurs before the recursive golden section search is called. It works on scalar functions (see GoldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see GoldenSectionLSTest2).
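To make the recursion concrete, here is a minimal golden section step on a bracketing interval [a, b] in plain Scala. The expansion phase is omitted, and for brevity this sketch re-evaluates both interior points each call, whereas the class above keeps one evaluation per iteration:

    object GoldenSectionSketch extends App
    {
        val G = (math.sqrt (5.0) - 1.0) / 2.0                 // golden section number 0.618...

        // recursively shrink [a, b], keeping the better golden-section interior point
        def gsearch (f: Double => Double, a: Double, b: Double, tol: Double = 1e-7): Double =
            if (b - a < tol) (a + b) / 2.0
            else {
                val c = b - G * (b - a)                       // left interior point
                val d = a + G * (b - a)                       // right interior point
                if (f (c) < f (d)) gsearch (f, a, d, tol) else gsearch (f, c, b, tol)
            }   // gsearch

        val f = (x: Double) => (x - 2.0) * (x - 2.0) + 1.0
        println (gsearch (f, 0.0, 10.0))                      // near 2.0
    }   // GoldenSectionSketch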
- class GradientDescent extends Minimizer with Error
The GradientDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Gradient Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The class assumes that partial derivative functions are not available unless explicitly given via the 'setDerivatives' method.
dir_k = -gradient (x)
minimize f(x)
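A minimal sketch with a forward-difference gradient and a fixed learning rate standing in for the line search (both are assumptions made for illustration):

    object GradientDescentSketch extends App
    {
        type Vec = Array [Double]
        val H = 1e-6                                          // finite-difference step (assumed)

        // forward-difference approximation of the gradient of f at x
        def grad (f: Vec => Double, x: Vec): Vec =
            (for (j <- x.indices) yield {
                val xp = x.clone; xp(j) += H
                (f (xp) - f (x)) / H
            }).toArray

        // fixed learning rate 'eta' stands in for the line search
        def minimize (f: Vec => Double, x0: Vec, eta: Double = 0.1, maxIt: Int = 1000): Vec =
        {
            var x = x0
            for (_ <- 1 to maxIt) {
                val g = grad (f, x)
                x = (for (j <- x.indices) yield x(j) - eta * g(j)).toArray   // dir_k = -gradient
            }
            x
        }   // minimize

        val f = (x: Vec) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
        println (minimize (f, Array (0.0, 0.0)).mkString (", "))             // near (3, -1)
    }   // GradientDescentSketch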
- class GridLS extends LineSearch
The GridLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given. It works on scalar functions (see GridLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see GridLSTest2).
- class IntegerGoldenSectionLS extends AnyRef
The IntegerGoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but it can be made larger during the expansion phase that occurs before the recursive golden section search is called. It works on scalar functions (see IntegerGoldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see IntegerGoldenSectionLSTest2).
- class IntegerLP extends AnyRef
The IntegerLP class solves Integer Linear Programming (ILP) and Mixed Integer Linear Programming (MILP) problems recursively using the Simplex algorithm. First, an LP problem is solved. If the optimal solution vector 'x' is entirely integer valued, the ILP is solved. If not, pick the first 'x_j' that is not integer valued. Define two new LP problems which bound 'x_j' to the integer below and above, respectively. Branch by solving each of these LP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound (a structural skeleton is sketched below). An exclusion set may optionally be provided for MILP problems. FIX: Use the Dual Simplex Algorithm for better performance. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0, some x_i must be integer valued
Make 'b_i' negative to indicate a '>=' constraint.
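A structural skeleton of the Branch and Bound recursion. Here 'solveLP' is a hypothetical stand-in for an LP relaxation solver that honors extra bounds lo <= x <= hi and returns the minimizer with its objective value, or None when infeasible; the toy one-dimensional relaxation at the end exists only to make the sketch runnable:

    object BranchAndBoundSketch extends App
    {
        type Vec = Array [Double]
        val EPS = 1e-9

        def solveILP (solveLP: (Vec, Vec) => Option [(Vec, Double)], lo: Vec, hi: Vec,
                      best: Option [(Vec, Double)] = None): Option [(Vec, Double)] =
            solveLP (lo, hi) match {
                case None => best                                        // infeasible: prune
                case Some ((_, fx)) if best.exists (_._2 <= fx) => best  // bound: prune
                case Some ((x, fx)) =>
                    val j = x.indexWhere (v => math.abs (v - math.round (v)) > EPS)
                    if (j < 0) Some ((x, fx))                            // all integer: new incumbent
                    else {                                               // branch on x_j
                        val hiDn = hi.clone; hiDn(j) = math.floor (x(j))
                        val loUp = lo.clone; loUp(j) = math.ceil (x(j))
                        val b1 = solveILP (solveLP, lo, hiDn, best)      // branch: x_j <= floor
                        solveILP (solveLP, loUp, hi, b1)                 // branch: x_j >= ceil
                    }
            }   // solveILP

        // toy 1-D relaxation: minimize x subject to lo(0) <= x <= hi(0)
        def toyLP (lo: Vec, hi: Vec): Option [(Vec, Double)] =
            if (lo(0) > hi(0)) None else Some ((Array (lo(0)), lo(0)))

        val Some ((x, f)) = solveILP (toyLP, Array (0.5), Array (3.7))
        println (s"x = ${x.mkString (", ")}, f = $f")                    // x = 1.0, f = 1.0
    }   // BranchAndBoundSketch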
- class IntegerLocalSearch extends AnyRef
The IntegerLocalSearch class performs local search to find minima of functions defined on integer vector domains (Z^n).
minimize f(x) subject to g(x) <= 0, x in Z^n
- class IntegerNLP extends AnyRef
The IntegerNLP class solves Integer Non-Linear Programming (INLP) and Mixed Integer Non-Linear Programming (MINLP) problems recursively using Branch and Bound. First, an NLP problem is solved. If the optimal solution vector 'x' is entirely integer valued, the INLP is solved. If not, pick the first 'x_j' that is not integer valued. Define two new NLP problems which bound 'x_j' to the integer below and above, respectively. Branch by solving each of these NLP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. An exclusion set may optionally be provided for MINLP problems. Given an objective function 'f(x)' and a constraint function 'g(x)', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying the constraint function, i.e.,
minimize f(x) subject to g(x) <= 0, some x_i must be integer valued
Make 'b_i' negative to indicate a '>=' constraint.
- class IntegerTabuSearch extends AnyRef
The IntegerTabuSearch class performs tabu search to find minima of functions defined on integer vector domains 'Z^n'. Tabu search will not re-visit points already deemed sub-optimal.
minimize f(x) subject to g(x) <= 0, x in Z^n
- class L_BFGS_B extends Minimizer
The L_BFGS_B class implements the Limited memory Broyden–Fletcher–Goldfarb–Shanno for Bound constrained optimization (L-BFGS-B) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. L-BFGS-B determines a search direction by deflecting the steepest descent direction vector (opposite the gradient), multiplying it by a matrix that approximates the inverse Hessian. Furthermore, only a few vectors represent the approximation of the Hessian Matrix (limited memory). The parameters estimated are also bounded within user specified lower and upper bounds.
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
- trait LineSearch extends AnyRef
The LineSearch trait specifies the basic methods that Line Search (LS) algorithms in classes extending this trait must implement. Line search is for one dimensional optimization problems. The algorithms perform line search to find an 'x'-value that minimizes a function 'f' that is passed into an implementing class.
x* = argmin f(x)
- trait Minimizer extends AnyRef
The Minimizer trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
where f is the objective function to be minimized
      g is the constraint function to be satisfied, if any
Classes mixing in this trait must implement a function 'fg' that rolls the constraints into the objective function as penalties for constraint violation, a one-dimensional Line Search (LS) algorithm 'lineSearch' and an iterative method 'solve' that searches for improved solutions, i.e., 'x'-vectors with lower objective function values 'f(x)'.
- trait MinimizerLP extends Error
The MinimizerLP trait sets the pattern for optimization algorithms for solving Linear Programming (LP) problems of the form:
minimize c x subject to a x <= b, x >= 0
where a is the constraint matrix
      b is the limit/RHS vector
      c is the cost vector
Classes mixing in this trait must implement an objective function 'objF' and an iterative method 'solve' that searches for improved solutions, i.e., 'x'-vectors with lower objective function values.
- class NelderMeadSimplex extends Minimizer with Error
The NelderMeadSimplex solves Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function 'f' and its dimension 'n', the algorithm moves a simplex defined by n + 1 points in order to find an optimal solution. The algorithm is derivative-free.
minimize f(x)
- class NewtonRaphson extends AnyRef
The NewtonRaphson class is used to find roots (zeros) for a one-dimensional (scalar) function 'g'. Depending on the FunctionSelector, it can find zeros for derivatives or finite differences, which may indicate optima for function 'g'. Also, for optimization, one may pass in the derivative of the function, since finding zeros for the derivative corresponds to finding optima for the function.
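A minimal sketch of the optimization use: find a zero of the derivative 'fp' using its own derivative 'fpp' (the names, tolerance and iteration limit here are illustrative assumptions):

    object NewtonRaphsonSketch extends App
    {
        // Newton iteration x := x - fp(x) / fpp(x) to find a zero of fp
        def root (fp: Double => Double, fpp: Double => Double,
                  x0: Double, tol: Double = 1e-10, maxIt: Int = 100): Double =
        {
            var x  = x0
            var it = 0
            while (math.abs (fp (x)) > tol && it < maxIt) { x -= fp (x) / fpp (x); it += 1 }
            x
        }   // root

        // optimize g(x) = (x - 2)^2 by finding the zero of g'(x) = 2 (x - 2)
        println (root (x => 2.0 * (x - 2.0), _ => 2.0, x0 = 0.0))   // near 2.0
    }   // NewtonRaphsonSketch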
- class QuadraticSimplex extends Error
The QuadraticSimplex class solves Quadratic Programming (QP) problems using the Quadratic Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b', cost matrix 'q' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = 1/2 x q x + c x subject to a x <= b, x >= 0
Creates an 'MM-by-NN' simplex tableau. This implementation is restricted to linear constraints 'a x <= b' and 'q' being a positive semi-definite matrix. Pivoting must now also handle non-linear complementary slackness.
- See also www.engineering.uiowa.edu/~dbricker/lp_stacks.html
- class QuasiNewton extends Minimizer with Error
The QuasiNewton class implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. BFGS determines a search direction by deflecting the steepest descent direction vector (opposite the gradient), multiplying it by a matrix that approximates the inverse Hessian. Note, this implementation may be set up to work with the matrix 'b' (approximate Hessian) or directly with the 'binv' matrix (the inverse of 'b').
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
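A minimal sketch of the 'binv' update on bare arrays. Given the step s = x' - x and gradient change y = grad' - grad, the standard BFGS inverse-Hessian formula is binv' = (I - rho s y^T) binv (I - rho y s^T) + rho s s^T with rho = 1 / (y . s); the element-wise expansion below assumes this textbook formula and a symmetric 'binv', and is not copied from the QuasiNewton source:

    object BFGSUpdateSketch extends App
    {
        type Vec = Array [Double]
        type Mat = Array [Array [Double]]

        def dot (u: Vec, v: Vec): Double = (for (i <- u.indices) yield u(i) * v(i)).sum

        // element-wise expansion of (I - rho s y^T) binv (I - rho y s^T) + rho s s^T
        def updateBinv (binv: Mat, s: Vec, y: Vec): Mat =
        {
            val rho = 1.0 / dot (y, s)
            val by  = binv.map (row => dot (row, y))               // binv y
            val yby = dot (y, by)                                  // y^T binv y
            Array.tabulate (s.length, s.length) ((i, j) =>
                binv(i)(j) - rho * (s(i) * by(j) + by(i) * s(j))
                           + rho * (1.0 + rho * yby) * s(i) * s(j))
        }   // updateBinv

        val binv: Mat = Array (Array (1.0, 0.0), Array (0.0, 1.0)) // start from identity
        val s = Array (0.1, 0.2); val y = Array (0.3, 0.1)
        println (updateBinv (binv, s, y).map (_.mkString ("  ")).mkString ("\n"))
        // the deflected search direction would then be dir = -(binv * gradient)
    }   // BFGSUpdateSketch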
- class RevisedSimplex extends MinimizerLP
The RevisedSimplex class solves Linear Programming (LP) problems using the Revised Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
The Revised Simplex Algorithm operates on 'b_inv', which is the inverse of the basis-matrix ('ba' = 'B'). It has benefits over the Simplex Algorithm (less memory and reduced chance of round off errors).
- class Simplex extends MinimizerLP
The Simplex class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
In case of 'a_i x >= b_i', use '-b_i' as an indicator of a '>=' constraint. The program will flip such negative b_i back to positive as well as use a surplus variable instead of the usual slack variable, i.e.,
a_i x <= b_i => a_i x + s_i = b_i   // use slack variable s_i with coefficient 1
a_i x >= b_i => a_i x + s_i = b_i   // use surplus variable s_i with coefficient -1
Creates an MM-by-NN simplex tableau with
-- [0..M-1, 0..N-1]   = a (constraint matrix)
-- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
-- [0..M-1, NN-1]     = b (limit/RHS vector)
-- [M, 0..NN-2]       = c (cost vector)
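The heart of any tableau-based simplex implementation is the pivot. A minimal Gauss-Jordan pivot sketch on a bare 2-D array follows; the real class also tracks the basis and chooses the pivot entry (k, l) by entering/leaving variable rules, which this sketch leaves out:

    object SimplexPivotSketch extends App
    {
        // pivot tableau 't' on entry (k, l): scale row k so t(k)(l) = 1, then
        // zero out column l in every other row (including the cost row)
        def pivot (t: Array [Array [Double]], k: Int, l: Int): Unit =
        {
            val p = t(k)(l)
            for (j <- t(k).indices) t(k)(j) /= p
            for (i <- t.indices if i != k) {
                val factor = t(i)(l)
                if (factor != 0.0) for (j <- t(i).indices) t(i)(j) -= factor * t(k)(j)
            }
        }   // pivot

        val t = Array (Array ( 2.0,  1.0, 1.0, 0.0, 4.0),    // 2x +  y + s_1 = 4
                       Array ( 1.0,  3.0, 0.0, 1.0, 6.0),    //  x + 3y + s_2 = 6
                       Array (-3.0, -2.0, 0.0, 0.0, 0.0))    // cost row
        pivot (t, 0, 0)                                      // bring x into the basis
        println (t.map (_.mkString ("\t")).mkString ("\n"))
    }   // SimplexPivotSketch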
- class Simplex2P extends MinimizerLP
The Simplex2P class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
In case of 'a_i x >= b_i', use '-b_i' as an indicator of a '>=' constraint. The program will flip such negative b_i back to positive as well as use a surplus and artificial variable instead of the usual slack variable, i.e.,
a_i x <= b_i => a_i x + s_i = b_i   // use slack variable s_i with coefficient 1
a_i x >= b_i => a_i x + s_i = b_i   // use surplus variable s_i with coefficient -1
For each '>=' constraint, an artificial variable is introduced and put into the initial basis. These artificial variables must be removed from the basis during Phase I of the Two-Phase Simplex Algorithm. After this, or if there are no artificial variables, Phase II is used to find an optimal value for 'x' and the optimum value for 'f'.
Creates an 'MM-by-nn' simplex tableau with
-- [0..M-1, 0..N-1]    = a (constraint matrix)
-- [0..M-1, N..M+N-1]  = s (slack/surplus variable matrix)
-- [0..M-1, M+N..nn-2] = r (artificial variable matrix)
-- [0..M-1, nn-1]      = b (limit/RHS vector)
-- [M, 0..nn-2]        = c (cost vector)
- class SimplexBG extends MinimizerLP
The SimplexBG class solves Linear Programming (LP) problems using the Bartels-Golub (BG) Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
The BG Simplex Algorithm performs LU Factorization/Decomposition of the basis-matrix ('ba' = 'B') rather than computing inverses ('b_inv'). It has benefits over the (Revised) Simplex Algorithm (less run-time, less memory, and much reduced chance of round off errors).
- class SimplexFT extends MinimizerLP
The SimplexFT class solves Linear Programming (LP) problems using the Forrest-Tomlin (FT) Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
The FT Simplex Algorithm performs LU Factorization/Decomposition of the basis-matrix ('ba' = 'B') rather than computing inverses ('b_inv'). It has benefits over the (Revised) Simplex Algorithm (less run-time, less memory, and much reduced chance of round off errors).
- class StochasticGradient extends Minimizer with Error
The StochasticGradient class solves unconstrained Non-Linear Programming (NLP) problems using the Stochastic Gradient Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The algorithm is stochastic in the sense that only a single batch is used in each step of the optimization. Examples (a number of rows) are chosen for each batch. FIX - provide option to randomly select samples in batch
dir_k = -gradient (x)
minimize f(x)
- See also leon.bottou.org/publications/pdf/compstat-2010.pdf
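A minimal mini-batch SGD sketch for a least-squares objective. The batch size, learning rate and random sampling are illustrative assumptions (per the FIX note above, random sampling is not what the class currently does):

    object StochasticGradientSketch extends App
    {
        import scala.util.Random
        type Vec = Array [Double]
        val rng = new Random (0)

        def dot (u: Vec, v: Vec): Double = (for (i <- u.indices) yield u(i) * v(i)).sum

        // minimize the mean of (x_i . w - y_i)^2 / 2 over the rows by mini-batch SGD
        def sgd (xs: Array [Vec], y: Vec, w0: Vec,
                 eta: Double = 0.05, batch: Int = 2, epochs: Int = 2000): Vec =
        {
            val w = w0.clone
            for (_ <- 1 to epochs) {
                val rows = Array.fill (batch)(rng.nextInt (xs.length))   // sample a batch
                val g    = Array.fill (w.length)(0.0)
                for (i <- rows) {
                    val e = dot (xs(i), w) - y(i)                        // residual of row i
                    for (j <- w.indices) g(j) += e * xs(i)(j) / batch
                }
                for (j <- w.indices) w(j) -= eta * g(j)                  // step opposite gradient
            }
            w
        }   // sgd

        val xs = Array (Array (1.0), Array (2.0), Array (3.0))
        val y  = Array (2.0, 4.0, 6.0)                                   // y = 2 x
        println (sgd (xs, y, Array (0.0)).mkString (", "))               // near 2.0
    }   // StochasticGradientSketch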
- class WolfeLS extends LineSearch
The WolfeLS class performs an inexact line search on 'f' to find a point 'x' that exhibits (1) sufficient decrease ('f(x)' enough less than 'f(0)') and (2) a slope at 'x' that is less steep than the slope at 0. That is, the line search looks for a value for 'x' satisfying the two Wolfe conditions.
f(x) <= f(0) + c1 * f'(0) * x    Wolfe condition 1 (Armijo condition)
|f'(x)| <= |c2 * f'(0)|          Wolfe condition 2 (Strong version)
f'(x) >= c2 * f'(0)              Wolfe condition 2 (Weak version, more robust)
It works on scalar functions (see WolfeLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see WolfeLSTest2).
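A minimal sketch of checking the Armijo and weak curvature conditions for a candidate step along the search line, using the textbook defaults c1 = 1e-4 and c2 = 0.9 (assumed here; the class may use different constants):

    object WolfeSketch extends App
    {
        // phi(x) = f(x0 + x * dir) restricted to the search line; dphi is its derivative
        def wolfeOK (phi: Double => Double, dphi: Double => Double, x: Double,
                     c1: Double = 1e-4, c2: Double = 0.9): Boolean =
            phi (x) <= phi (0.0) + c1 * dphi (0.0) * x &&      // condition 1: sufficient decrease
            dphi (x) >= c2 * dphi (0.0)                        // condition 2: curvature (weak)

        val phi  = (x: Double) => (x - 1.0) * (x - 1.0)        // minimized at x = 1
        val dphi = (x: Double) => 2.0 * (x - 1.0)
        println (wolfeOK (phi, dphi, 0.9))                     // true: near the minimizer
        println (wolfeOK (phi, dphi, 0.01))                    // false: curvature rejects tiny steps
    }   // WolfeSketch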
Value Members
- val G_RATIO: Double
the golden ratio (1.618033988749895)
- val G_SECTION: Double
the golden section number (0.6180339887498949)
- object AugLagrangian
The AugLagrangian object implements the Augmented Lagrangian Method for solving equality constrained optimization problems. Minimize objective function 'f' subject to constraint 'h' to find an optimal solution for 'x'.
min f(x) s.t. h(x) = 0
where f = objective function
      h = equality constraint
      x = solution vector
Note: the hyper-parameters 'eta' and 'p0' will need to be tuned per problem.
- See also AugLagrangianTest for how to set up 'f', 'h' and 'grad' functions
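A minimal one-dimensional sketch of the method: alternate an approximate unconstrained minimization of the augmented Lagrangian with a multiplier update. Here p0 = 1, eta = 2 (penalty doubling) and the crude finite-difference inner descent are all assumptions made to keep the sketch short:

    object AugLagrangianSketch extends App
    {
        // minimize f(x) subject to h(x) = 0 via L(x) = f(x) - lambda h(x) + p/2 h(x)^2
        def solve (f: Double => Double, h: Double => Double, x0: Double): Double =
        {
            val dx = 1e-6                                        // finite-difference step
            var (x, lambda, p) = (x0, 0.0, 1.0)                  // p0 = 1 (assumed)
            for (_ <- 1 to 7) {                                  // outer multiplier loop
                def L (z: Double) = f (z) - lambda * h (z) + 0.5 * p * h (z) * h (z)
                for (_ <- 1 to 2000) x -= 0.01 * (L (x + dx) - L (x)) / dx   // inner descent
                lambda -= p * h (x)                              // multiplier update
                p *= 2.0                                         // eta = 2 (assumed)
            }
            x
        }   // solve

        // min x^2 subject to x - 1 = 0  =>  x* = 1, lambda* = 2
        println (solve (x => x * x, x => x - 1.0, x0 = 0.0))     // near 1.0
    }   // AugLagrangianSketch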
- object AugLagrangianTest extends App
The AugLagrangianTest object tests the AugLagrangian object using a simple equality constrained optimization problem defined by functions 'f' and 'h'. Caller must also supply the gradient of the Augmented Lagrangian 'grad'.
> runMain scalation.minima.AugLagrangianTest
- object BrentTest extends App
The BrentTest object is used to test the Brent class.
> runMain scalation.minima.BrentTest
- object ConjugateGradientTest extends App
The ConjugateGradientTest object is used to test the ConjugateGradient class.
> runMain scalation.minima.ConjugateGradientTest
- object CoordinateDescentTest extends App
The CoordinateDescentTest object is used to test the CoordinateDescent class.
> runMain scalation.minima.CoordinateDescentTest
- object DualSimplexTest extends App
The DualSimplexTest object is used to test the DualSimplex class.
- object Ftran
The Ftran object ...
- object FunctionSelector extends Enumeration
The FunctionSelector provides an enumeration of function types.
- object GeneticAlgorithmTest extends App
The GeneticAlgorithmTest object is used to test the GeneticAlgorithm class (unconstrained).
- object GoldenSectionLSTest extends App
The GoldenSectionLSTest object is used to test the GoldenSectionLS class on scalar functions.
> runMain scalation.minima.GoldenSectionLSTest
- object GoldenSectionLSTest2 extends App
The GoldenSectionLSTest2 object is used to test the GoldenSectionLS class on vector functions.
> runMain scalation.minima.GoldenSectionLSTest2
- object GradientDescentTest extends App
The GradientDescentTest object is used to test the GradientDescent class.
> runMain scalation.minima.GradientDescentTest
- object GridLSTest extends App
The GridLSTest object is used to test the GridLS class on scalar functions.
> runMain scalation.minima.GridLSTest
- object GridLSTest2 extends App
The GridLSTest2 object is used to test the GridLS class on vector functions.
> runMain scalation.minima.GridLSTest2
- object IntegerGoldenSectionLSTest extends App
The IntegerGoldenSectionLSTest object is used to test the IntegerGoldenSectionLS class on scalar functions.
- object IntegerLPTest extends App
The IntegerLPTest object is used to test the IntegerLP class.
real solution    x = (.8, 1.6), f = 8.8
integer solution x = (2, 1),    f = 10
- See also Linear Programming and Network Flows, Example 6.14
- object IntegerLocalSearchTest extends App
The IntegerLocalSearchTest object is used to test the IntegerLocalSearch class (unconstrained).
- object IntegerLocalSearchTest2 extends App
The IntegerLocalSearchTest2 object is used to test the IntegerLocalSearch class (constrained).
- object IntegerNLPTest extends App
The IntegerNLPTest object is used to test the IntegerNLP class.
real solution    x = (.8, 1.6), f = 8.8
integer solution x = (2, 1),    f = 10
- See also Linear Programming and Network Flows, Example 6.14
- object IntegerTabuSearchTest extends App
The IntegerTabuSearchTest object is used to test the IntegerTabuSearch class (unconstrained).
- object IntegerTabuSearchTest2 extends App
The IntegerTabuSearchTest2 object is used to test the IntegerTabuSearch class (constrained).
- object L_BFGS_BTest extends App
The L_BFGS_BTest object is used to test the L_BFGS_B class.
> runMain scalation.minima.L_BFGS_BTest
- object LassoAdmm
The LassoAdmm object performs LASSO regression using the Alternating Direction Method of Multipliers (ADMM). Minimize the following objective function to find an optimal solution for 'x'.
argmin_x (1/2)||Ax − b||_2^2 + λ||x||_1
where A = data matrix
      b = response vector
      λ = weighting on the l_1 penalty
      x = solution (coefficient vector)
- See also euler.stat.yale.edu/~tba3/stat612/lectures/lec23/lecture23.pdf
- See also https://web.stanford.edu/~boyd/papers/admm_distr_stats.html
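A minimal scalar sketch of the ADMM iteration for a one-column lasso. The value of rho, the iteration count and the closed-form x-update all rely on this special case being scalar; the real object works on a full data matrix A and solves a ridge-like linear system in the x-update:

    object LassoAdmmSketch extends App
    {
        // soft-thresholding (shrinkage) operator
        def soft (v: Double, k: Double): Double =
            math.signum (v) * math.max (math.abs (v) - k, 0.0)

        def dot (u: Array [Double], v: Array [Double]): Double =
            (for (i <- u.indices) yield u(i) * v(i)).sum

        // ADMM for argmin_x (1/2)||a x - b||_2^2 + lambda |x| with one column a
        def lasso1 (a: Array [Double], b: Array [Double], lambda: Double,
                    rho: Double = 1.0, maxIt: Int = 200): Double =
        {
            var (x, z, u) = (0.0, 0.0, 0.0)
            for (_ <- 1 to maxIt) {
                x  = (dot (a, b) + rho * (z - u)) / (dot (a, a) + rho)   // ridge-like x-update
                z  = soft (x + u, lambda / rho)                          // shrinkage z-update
                u += x - z                                               // dual update
            }
            z
        }   // lasso1

        val a = Array (1.0, 2.0, 3.0)
        val b = Array (2.0, 4.0, 6.0)                        // b = 2 a => least-squares x = 2
        println (lasso1 (a, b, lambda = 1.0))                // near 27/14 = 1.9286 (shrunk)
    }   // LassoAdmmSketch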
- object LassoAdmmTest extends App
The LassoAdmmTest object tests the LassoAdmm object using the following regression equation.
y = b dot x = b_0 + b_1*x_1 + b_2*x_2
- See also statmaster.sdu.dk/courses/st111/module03/index.html
> runMain scalation.minima.LassoAdmmTest
- object LassoAdmmTest2 extends App
The LassoAdmmTest2 object tests the LassoAdmm object using the following regression equation.
y = b dot x = b_0 + b_1*x_1 + b_2*x_2
- See also www.cs.jhu.edu/~svitlana/papers/non_refereed/optimization_1.pdf
> runMain scalation.minima.LassoAdmmTest2
- object NLPTest1 extends App
The NLPTest1 object is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:
'sdcs' - Gradient Descent with Custom Line Search
'sdgs' - Gradient Descent with Golden Section Line Search
'prcg' - Polak-Ribiere Conjugate Gradient with Golden Section Line Search
'sdws' - Gradient Descent with Wolfe Line Search
'bfgs' - Broyden–Fletcher–Goldfarb–Shanno with Wolfe Line Search
- object NLPTest2 extends App
The NLPTest2 object is used to test several Non-Linear Programming (NLP) algorithms on constrained problems.
- object NLPTestCases1 extends App
The NLPTestCases1 object is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:
'sdcs' - Gradient Descent with Custom Line Search
'sdgs' - Gradient Descent with Golden Section Line Search
'prcg' - Polak-Ribiere Conjugate Gradient with Golden Section Line Search
'sdws' - Gradient Descent with Wolfe Line Search
'bfgs' - Broyden–Fletcher–Goldfarb–Shanno with Wolfe Line Search
- object NLPTestCases2 extends App
The NLPTestCases2 object is used to test several Non-Linear Programming (NLP) algorithms on constrained problems.
- object NelderMeadSimplexTest extends App
The NelderMeadSimplexTest object is used to test the NelderMeadSimplex class.
> runMain scalation.minima.NelderMeadSimplexTest
- object NewtonRaphsonTest extends App
The NewtonRaphsonTest object is used to test the NewtonRaphson class. This test numerically approximates the derivative.
> runMain scalation.minima.NewtonRaphsonTest
- object NewtonRaphsonTest2 extends App
The NewtonRaphsonTest2 object is used to test the NewtonRaphson class. This test passes in a function for the derivative.
> runMain scalation.minima.NewtonRaphsonTest2
- object QuadraticSimplexTest extends App
The QuadraticSimplexTest object is used to test the QuadraticSimplex class.
> runMain scalation.minima.QuadraticSimplexTest
- object QuasiNewtonTest extends App
The QuasiNewtonTest object is used to test the QuasiNewton class.
> runMain scalation.minima.QuasiNewtonTest
- object RevisedSimplexTest extends App
The RevisedSimplexTest object is used to test the RevisedSimplex class.
> runMain scalation.minima.RevisedSimplexTest
- object Simplex2PTest extends App
The Simplex2PTest object is used to test the Simplex2P class.
- object SimplexBGTest extends App
The SimplexBGTest object is used to test the SimplexBG class.
- object SimplexFTTest extends App
The SimplexFTTest object is used to test the SimplexFT class.
- object SimplexTest extends App
The SimplexTest object is used to test the Simplex class.
- object StochasticGradientTest extends App
The StochasticGradientTest object is used to test the StochasticGradient class.
- See also scalation.analytics.RegressionTest3
> runMain scalation.minima.StochasticGradientTest
- object WolfeLSTest extends App
The WolfeLSTest object is used to test the WolfeLS class on scalar functions.
> runMain scalation.minima.WolfeLSTest
- object WolfeLSTest2 extends App
The WolfeLSTest2 object is used to test the WolfeLS class on vector functions.
> runMain scalation.minima.WolfeLSTest2