Packages

package minima

The minima package contains classes, traits and objects for optimization, used to find minima of functions.

Type Members

  1. class Brent extends AnyRef

    The Brent class is used to find roots (zeros) for a one-dimensional (scalar) function 'g'. Depending on the FunctionSelector, it can find zeros for derivatives or finite differences, which may indicate optima for function 'g'. The code is directly translated from the following:

    See also

    math.haifa.ac.il/ronn/NA/NAprogs/brent.java
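
    The sketch below illustrates the root-bracketing idea behind this class, simplified to a Dekker-style secant-bisection hybrid rather than full Brent with inverse quadratic interpolation; it is a standalone sketch with hypothetical names, not ScalaTion's code.

        // Hypothetical sketch: a simplified secant-bisection hybrid in the spirit
        // of Brent's method, finding a root of g bracketed in [a0, b0]
        object RootFindSketch extends App
        {
            def findRoot (g: Double => Double, a0: Double, b0: Double,
                          tol: Double = 1E-12, maxIts: Int = 100): Double =
            {
                var (a, b)   = (a0, b0)                    // g(a) and g(b) must differ in sign
                var (fa, fb) = (g (a), g (b))
                require (fa * fb <= 0.0, "the root must be bracketed")
                var it = 0
                while (math.abs (b - a) > tol && it < maxIts) {
                    val s = if (fb != fa) b - fb * (b - a) / (fb - fa)   // secant step
                            else (a + b) / 2.0                           // degenerate => bisect
                    val m = (a + b) / 2.0
                    val x = if (s > math.min (b, m) && s < math.max (b, m)) s else m
                    val fx = g (x)                         // keep x inside the bracket
                    if (fa * fx < 0.0) { b = x; fb = fx } else { a = x; fa = fx }
                    it += 1
                }   // while
                (a + b) / 2.0
            }   // findRoot

            println (findRoot (x => x*x - 2.0, 0.0, 2.0))  // ~ sqrt(2) = 1.4142135623731
        }   // RootFindSketch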

  2. class CheckLP extends Error

    The CheckLP class checks the solution to Linear Programming (LP) problems. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', determine if the values for the solution/decision vector 'x' minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    Check the feasibility and optimality of the solution.
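
    As a concrete illustration of the feasibility part of such a check, the standalone sketch below (hypothetical names, not ScalaTion's CheckLP) verifies 'a x <= b' and 'x >= 0' within a tolerance.

        // Hypothetical sketch: primal feasibility check for  a x <= b, x >= 0
        object CheckLPSketch extends App
        {
            val EPS = 1E-9                                          // numerical tolerance

            def isFeasible (a: Array[Array[Double]], b: Array[Double], x: Array[Double]): Boolean =
            {
                val nonNeg = x.forall (_ >= -EPS)                   // x >= 0
                val within = a.zip (b).forall { case (row, bi) =>   // each a_i x <= b_i
                    row.zip (x).map { case (aij, xj) => aij * xj }.sum <= bi + EPS }
                nonNeg && within
            }   // isFeasible

            val a = Array (Array (1.0, 1.0), Array (2.0, 1.0))
            val b = Array (4.0, 6.0)
            println (isFeasible (a, b, Array (1.0, 2.0)))           // true:  1+2 <= 4, 2+2 <= 6
            println (isFeasible (a, b, Array (5.0, 0.0)))           // false: 5 > 4
        }   // CheckLPSketch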

  3. class ConjugateGradient extends Minimizer with Error

    The ConjugateGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which for this implementation uses the Polak-Ribiere technique.

    dir_k = -gradient (x) + beta * dir_k-1

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
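
    The direction update itself is small enough to show directly. The standalone sketch below (hypothetical names, not ScalaTion's ConjugateGradient) computes the Polak-Ribiere beta, with the common clipping at zero (PR+) so the result stays a descent direction.

        // Hypothetical sketch: Polak-Ribiere beta and direction update
        object PRCGSketch extends App
        {
            def dot (u: Array[Double], v: Array[Double]): Double =
                u.zip (v).map { case (a, b) => a * b }.sum

            // beta = gNew . (gNew - gOld) / (gOld . gOld), clipped at 0 (PR+)
            def beta (gNew: Array[Double], gOld: Array[Double]): Double =
            {
                val diff = gNew.zip (gOld).map { case (n, o) => n - o }
                math.max (0.0, dot (gNew, diff) / dot (gOld, gOld))
            }   // beta

            // dir_k = -gradient(x) + beta * dir_k-1
            def newDir (gNew: Array[Double], gOld: Array[Double], dirOld: Array[Double]): Array[Double] =
            {
                val b = beta (gNew, gOld)
                gNew.zip (dirOld).map { case (g, d) => -g + b * d }
            }   // newDir

            println (newDir (Array (1.0, 2.0), Array (2.0, 1.0), Array (-2.0, -1.0)).mkString (", "))
        }   // PRCGSketch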

  4. class CoordinateDescent extends Minimizer with Error

    The CoordinateDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Coordinate Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm picks coordinate directions (cyclically) and takes steps in those directions. The algorithm iterates until it converges.

    dir_k = kth coordinate direction

    minimize f(x)
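
    The standalone sketch below (hypothetical names, not ScalaTion's CoordinateDescent) cycles through the coordinate directions, probing a fixed step to each side in place of a proper line search.

        // Hypothetical sketch: cyclic coordinate descent with a crude step probe
        object CoordDescentSketch extends App
        {
            def solve (f: Array[Double] => Double, x0: Array[Double],
                       step: Double = 0.1, maxIts: Int = 1000): Array[Double] =
            {
                val x = x0.clone
                for (_ <- 1 to maxIts; j <- x.indices) {            // cycle over coordinates
                    val xj   = x(j)
                    val cand = Array (xj - step, xj, xj + step)     // probe left, stay, right
                    x(j) = cand.minBy { c => x(j) = c; f (x) }      // keep the best of the three
                }   // for
                x
            }   // solve

            val f = (x: Array[Double]) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
            println (solve (f, Array (0.0, 0.0)).mkString (", "))   // ~ (3, -1)
        }   // CoordDescentSketch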

  5. class DualSimplex extends MinimizerLP

    The DualSimplex class solves Linear Programming (LP) problems using a tableau based Dual Simplex Algorithm. It is particularly useful when re-optimizing after a constraint has been added. The algorithm starts with an infeasible super-optimal solution and moves toward (primal) feasibility and optimality.

    Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    Creates an 'MM-by-NN' simplex tableau with
    -- [0..M-1, 0..N-1]   = a (constraint matrix)
    -- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
    -- [0..M-1, NN-1]     = b (limit/RHS vector)
    -- [M, 0..NN-2]       = c (cost vector)

  6. class GeneticAlgorithm extends AnyRef

    The GeneticAlgorithm class performs local search to find minima of functions defined on integer vector domains (Z^n).

    minimize f(x) subject to g(x) <= 0, x in Z^n

  7. class GoldenSectionLS extends LineSearch

    The GoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see GoldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see GoldenSectionLSTest2).
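
    The standalone sketch below shows the shrinking-interval idea (hypothetical names, not ScalaTion's GoldenSectionLS; for clarity it re-evaluates both interior points each iteration, whereas the one-evaluation version caches one of them).

        // Hypothetical sketch: golden section search on [x1, xmax] for unimodal f
        object GoldenSectionSketch extends App
        {
            val G_SECTION = (math.sqrt (5.0) - 1.0) / 2.0          // 0.6180339887498949

            def search (f: Double => Double, x1: Double, xmax: Double, tol: Double = 1E-8): Double =
            {
                var (a, b) = (x1, xmax)
                while (b - a > tol) {
                    val c = b - G_SECTION * (b - a)                // interior points
                    val d = a + G_SECTION * (b - a)
                    if (f (c) < f (d)) b = d else a = c            // keep bracket holding the min
                }   // while
                (a + b) / 2.0
            }   // search

            println (search (x => (x - 2.0) * (x - 2.0), 0.0, 10.0))   // ~ 2.0
        }   // GoldenSectionSketch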

  8. class GradientDescent extends Minimizer with Error

    The GradientDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Gradient Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The class assumes that partial derivative functions are not available unless explicitly given via the 'setDerivatives' method.

    dir_k = -gradient (x)

    minimize f(x)
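
    The standalone sketch below (hypothetical names, not ScalaTion's GradientDescent) uses a forward-difference numerical gradient, matching the assumption that partial derivative functions are unavailable, and a fixed step size in place of a line search.

        // Hypothetical sketch: gradient descent with a finite-difference gradient
        object GradientDescentSketch extends App
        {
            def grad (f: Array[Double] => Double, x: Array[Double], h: Double = 1E-6): Array[Double] =
                x.indices.toArray.map { j =>
                    val xp = x.clone; xp(j) += h                       // perturb coordinate j
                    (f (xp) - f (x)) / h                               // forward difference
                }   // grad

            def solve (f: Array[Double] => Double, x0: Array[Double],
                       eta: Double = 0.1, maxIts: Int = 1000): Array[Double] =
            {
                var x = x0
                for (_ <- 1 to maxIts) {
                    val g = grad (f, x)
                    x = x.zip (g).map { case (xi, gi) => xi - eta * gi }   // dir_k = -gradient(x)
                }   // for
                x
            }   // solve

            val f = (x: Array[Double]) => (x(0) - 3.0) * (x(0) - 3.0) + (x(1) + 1.0) * (x(1) + 1.0)
            println (solve (f, Array (0.0, 0.0)).mkString (", "))      // ~ (3, -1)
        }   // GradientDescentSketch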

  9. class GridLS extends LineSearch

    The GridLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given. It works on scalar functions (see GridLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see GridLSTest2).

  10. class IntegerGoldenSectionLS extends AnyRef

    The IntegerGoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see IntegerGoldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see IntegerGoldenSectionLSTest2).

  11. class IntegerLP extends AnyRef

    The IntegerLP class solves Integer Linear Programming (ILP) and Mixed Integer Linear Programming (MILP) problems recursively using the Simplex algorithm. First, an LP problem is solved. If the optimal solution vector 'x' is entirely integer valued, the ILP is solved. If not, pick the first 'x_j' that is not integer valued. Define two new LP problems which bound 'x_j' to the integer below and above, respectively. Branch by solving each of these LP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may be optionally provided for MILP problems. FIX: Use the Dual Simplex Algorithm for better performance.

    Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0, some x_i must be integer valued

    Make 'b_i' negative to indicate a '>=' constraint.

  12. class IntegerLocalSearch extends AnyRef

    The IntegerLocalSearch class performs local search to find minima of functions defined on integer vector domains (Z^n).

    minimize f(x) subject to g(x) <= 0, x in Z^n
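
    The standalone sketch below (hypothetical names, not ScalaTion's IntegerLocalSearch, and unconstrained for brevity) moves to the best +/-1 neighbor until no neighbor improves 'f'.

        // Hypothetical sketch: integer local search over +/-1 neighbors in Z^n
        object IntLocalSearchSketch extends App
        {
            def solve (f: Array[Int] => Double, x0: Array[Int]): Array[Int] =
            {
                var x = x0
                var improved = true
                while (improved) {
                    improved = false
                    val neighbors = for (j <- x.indices; d <- Seq (-1, 1)) yield {
                        val y = x.clone; y(j) += d; y      // +/-1 step in coordinate j
                    }
                    val best = neighbors.minBy (f)
                    if (f (best) < f (x)) { x = best; improved = true }
                }   // while
                x
            }   // solve

            val f = (x: Array[Int]) => (x(0) - 3) * (x(0) - 3) + (x(1) + 1) * (x(1) + 1) + 0.0
            println (solve (f, Array (0, 0)).mkString (", "))   // (3, -1)
        }   // IntLocalSearchSketch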

  13. class IntegerNLP extends AnyRef

    The IntegerNLP class solves Integer Non-Linear Programming (INLP) and Mixed Integer Non-Linear Programming (MINLP) problems recursively using the Simplex algorithm. First, an NLP problem is solved. If the optimal solution vector 'x' is entirely integer valued, the INLP is solved. If not, pick the first 'x_j' that is not integer valued. Define two new NLP problems which bound 'x_j' to the integer below and above, respectively. Branch by solving each of these NLP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may be optionally provided for MINLP problems.

    Given an objective function 'f(x)' and a constraint function 'g(x)', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying the constraint function, i.e.,

    minimize f(x) subject to g(x) <= 0, some x_i must be integer valued

    Make 'b_i' negative to indicate a '>=' constraint.

  14. class IntegerTabuSearch extends AnyRef

    The IntegerTabuSearch class performs tabu search to find minima of functions defined on integer vector domains 'Z^n'. Tabu search will not re-visit points already deemed sub-optimal.

    minimize f(x) subject to g(x) <= 0, x in Z^n

  15. class L_BFGS_B extends Minimizer

    The L_BFGS_B class implements the Limited memory Broyden–Fletcher–Goldfarb–Shanno for Bound constrained optimization (L-BFGS-B) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. L-BFGS-B determines a search direction by deflecting the steepest descent direction vector (opposite the gradient) by multiplying it by a matrix that approximates the inverse Hessian. Furthermore, only a few vectors are needed to represent the approximation of the Hessian Matrix (limited memory). The estimated parameters are also bounded within user-specified lower and upper bounds.

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

  16. trait LineSearch extends AnyRef

    The LineSearch trait specifies the basic methods that Line Search (LS) algorithms must implement in classes extending this trait. Line search is for one-dimensional optimization problems. The algorithms perform line search to find an 'x'-value that minimizes a function 'f' that is passed into an implementing class.

    x* = argmin f(x)

  17. trait Minimizer extends AnyRef

    The Minimizer trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

    where f is the objective function to be minimized
          g is the constraint function to be satisfied, if any

    Classes mixing in this trait must implement a function 'fg' that rolls the constraints into the objective function as penalties for constraint violation, a one-dimensional Line Search (LS) algorithm 'lineSearch' and an iterative method 'solve' that searches for improved solutions, 'x'-vectors with lower objective function values 'f(x)'.

  18. trait MinimizerLP extends Error

    The MinimizerLP trait sets the pattern for optimization algorithms for solving Linear Programming (LP) problems of the form:

    minimize c x subject to a x <= b, x >= 0

    where a is the constraint matrix
          b is the limit/RHS vector
          c is the cost vector

    Classes mixing in this trait must implement an objective function 'objF' and an iterative method 'solve' that searches for improved solutions, 'x'-vectors with lower objective function values.

  19. class NelderMeadSimplex extends Minimizer with Error

    The NelderMeadSimplex class solves Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function 'f' and its dimension 'n', the algorithm moves a simplex defined by n + 1 points in order to find an optimal solution. The algorithm is derivative-free.

    minimize f(x)

  20. class NewtonRaphson extends AnyRef

    The NewtonRaphson class is used to find roots (zeros) for a one-dimensional (scalar) function 'g'. Depending on the FunctionSelector, it can find zeros for derivatives or finite differences, which may indicate optima for function 'g'. Also, for optimization, one may pass in the derivative of the function, since finding zeros of the derivative corresponds to finding optima of the function.
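
    The iteration itself is compact. The standalone sketch below (hypothetical names, not ScalaTion's NewtonRaphson) finds a zero of 'g' given its derivative, here applied to the derivative of f(x) = (x-2)^2 to locate the optimum of 'f'.

        // Hypothetical sketch: Newton-Raphson iteration x_k+1 = x_k - g(x_k) / g'(x_k)
        object NewtonRaphsonSketch extends App
        {
            def findZero (g: Double => Double, dg: Double => Double,
                          x0: Double, tol: Double = 1E-12, maxIts: Int = 50): Double =
            {
                var x  = x0
                var it = 0
                while (math.abs (g (x)) > tol && it < maxIts) {
                    x -= g (x) / dg (x)                     // Newton step
                    it += 1
                }   // while
                x
            }   // findZero

            // optimize f(x) = (x-2)^2 by finding a zero of g(x) = f'(x) = 2(x-2)
            println (findZero (x => 2.0 * (x - 2.0), _ => 2.0, 5.0))   // ~ 2.0
        }   // NewtonRaphsonSketch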

  21. class QuadraticSimplex extends Error

    The QuadraticSimplex class solves Quadratic Programming (QP) problems using the Quadratic Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b', cost matrix 'q' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = 1/2 x q x + c x subject to a x <= b, x >= 0

    Creates an 'MM-by-NN' simplex tableau. This implementation is restricted to linear constraints 'a x <= b' and 'q' being a positive semi-definite matrix. Pivoting must now also handle non-linear complementary slackness.

    See also

    www.engineering.uiowa.edu/~dbricker/lp_stacks.html

  22. class QuasiNewton extends Minimizer with Error

    The QuasiNewton class implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. BFGS determines a search direction by deflecting the steepest descent direction vector (opposite the gradient) by multiplying it by a matrix that approximates the inverse Hessian. Note, this implementation may be set up to work with the matrix 'b' (approximate Hessian) or directly with the 'binv' matrix (the inverse of 'b').

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
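
    The heart of BFGS is the rank-two update of the inverse Hessian approximation. The standalone sketch below (hypothetical names, plain arrays rather than ScalaTion's matrix types) applies one such update, where s is the step taken and y the change in gradient.

        // Hypothetical sketch: BFGS inverse Hessian update
        //   binv' = (I - rho s y^T) binv (I - rho y s^T) + rho s s^T,  rho = 1 / (y . s)
        object BFGSUpdateSketch extends App
        {
            type Mat = Array[Array[Double]]

            def outer (u: Array[Double], v: Array[Double]): Mat = u.map (ui => v.map (ui * _))
            def matMul (a: Mat, b: Mat): Mat =
                a.map (row => b(0).indices.toArray.map (j => row.indices.map (k => row(k) * b(k)(j)).sum))
            def matAdd (a: Mat, b: Mat, w: Double = 1.0): Mat =
                a.zip (b).map { case (ra, rb) => ra.zip (rb).map { case (p, q) => p + w * q } }
            def eye (n: Int): Mat = Array.tabulate (n, n) ((i, j) => if (i == j) 1.0 else 0.0)

            def bfgsUpdate (binv: Mat, s: Array[Double], y: Array[Double]): Mat =
            {
                val rho = 1.0 / s.zip (y).map { case (a, b) => a * b }.sum   // 1 / (y . s)
                val n   = s.length
                val l   = matAdd (eye (n), outer (s, y), -rho)               // I - rho s y^T
                val r   = matAdd (eye (n), outer (y, s), -rho)               // I - rho y s^T
                matAdd (matMul (matMul (l, binv), r), outer (s, s), rho)     // + rho s s^T
            }   // bfgsUpdate

            // one update from binv = I with s = (1, 0), y = (2, 0) gives [[0.5, 0], [0, 1]]
            bfgsUpdate (eye (2), Array (1.0, 0.0), Array (2.0, 0.0)).foreach (
                row => println (row.mkString (", ")))
        }   // BFGSUpdateSketch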

  23. class RevisedSimplex extends MinimizerLP

    The RevisedSimplex class solves Linear Programming (LP) problems using the Revised Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    The Revised Simplex Algorithm operates on 'b_inv', which is the inverse of the basis-matrix ('ba' = 'B'). It has benefits over the Simplex Algorithm (less memory and reduced chance of round off errors).

  24. class Simplex extends MinimizerLP

    The Simplex class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    In case of 'a_i x >= b_i', use '-b_i' as an indicator of a '>=' constraint. The program will flip such negative b_i back to positive as well as use a surplus variable instead of the usual slack variable, i.e.,

    a_i x <= b_i => a_i x + s_i = b_i  // use slack variable s_i with coefficient 1
    a_i x >= b_i => a_i x + s_i = b_i  // use surplus variable s_i with coefficient -1

    Creates an 'MM-by-NN' simplex tableau with
    -- [0..M-1, 0..N-1]   = a (constraint matrix)
    -- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
    -- [0..M-1, NN-1]     = b (limit/RHS vector)
    -- [M, 0..NN-2]       = c (cost vector)
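
    The standalone sketch below (hypothetical names, not ScalaTion's Simplex) builds exactly this tableau layout from 'a', 'b' and 'c' for the all-'<=' case, where every slack coefficient is 1; pivoting is omitted.

        // Hypothetical sketch: construct the MM-by-NN simplex tableau described above
        object TableauSketch extends App
        {
            def makeTableau (a: Array[Array[Double]], b: Array[Double],
                             c: Array[Double]): Array[Array[Double]] =
            {
                val m  = a.length; val n = a(0).length
                val mm = m + 1;    val nn = n + m + 1
                val t  = Array.ofDim [Double] (mm, nn)
                for (i <- 0 until m) {
                    for (j <- 0 until n) t(i)(j) = a(i)(j)   // [0..M-1, 0..N-1]   = a
                    t(i)(n + i)  = 1.0                       // [0..M-1, N..M+N-1] = s (slack)
                    t(i)(nn - 1) = b(i)                      // [0..M-1, NN-1]     = b
                }   // for
                for (j <- 0 until n) t(m)(j) = c(j)          // [M, 0..NN-2]       = c
                t
            }   // makeTableau

            val t = makeTableau (Array (Array (1.0, 1.0), Array (2.0, 1.0)),
                                 Array (4.0, 6.0), Array (-3.0, -2.0))
            t.foreach (row => println (row.mkString ("\t")))
        }   // TableauSketch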

  25. class Simplex2P extends MinimizerLP

    The Simplex2P class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    In case of 'a_i x >= b_i', use '-b_i' as an indicator of a '>=' constraint. The program will flip such negative b_i back to positive as well as use a surplus and artificial variable instead of the usual slack variable, i.e.,

    a_i x <= b_i => a_i x + s_i = b_i  // use slack variable s_i with coefficient 1
    a_i x >= b_i => a_i x + s_i = b_i  // use surplus variable s_i with coefficient -1

    For each '>=' constraint, an artificial variable is introduced and put into the initial basis. These artificial variables must be removed from the basis during Phase I of the Two-Phase Simplex Algorithm. After this, or if there are no artificial variables, Phase II is used to find an optimal value for 'x' and the optimum value for 'f'.

    Creates an 'MM-by-nn' simplex tableau with
    -- [0..M-1, 0..N-1]    = a (constraint matrix)
    -- [0..M-1, N..M+N-1]  = s (slack/surplus variable matrix)
    -- [0..M-1, M+N..nn-2] = r (artificial variable matrix)
    -- [0..M-1, nn-1]      = b (limit/RHS vector)
    -- [M, 0..nn-2]        = c (cost vector)

  26. class SimplexBG extends MinimizerLP

    The SimplexBG class solves Linear Programming (LP) problems using the Bartels-Golub (BG) Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    The BG Simplex Algorithm performs LU Factorization/Decomposition of the basis-matrix ('ba' = 'B') rather than computing inverses ('b_inv'). It has benefits over the (Revised) Simplex Algorithm (less run-time, less memory, and much reduced chance of round off errors).

  27. class SimplexFT extends MinimizerLP

    The SimplexFT class solves Linear Programming (LP) problems using the Forrest-Tomlin (FT) Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    The FT Simplex Algorithm performs LU Factorization/Decomposition of the basis-matrix ('ba' = 'B') rather than computing inverses ('b_inv'). It has benefits over the (Revised) Simplex Algorithm (less run-time, less memory, and much reduced chance of round off errors).

  28. class StochasticGradient extends Minimizer with Error

    The StochasticGradient class solves unconstrained Non-Linear Programming (NLP) problems using the Stochastic Gradient Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The algorithm is stochastic in the sense that only a single batch is used in each step of the optimization. Examples (a number of rows) are chosen for each batch. FIX - provide option to randomly select samples in batch

    dir_k = -gradient (x)

    minimize f(x)

    See also

    leon.bottou.org/publications/pdf/compstat-2010.pdf
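
    The standalone sketch below (hypothetical names, not ScalaTion's StochasticGradient) fits a one-parameter least-squares model, where each step uses the gradient over a single batch of rows rather than the full data set.

        // Hypothetical sketch: mini-batch stochastic gradient descent
        object SGDSketch extends App
        {
            // data: rows (x_i, y_i); model: y ~ w * x; per-row loss: (w x_i - y_i)^2
            val xs = Array (1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
            val ys = xs.map (_ * 2.0)                              // true w = 2

            var w     = 0.0
            val eta   = 0.01                                       // learning rate
            val batch = 2                                          // rows per batch
            for (_ <- 1 to 200; start <- xs.indices by batch) {    // epochs x batches
                val idx  = start until math.min (start + batch, xs.length)
                val grad = idx.map (i => 2.0 * (w * xs(i) - ys(i)) * xs(i)).sum / idx.size
                w -= eta * grad                                    // dir_k = -gradient
            }   // for
            println (s"w = $w")                                    // ~ 2.0
        }   // SGDSketch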

  29. class WolfeLS extends LineSearch

    The WolfeLS class performs an inexact line search on 'f' to find a point 'x' that exhibits (1) sufficient decrease ('f(x)' enough less than 'f(0)') and (2) the slope at 'x' is less steep than the slope at 0. That is, the line search looks for a value for 'x' satisfying the two Wolfe conditions.

    f(x) <= f(0) + c1 * f'(0) * x   Wolfe condition 1 (Armijo condition)
    |f'(x)| <= |c2 * f'(0)|         Wolfe condition 2 (Strong version)
    f'(x) >= c2 * f'(0)             Wolfe condition 2 (Weak version, more robust)

    It works on scalar functions (see WolfeLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see WolfeLSTest2).
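
    The two conditions are easy to state as predicates. The standalone sketch below (hypothetical names and typical constants, not ScalaTion's WolfeLS) checks the Armijo and weak curvature conditions at a candidate step 'x'.

        // Hypothetical sketch: predicates for the two (weak) Wolfe conditions
        object WolfeSketch extends App
        {
            val (c1, c2) = (1E-4, 0.9)                             // typical: 0 < c1 < c2 < 1

            def wolfe1 (f: Double => Double, df: Double => Double, x: Double): Boolean =
                f (x) <= f (0.0) + c1 * df (0.0) * x               // sufficient decrease (Armijo)

            def wolfe2 (df: Double => Double, x: Double): Boolean =
                df (x) >= c2 * df (0.0)                            // curvature (weak version)

            val f  = (x: Double) => (x - 2.0) * (x - 2.0)          // minimum at x = 2
            val df = (x: Double) => 2.0 * (x - 2.0)
            for (x <- Seq (0.5, 2.0, 3.9))
                println (s"x = $x: wolfe1 = ${wolfe1 (f, df, x)}, wolfe2 = ${wolfe2 (df, x)}")
        }   // WolfeSketch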

Value Members

  1. val G_RATIO: Double

    the golden ratio (1.618033988749895)

  2. val G_SECTION: Double

    the golden section number (0.6180339887498949)

  3. object AugLagrangian

    The AugLagrangian object implements the Augmented Lagrangian Method for solving equality constrained optimization problems. Minimize objective function 'f' subject to constraint 'h' to find an optimal solution for 'x'.

    min f(x) s.t. h(x) = 0

    f = objective function
    h = equality constraint
    x = solution vector

    Note: the hyper-parameters 'eta' and 'p0' will need to be tuned per problem.

    See also

    AugLagrangianTest for how to set up 'f', 'h' and 'grad' functions
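
    The standalone sketch below (hypothetical names and a hand-coded gradient, not ScalaTion's AugLagrangian) minimizes f(x) = x_0^2 + x_1^2 subject to h(x) = x_0 + x_1 - 1 = 0: inner gradient steps minimize L(x) = f(x) + lambda h(x) + (p/2) h(x)^2, then the multiplier is updated.

        // Hypothetical sketch: Augmented Lagrangian Method for one equality constraint
        object AugLagSketch extends App
        {
            val f = (x: Array[Double]) => x(0) * x(0) + x(1) * x(1)
            val h = (x: Array[Double]) => x(0) + x(1) - 1.0

            // gradient of the augmented Lagrangian, hand-coded for this f and h
            def gradL (x: Array[Double], lambda: Double, p: Double): Array[Double] =
                Array (2.0 * x(0) + lambda + p * h (x),
                       2.0 * x(1) + lambda + p * h (x))

            var x      = Array (0.0, 0.0)
            var lambda = 0.0
            val p      = 10.0                                      // penalty hyper-parameter
            for (_ <- 1 to 20) {                                   // outer multiplier updates
                for (_ <- 1 to 200) {                              // inner gradient descent
                    val g = gradL (x, lambda, p)
                    x = x.zip (g).map { case (xi, gi) => xi - 0.01 * gi }
                }   // for
                lambda += p * h (x)                                // multiplier update
            }   // for
            println (s"x = (${x(0)}, ${x(1)}), h(x) = ${h (x)}")   // ~ (0.5, 0.5), h ~ 0
        }   // AugLagSketch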

  4. object AugLagrangianTest extends App

    The AugLagrangianTest object tests the AugLagrangian object using a simple equality constrained optimization problem defined by functions 'f' and 'h'. Caller must also supply the gradient of the Augmented Lagrangian 'grad'.
    > runMain scalation.minima.AugLagrangianTest

  5. object BrentTest extends App

    The BrentTest object is used to test the Brent class.
    > runMain scalation.minima.BrentTest

  6. object ConjugateGradientTest extends App

    The ConjugateGradientTest object is used to test the ConjugateGradient class.
    > runMain scalation.minima.ConjugateGradientTest

  7. object CoordinateDescentTest extends App

    The CoordinateDescentTest object is used to test the CoordinateDescent class.
    > runMain scalation.minima.CoordinateDescentTest

  8. object DualSimplexTest extends App

    The DualSimplexTest object is used to test the DualSimplex class.

  9. object Ftran

    The Ftran object ...

  10. object FunctionSelector extends Enumeration

    The FunctionSelector provides an enumeration of function types.

  11. object GeneticAlgorithmTest extends App

    The GeneticAlgorithmTest object is used to test the GeneticAlgorithm class (unconstrained).

  12. object GoldenSectionLSTest extends App

    The GoldenSectionLSTest object is used to test the GoldenSectionLS class on scalar functions.
    > runMain scalation.minima.GoldenSectionLSTest

  13. object GoldenSectionLSTest2 extends App

    The GoldenSectionLSTest2 object is used to test the GoldenSectionLS class on vector functions.
    > runMain scalation.minima.GoldenSectionLSTest2

  14. object GradientDescentTest extends App

    The GradientDescentTest object is used to test the GradientDescent class.
    > runMain scalation.minima.GradientDescentTest

  15. object GridLSTest extends App

    The GridLSTest object is used to test the GridLS class on scalar functions.
    > runMain scalation.minima.GridLSTest

  16. object GridLSTest2 extends App

    The GridLSTest2 object is used to test the GridLS class on vector functions.
    > runMain scalation.minima.GridLSTest2

  17. object IntegerGoldenSectionLSTest extends App

    The IntegerGoldenSectionLSTest object is used to test the IntegerGoldenSectionLS class on scalar functions.

  18. object IntegerLPTest extends App

    The IntegerLPTest object is used to test the IntegerLP class.

    real solution    x = (.8, 1.6), f = 8.8
    integer solution x = (2, 1),    f = 10

    See also

    Linear Programming and Network Flows, Example 6.14

  19. object IntegerLocalSearchTest extends App

    The IntegerLocalSearchTest object is used to test the IntegerLocalSearch class (unconstrained).

  20. object IntegerLocalSearchTest2 extends App

    The IntegerLocalSearchTest2 object is used to test the IntegerLocalSearch class (constrained).

  21. object IntegerNLPTest extends App

    The IntegerNLPTest object is used to test the IntegerNLP class.

    real solution    x = (.8, 1.6), f = 8.8
    integer solution x = (2, 1),    f = 10

    See also

    Linear Programming and Network Flows, Example 6.14

  22. object IntegerTabuSearchTest extends App

    The IntegerTabuSearchTest object is used to test the IntegerTabuSearch class (unconstrained).

  23. object IntegerTabuSearchTest2 extends App

    The IntegerTabuSearchTest2 object is used to test the IntegerTabuSearch class (constrained).

  24. object L_BFGS_BTest extends App

    The L_BFGS_BTest object is used to test the L_BFGS_B class.
    > runMain scalation.minima.L_BFGS_BTest

  25. object LassoAdmm

    The LassoAdmm object performs LASSO regression using the Alternating Direction Method of Multipliers (ADMM). Minimize the following objective function to find an optimal solution for 'x'.

    argmin_x (1/2)||Ax − b||_2^2 + λ||x||_1

    A = data matrix
    b = response vector
    λ = weighting on the l_1 penalty
    x = solution (coefficient vector)

    See also

    euler.stat.yale.edu/~tba3/stat612/lectures/lec23/lecture23.pdf

    https://web.stanford.edu/~boyd/papers/admm_distr_stats.html
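
    A key building block of the ADMM solution for LASSO is the soft-thresholding (shrinkage) operator used in the sparsity-inducing update. The standalone sketch below (hypothetical names, not ScalaTion's LassoAdmm) shows just that operator.

        // Hypothetical sketch: soft thresholding  S_k(a) = sign(a) * max(|a| - k, 0)
        object SoftThresholdSketch extends App
        {
            def softThreshold (a: Double, k: Double): Double =
                math.signum (a) * math.max (math.abs (a) - k, 0.0)   // shrink toward 0, zero small values

            for (a <- Seq (-2.0, -0.3, 0.0, 0.3, 2.0))
                println (s"S_0.5($a) = ${softThreshold (a, 0.5)}")
        }   // SoftThresholdSketch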

  26. object LassoAdmmTest extends App

    The LassoAdmmTest object tests the LassoAdmm object using the following regression equation.

    y = b dot x = b_0 + b_1*x_1 + b_2*x_2.

    See also

    statmaster.sdu.dk/courses/st111/module03/index.html

    > runMain scalation.minima.LassoAdmmTest

  27. object LassoAdmmTest2 extends App

    The LassoAdmmTest2 object tests the LassoAdmm object using the following regression equation.

    y = b dot x = b_0 + b_1*x_1 + b_2*x_2.

    See also

    www.cs.jhu.edu/~svitlana/papers/non_refereed/optimization_1.pdf

    > runMain scalation.minima.LassoAdmmTest2

  28. object NLPTest1 extends App

    The NLPTest1 object is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:

    'sdcs' - Gradient Descent with Custom Line Search
    'sdgs' - Gradient Descent with Golden Section Line Search
    'prcg' - Polak-Ribiere Conjugate Gradient with Golden Section Line Search
    'sdws' - Gradient Descent with Wolfe Line Search
    'bfgs' - Broyden–Fletcher–Goldfarb–Shanno with Wolfe Line Search

  29. object NLPTest2 extends App

    The NLPTest2 object is used to test several Non-Linear Programming (NLP) algorithms on constrained problems.

  30. object NLPTestCases1 extends App

    The NLPTestCases1 object is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:

    'sdcs' - Gradient Descent with Custom Line Search
    'sdgs' - Gradient Descent with Golden Section Line Search
    'prcg' - Polak-Ribiere Conjugate Gradient with Golden Section Line Search
    'sdws' - Gradient Descent with Wolfe Line Search
    'bfgs' - Broyden–Fletcher–Goldfarb–Shanno with Wolfe Line Search

  31. object NLPTestCases2 extends App

    The NLPTestCases2 object is used to test several Non-Linear Programming (NLP) algorithms on constrained problems.

  32. object NelderMeadSimplexTest extends App

    The NelderMeadSimplexTest object is used to test the NelderMeadSimplex class.
    > runMain scalation.minima.NelderMeadSimplexTest

  33. object NewtonRaphsonTest extends App

    The NewtonRaphsonTest object is used to test the NewtonRaphson class. This test numerically approximates the derivative.
    > runMain scalation.minima.NewtonRaphsonTest

  34. object NewtonRaphsonTest2 extends App

    The NewtonRaphsonTest2 object is used to test the NewtonRaphson class. This test passes in a function for the derivative.
    > runMain scalation.minima.NewtonRaphsonTest2

  35. object QuadraticSimplexTest extends App

    The QuadraticSimplexTest object is used to test the QuadraticSimplex class.
    > runMain scalation.minima.QuadraticSimplexTest

  36. object QuasiNewtonTest extends App

    The QuasiNewtonTest object is used to test the QuasiNewton class.
    > runMain scalation.minima.QuasiNewtonTest

  37. object RevisedSimplexTest extends App

    The RevisedSimplexTest object is used to test the RevisedSimplex class.
    > runMain scalation.minima.RevisedSimplexTest

  38. object Simplex2PTest extends App

    The Simplex2PTest object is used to test the Simplex2P class.

  39. object SimplexBGTest extends App

    The SimplexBGTest object is used to test the SimplexBG class.

  40. object SimplexFTTest extends App

    The SimplexFTTest object is used to test the SimplexFT class.

  41. object SimplexTest extends App

    The SimplexTest object is used to test the Simplex class.

  42. object StochasticGradientTest extends App

    The StochasticGradientTest object is used to test the StochasticGradient class.

    See also

    scalation.analytics.RegressionTest3

    > runMain scalation.minima.StochasticGradientTest

  43. object WolfeLSTest extends App

    The WolfeLSTest object is used to test the WolfeLS class on scalar functions.
    > runMain scalation.minima.WolfeLSTest

  44. object WolfeLSTest2 extends App

    The WolfeLSTest2 object is used to test the WolfeLS class on vector functions.
    > runMain scalation.minima.WolfeLSTest2
