package minima

The minima package contains classes, traits and objects for optimization to find minima.

Type Members

  1. class Brent extends AnyRef

    The Brent class is used to find roots (zeros) for a one-dimensional (scalar) function 'g'. Depending on the FunctionSelector, it can find zeros for derivatives or finite differences, which may indicate optima for function 'g'. The code is directly translated from the following:

    See also

    math.haifa.ac.il/ronn/NA/NAprogs/brent.java

  2. class CheckLP extends Error

    The CheckLP class checks the solution to Linear Programming (LP) problems. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', determine whether the values for the solution/decision vector 'x' minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    Check the feasibility and optimality of the solution.
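
    As a sketch of the feasibility half of such a check, the following standalone Scala function (an illustrative helper, not the CheckLP API) verifies 'a x <= b' and 'x >= 0' within a tolerance:

```scala
object CheckLPSketch {
  // Verify the LP constraints a x <= b and x >= 0, within tolerance eps
  def isFeasible (a: Array [Array [Double]], b: Array [Double], x: Array [Double],
                  eps: Double = 1e-9): Boolean = {
    val ax = a.map (row => row.zip (x).map { case (aij, xj) => aij * xj }.sum)
    ax.zip (b).forall { case (axi, bi) => axi <= bi + eps } &&
      x.forall (_ >= -eps)
  }
}
```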

  3. class ConjugateGradient extends Minimizer with Error

    The ConjugateGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which for this implementation uses the Polak-Ribiere technique.

    dir_k = -gradient (x) + beta * dir_k-1

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
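
    The beta weighting and the direction update above can be sketched in plain Scala (names are illustrative assumptions, not the ConjugateGradient API):

```scala
object PolakRibiereSketch {
  def dot (a: Array [Double], b: Array [Double]): Double =
    a.zip (b).map { case (x, y) => x * y }.sum

  // Polak-Ribiere weighting: beta = g_k . (g_k - g_k-1) / (g_k-1 . g_k-1)
  def beta (gk: Array [Double], gkm1: Array [Double]): Double =
    dot (gk, gk.zip (gkm1).map { case (a, b) => a - b }) / dot (gkm1, gkm1)

  // dir_k = -gradient (x) + beta * dir_k-1
  def direction (gk: Array [Double], gkm1: Array [Double],
                 dirKm1: Array [Double]): Array [Double] = {
    val b = beta (gk, gkm1)
    gk.zip (dirKm1).map { case (g, d) => -g + b * d }
  }
}
```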

  4. class CoordinateDescent extends Minimizer with Error

    The CoordinateDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Coordinate Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm picks coordinate directions (cyclically) and takes steps in those directions. The algorithm iterates until it converges.

    dir_k = kth coordinate direction

    minimize f(x)
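
    A minimal standalone sketch of cyclic coordinate descent (illustrative only, with a crude step-halving line search in place of the class's own):

```scala
object CoordinateDescentSketch {
  // Cycle through the coordinate directions, stepping while the objective improves,
  // then halve the step size and repeat
  def solve (f: Array [Double] => Double, x0: Array [Double],
             step0: Double = 1.0, maxCycles: Int = 50): Array [Double] = {
    val x = x0.clone
    var step = step0
    for (_ <- 1 to maxCycles) {
      for (i <- x.indices) {
        while (f ({ val y = x.clone; y(i) += step; y }) < f (x)) x(i) += step
        while (f ({ val y = x.clone; y(i) -= step; y }) < f (x)) x(i) -= step
      }
      step *= 0.5                                    // refine the step each cycle
    }
    x
  }
}
```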

  5. class DualSimplex extends MinimizerLP

    The DualSimplex class solves Linear Programming (LP) problems using a tableau based Dual Simplex Algorithm. It is particularly useful when re-optimizing after a constraint has been added. The algorithm starts with an infeasible super-optimal solution and moves toward (primal) feasibility and optimality.

    Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    Creates an 'MM-by-NN' simplex tableau with
    -- [0..M-1, 0..N-1]   = a (constraint matrix)
    -- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
    -- [0..M-1, NN-1]     = b (limit/RHS vector)
    -- [M, 0..NN-2]       = c (cost vector)

  6. class GeneticAlgorithm extends AnyRef

    The GeneticAlgorithm class performs local search to find minima of functions defined on integer vector domains (z^n).

    minimize f(x) subject to g(x) <= 0, x in Z^n

  7. class GoldenSectionLS extends LineSearch

    The GoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see GoldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see GoldenSectionLSTest2).
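
    The core of a golden section search (without the expansion phase) can be sketched as follows; this is an illustrative standalone version, not the GoldenSectionLS API:

```scala
object GoldenSectionSketch {
  private val phi = (math.sqrt (5.0) - 1.0) / 2.0   // inverse golden ratio, about 0.618

  // Minimize a unimodal scalar function f on the interval [a0, b0]
  def minimize (f: Double => Double, a0: Double, b0: Double, tol: Double = 1e-8): Double = {
    var a = a0; var b = b0
    while (b - a > tol) {
      val c = b - phi * (b - a)                     // interior point nearer a
      val d = a + phi * (b - a)                     // interior point nearer b
      if (f (c) < f (d)) b = d else a = c           // keep the bracket holding the minimum
    }
    (a + b) / 2.0
  }
}
```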

  8. class GradientDescent extends Minimizer with Error

    The GradientDescent class solves unconstrained Non-Linear Programming (NLP) problems using the Gradient Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The class assumes that partial derivative functions are not available unless explicitly given via the 'setDerivatives' method.

    dir_k = -gradient (x)

    minimize f(x)
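
    A minimal standalone sketch of fixed-step gradient descent with a central-difference numerical gradient (illustrative names and step rule, not the GradientDescent API):

```scala
object GradientDescentSketch {
  // Central-difference numerical gradient, for when partial derivatives are not given
  def grad (f: Array [Double] => Double, x: Array [Double], h: Double = 1e-6): Array [Double] =
    x.indices.map { i =>
      val xp = x.clone; xp(i) += h
      val xm = x.clone; xm(i) -= h
      (f (xp) - f (xm)) / (2.0 * h)
    }.toArray

  // Fixed-step descent: x := x - eta * gradient f(x)
  def solve (f: Array [Double] => Double, x0: Array [Double],
             eta: Double = 0.1, maxIt: Int = 1000): Array [Double] = {
    var x = x0
    for (_ <- 1 to maxIt) {
      val g = grad (f, x)
      x = x.zip (g).map { case (xi, gi) => xi - eta * gi }
    }
    x
  }
}
```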

  9. class IntegerGoldenSectionLS extends AnyRef

    The IntegerGoldenSectionLS class performs a line search on 'f(x)' to find a minimal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see IntegerGoldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new function 'g(y) = x0 + direction * y' (see IntegerGoldenSectionLSTest2).

  10. class IntegerLP extends AnyRef

    The IntegerLP class solves Integer Linear Programming (ILP) and Mixed Integer Linear Programming (MILP) problems recursively using the Simplex algorithm. First, an LP problem is solved. If the optimal solution vector x is entirely integer valued, the ILP is solved. If not, pick the first 'x_j' that is not integer valued. Define two new LP problems which bound 'x_j' to the integer below and above, respectively. Branch by solving each of these LP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may be optionally provided for MILP problems. FIX: Use the Dual Simplex Algorithm for better performance.

    Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0, some x_i must be integer valued

    Make 'b_i' negative to indicate a '>=' constraint.
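
    The branching step described above can be sketched as: find the first fractional component of 'x' and return the floor/ceiling bounds defining the two subproblems (an illustrative helper, not the IntegerLP API):

```scala
object BranchSketch {
  // Return the index of the first non-integer-valued x_j together with the
  // bounds (floor x_j, ceil x_j) for the two branch LPs, or None if x is integral
  def firstFractional (x: Array [Double], eps: Double = 1e-9): Option [(Int, Double, Double)] =
    x.indexWhere (xi => math.abs (xi - math.round (xi)) > eps) match {
      case -1 => None
      case j  => Some ((j, math.floor (x(j)), math.ceil (x(j))))
    }
}
```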

  11. class IntegerLocalSearch extends AnyRef

    The IntegerLocalSearch class performs local search to find minima of functions defined on integer vector domains (z^n).

    minimize f(x) subject to g(x) <= 0, x in Z^n

  12. class IntegerNLP extends AnyRef

    The IntegerNLP class solves Integer Non-Linear Programming (INLP) and Mixed Integer Non-Linear Programming (MINLP) problems recursively using the Simplex algorithm. First, an NLP problem is solved. If the optimal solution vector 'x' is entirely integer valued, the INLP is solved. If not, pick the first 'x_j' that is not integer valued. Define two new NLP problems which bound 'x_j' to the integer below and above, respectively. Branch by solving each of these NLP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may be optionally provided for MINLP problems.

    Given an objective function 'f(x)' and a constraint function 'g(x)', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying the constraint function, i.e.,

    minimize f(x) subject to g(x) <= 0, some x_i must be integer-valued

    Make 'b_i' negative to indicate a '>=' constraint.

  13. class IntegerTabuSearch extends AnyRef

    The IntegerTabuSearch class performs tabu search to find minima of functions defined on integer vector domains 'z^n'. Tabu search will not re-visit points already deemed sub-optimal.

    minimize f(x) subject to g(x) <= 0, x in Z^n

  14. class L_BFGS_B extends Minimizer

    The L_BFGS_B class implements the Limited-memory Broyden–Fletcher–Goldfarb–Shanno for Bound constrained optimization (L-BFGS-B) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. L-BFGS-B determines a search direction by deflecting the steepest descent direction vector (opposite the gradient) by multiplying it by a matrix that approximates the inverse Hessian. Furthermore, only a few vectors represent the approximation of the Hessian Matrix (limited memory). The parameters estimated are also bounded within user-specified lower and upper bounds.

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

  15. trait LineSearch extends AnyRef

    This trait specifies the pattern for Line Search (LS) algorithms that perform line search on f(x) to find an x-value that minimizes a function f.

  16. trait Minimizer extends AnyRef

    The Minimizer trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

    where f is the objective function to be minimized
          g is the constraint function to be satisfied, if any

    Classes mixing in this trait must implement a function 'fg' that rolls the constraints into the objective function as penalties for constraint violation, a one-dimensional Line Search (LS) algorithm 'lineSearch', and an iterative method 'solve' that searches for improved solutions ('x' vectors with lower objective function values 'f(x)').

  17. trait MinimizerLP extends Error

    The MinimizerLP trait sets the pattern for optimization algorithms for solving Linear Programming (LP) problems of the form:

    minimize c x subject to a x <= b, x >= 0

    where a is the constraint matrix
          b is the limit/RHS vector
          c is the cost vector

    Classes mixing in this trait must implement an objective function 'objF' and an iterative method 'solve' that searches for improved solution 'x' vectors with lower objective function values.

  18. class NelderMeadSimplex extends Minimizer with Error

    The NelderMeadSimplex class solves Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function 'f' and its dimension 'n', the algorithm moves a simplex defined by 'n + 1' points in order to find an optimal solution. The algorithm is derivative-free.

    minimize f(x)

  19. class NewtonRaphson extends AnyRef

    The NewtonRaphson class is used to find roots (zeros) for a one-dimensional (scalar) function 'g'. Depending on the FunctionSelector, it can find zeros for derivatives or finite differences, which may indicate optima for function 'g'.
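
    The iteration can be sketched as a standalone root finder using a central finite difference in place of the derivative, mirroring the finite-difference option above (illustrative names, not the NewtonRaphson API):

```scala
object NewtonRaphsonSketch {
  // Find a zero of g near x0 via x := x - g(x) / g'(x), where g' is
  // approximated by a central finite difference
  def root (g: Double => Double, x0: Double, tol: Double = 1e-10, maxIt: Int = 100): Double = {
    val h  = 1e-7
    var x  = x0
    var it = 0
    while (math.abs (g (x)) > tol && it < maxIt) {
      val dg = (g (x + h) - g (x - h)) / (2.0 * h)  // finite-difference derivative
      x -= g (x) / dg
      it += 1
    }
    x
  }
}
```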

  20. class QuadraticSimplex extends Error

    The QuadraticSimplex class solves Quadratic Programming (QP) problems using the Quadratic Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b', cost matrix 'q' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = 1/2 x q x + c x subject to a x <= b, x >= 0

    Creates an 'MM-by-NN' simplex tableau. This implementation is restricted to linear constraints 'a x <= b' and 'q' being a positive semi-definite matrix. Pivoting must now also handle non-linear complementary slackness.

    See also

    www.engineering.uiowa.edu/~dbricker/lp_stacks.html

  21. class QuasiNewton extends Minimizer with Error

    The QuasiNewton class implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. BFGS determines a search direction by deflecting the steepest descent direction vector (opposite the gradient) by multiplying it by a matrix that approximates the inverse Hessian. Note, this implementation may be set up to work with the matrix 'b' (approximate Hessian) or directly with the 'binv' matrix (the inverse of 'b').

    minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]

  22. class RevisedSimplex extends MinimizerLP

    The RevisedSimplex class solves Linear Programming (LP) problems using the Revised Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    The Revised Simplex Algorithm operates on 'b_inv', which is the inverse of the basis-matrix ('ba' = 'B'). It has benefits over the Simplex Algorithm (less memory and reduced chance of round off errors).

  23. class Simplex extends MinimizerLP

    The Simplex class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    In case of 'a_i x >= b_i', use '-b_i' as an indicator of a '>=' constraint. The program will flip such negative b_i back to positive as well as use a surplus variable instead of the usual slack variable, i.e.,

    a_i x <= b_i => a_i x + s_i = b_i   // use slack variable s_i with coefficient 1
    a_i x >= b_i => a_i x + s_i = b_i   // use surplus variable s_i with coefficient -1

    Creates an MM-by-NN simplex tableau with
    -- [0..M-1, 0..N-1]   = a (constraint matrix)
    -- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
    -- [0..M-1, NN-1]     = b (limit/RHS vector)
    -- [M, 0..NN-2]       = c (cost vector)
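
    For the all-'<=' case, the tableau layout above can be sketched as a plain 2-D array build (an illustrative sketch, not the Simplex constructor):

```scala
object TableauSketch {
  // Build the MM-by-NN tableau (MM = M+1, NN = N+M+1) for a x <= b with cost c
  def build (a: Array [Array [Double]], b: Array [Double],
             c: Array [Double]): Array [Array [Double]] = {
    val m  = a.length; val n = a(0).length
    val nn = n + m + 1
    val t  = Array.ofDim [Double] (m + 1, nn)
    for (i <- 0 until m) {
      for (j <- 0 until n) t(i)(j) = a(i)(j)        // [0..M-1, 0..N-1]   = a
      t(i)(n + i)  = 1.0                            // [0..M-1, N..M+N-1] = slack identity
      t(i)(nn - 1) = b(i)                           // [0..M-1, NN-1]     = b
    }
    for (j <- 0 until n) t(m)(j) = c(j)             // [M, 0..NN-2]       = c (zeros for slacks)
    t
  }
}
```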

  24. class Simplex2P extends MinimizerLP

    The Simplex2P class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    In case of 'a_i x >= b_i', use '-b_i' as an indicator of a '>=' constraint. The program will flip such negative b_i back to positive as well as use a surplus and artificial variable instead of the usual slack variable, i.e.,

    a_i x <= b_i => a_i x + s_i = b_i   // use slack variable s_i with coefficient 1
    a_i x >= b_i => a_i x + s_i = b_i   // use surplus variable s_i with coefficient -1

    For each '>=' constraint, an artificial variable is introduced and put into the initial basis. These artificial variables must be removed from the basis during Phase I of the Two-Phase Simplex Algorithm. After this, or if there are no artificial variables, Phase II is used to find an optimal value for 'x' and the optimum value for 'f'.

    Creates an 'MM-by-nn' simplex tableau with
    -- [0..M-1, 0..N-1]    = a (constraint matrix)
    -- [0..M-1, N..M+N-1]  = s (slack/surplus variable matrix)
    -- [0..M-1, M+N..nn-2] = r (artificial variable matrix)
    -- [0..M-1, nn-1]      = b (limit/RHS vector)
    -- [M, 0..nn-2]        = c (cost vector)

  25. class SimplexBG extends MinimizerLP

    The SimplexBG class solves Linear Programming (LP) problems using the Bartels-Golub (BG) Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    The BG Simplex Algorithm performs LU Factorization/Decomposition of the basis-matrix ('ba' = 'B') rather than computing inverses ('b_inv'). It has benefits over the (Revised) Simplex Algorithm (less run-time, less memory, and much reduced chance of round off errors).

  26. class SimplexFT extends MinimizerLP

    The SimplexFT class solves Linear Programming (LP) problems using the Forrest-Tomlin (FT) Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,

    minimize f(x) = c x subject to a x <= b, x >= 0

    The FT Simplex Algorithm performs LU Factorization/Decomposition of the basis-matrix ('ba' = 'B') rather than computing inverses ('b_inv'). It has benefits over the (Revised) Simplex Algorithm (less run-time, less memory, and much reduced chance of round off errors).

  27. class StochasticGradient extends Minimizer with Error

    The StochasticGradient class solves unconstrained Non-Linear Programming (NLP) problems using the Stochastic Gradient Descent algorithm. Given a function 'f' and a starting point 'x0', the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The algorithm is stochastic in the sense that only a single batch is used in each step of the optimization. Examples (a number of rows) are chosen for each batch. FIX - provide option to randomly select samples in batch.

    See also

    leon.bottou.org/publications/pdf/compstat-2010.pdf

    dir_k = -gradient (x)

    minimize f(x)
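
    One epoch of mini-batch SGD for a least-squares objective might be sketched as follows; the least-squares loss, names, and sequential (non-random) batch selection are illustrative assumptions, not the StochasticGradient API:

```scala
object SGDSketch {
  // One pass over (xs, ys) for least squares y ~ w . x, taking one gradient
  // step per batch of size bs (batches are sequential, not randomly sampled)
  def epoch (xs: Array [Array [Double]], ys: Array [Double], w0: Array [Double],
             eta: Double, bs: Int): Array [Double] = {
    var w = w0
    for (batch <- xs.indices.grouped (bs)) {
      val g = Array.fill (w.length)(0.0)            // batch-averaged gradient
      for (i <- batch) {
        val err = xs(i).zip (w).map { case (x, wi) => x * wi }.sum - ys(i)
        for (j <- w.indices) g(j) += 2.0 * err * xs(i)(j) / batch.size
      }
      w = w.zip (g).map { case (wi, gi) => wi - eta * gi }
    }
    w
  }
}
```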

  28. class WolfeLS extends LineSearch

    The WolfeLS class performs an inexact line search on 'f' to find a point 'x' that exhibits (1) sufficient decrease ('f(x)' enough less than 'f(0)') and (2) a slope at 'x' less steep than the slope at 0. That is, the line search looks for a value of 'x' satisfying the two Wolfe conditions.

    f(x)    <= f(0) + c1 * f'(0) * x    Wolfe condition 1 (Armijo condition)
    |f'(x)| <= |c2 * f'(0)|             Wolfe condition 2 (Strong version)
    f'(x)   >= c2 * f'(0)               Wolfe condition 2 (Weak version, more robust)

    It works on scalar functions (@see WolfeLSTest). If starting with a vector function f(x), simply define a new function g(y) = x0 + direction * y (@see WolfeLSTest2).
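
    The two (weak) conditions can be sketched as standalone predicates on a scalar function and its derivative (illustrative names and defaults, not the WolfeLS API):

```scala
object WolfeSketch {
  // Wolfe condition 1 (Armijo, sufficient decrease) at step x along the line,
  // given scalar f and its derivative df; the usual constants satisfy 0 < c1 < c2 < 1
  def wolfe1 (f: Double => Double, df: Double => Double, x: Double,
              c1: Double = 1e-4): Boolean =
    f (x) <= f (0.0) + c1 * df (0.0) * x

  // Wolfe condition 2 (weak curvature version)
  def wolfe2 (df: Double => Double, x: Double, c2: Double = 0.9): Boolean =
    df (x) >= c2 * df (0.0)
}
```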

Value Members

  1. object BrentTest extends App

    The BrentTest object is used to test the Brent class.

    > runMain scalation.minima.BrentTest

  2. object ConjugateGradientTest extends App

    The ConjugateGradientTest object is used to test the ConjugateGradient class.

    > runMain scalation.minima.ConjugateGradientTest

  3. object CoordinateDescentTest extends App

    The CoordinateDescentTest object is used to test the CoordinateDescent class.

    > runMain scalation.minima.CoordinateDescentTest

  4. object DualSimplexTest extends App

    The DualSimplexTest object is used to test the DualSimplex class.

  5. object Ftran

    The Ftran object ...

  6. object FunctionSelector extends Enumeration
  7. object GeneticAlgorithmTest extends App

    The GeneticAlgorithmTest object is used to test the GeneticAlgorithm class (unconstrained).

  8. object GoldenSectionLSTest extends App

    The GoldenSectionLSTest object is used to test the GoldenSectionLS class on scalar functions.

  9. object GoldenSectionLSTest2 extends App

    The GoldenSectionLSTest2 object is used to test the GoldenSectionLS class on vector functions.

  10. object GradientDescentTest extends App

    The GradientDescentTest object is used to test the GradientDescent class.

    > runMain scalation.minima.GradientDescentTest

  11. object IntegerGoldenSectionLSTest extends App

    The IntegerGoldenSectionLSTest object is used to test the IntegerGoldenSectionLS class on scalar functions.

  12. object IntegerLPTest extends App

    The IntegerLPTest object is used to test the IntegerLP class.

    real solution    x = (.8, 1.6), f = 8.8
    integer solution x = (2, 1),    f = 10

    See also

    Linear Programming and Network Flows, Example 6.14

  13. object IntegerLocalSearchTest extends App

    The IntegerLocalSearchTest object is used to test the IntegerLocalSearch class (unconstrained).

  14. object IntegerLocalSearchTest2 extends App

    The IntegerLocalSearchTest2 object is used to test the IntegerLocalSearch class (constrained).

  15. object IntegerNLPTest extends App

    The IntegerNLPTest object is used to test the IntegerNLP class.

    real solution    x = (.8, 1.6), f = 8.8
    integer solution x = (2, 1),    f = 10

    See also

    Linear Programming and Network Flows, Example 6.14

  16. object IntegerTabuSearchTest extends App

    The IntegerTabuSearchTest object is used to test the IntegerTabuSearch class (unconstrained).

  17. object IntegerTabuSearchTest2 extends App

    The IntegerTabuSearchTest2 object is used to test the IntegerTabuSearch class (constrained).

  18. object L_BFGS_BTest extends App

    The L_BFGS_BTest object is used to test the L_BFGS_B class.

    > runMain scalation.minima.L_BFGS_BTest

  19. object LassoAdmm

    The LassoAdmm class performs LASSO regression using the Alternating Direction Method of Multipliers (ADMM). Minimize the following objective function to find an optimal solution for 'x'.

    argmin_x (1/2)||Ax − b||_2^2 + λ||x||_1

    A = data matrix
    b = response vector
    λ = weighting on the l_1 penalty
    x = solution (coefficient vector)

    See also

    https://web.stanford.edu/~boyd/papers/admm_distr_stats.html

    euler.stat.yale.edu/~tba3/stat612/lectures/lec23/lecture23.pdf
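
    The elementwise soft-thresholding operator, the proximal map of the l_1 term that appears in the ADMM updates for LASSO, can be sketched as (illustrative helper, not the LassoAdmm API):

```scala
object SoftThresholdSketch {
  // prox of k * ||.||_1 applied elementwise: shrink v toward 0 by k, clipping at 0
  def soft (v: Double, k: Double): Double =
    if (v > k) v - k else if (v < -k) v + k else 0.0
}
```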

  20. object LassoAdmmTest extends App

    The LassoAdmmTest object tests LassoAdmm class using the following regression equation.

    y = b dot x = b_0 + b_1*x_1 + b_2*x_2.

    See also

    statmaster.sdu.dk/courses/st111/module03/index.html

    > runMain scalation.minima.LassoAdmmTest

  21. object NLPTest1 extends App

    The NLPTest1 object is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:

    'sdcs' - Gradient Descent with Custom Line Search
    'sdgs' - Gradient Descent with Golden Section Line Search
    'prcg' - Polak-Ribiere Conjugate Gradient with Golden Section Line Search
    'sdws' - Gradient Descent with Wolfe Line Search
    'bfgs' - Broyden–Fletcher–Goldfarb–Shanno with Wolfe Line Search

  22. object NLPTest2 extends App

    The NLPTest2 object is used to test several Non-Linear Programming (NLP) algorithms on constrained problems.

  23. object NLPTestCases1 extends App

    The NLPTestCases1 object is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems. Algorithms:

    'sdcs' - Gradient Descent with Custom Line Search
    'sdgs' - Gradient Descent with Golden Section Line Search
    'prcg' - Polak-Ribiere Conjugate Gradient with Golden Section Line Search
    'sdws' - Gradient Descent with Wolfe Line Search
    'bfgs' - Broyden–Fletcher–Goldfarb–Shanno with Wolfe Line Search

  24. object NLPTestCases2 extends App

    The NLPTestCases2 object is used to test several Non-Linear Programming (NLP) algorithms on constrained problems.

  25. object NelderMeadSimplexTest extends App

    The NelderMeadSimplexTest object is used to test the NelderMeadSimplex class.

    > runMain scalation.minima.NelderMeadSimplexTest

  26. object NewtonRaphsonTest extends App

    The NewtonRaphsonTest object is used to test the NewtonRaphson class.

    > runMain scalation.minima.NewtonRaphsonTest

  27. object QuadraticSimplexTest extends App

    The QuadraticSimplexTest object is used to test the QuadraticSimplex class.

    > runMain scalation.minima.QuadraticSimplexTest

  28. object QuasiNewtonTest extends App

    The QuasiNewtonTest object is used to test the QuasiNewton class.

    > runMain scalation.minima.QuasiNewtonTest

  29. object RevisedSimplexTest extends App

    The RevisedSimplexTest object is used to test the RevisedSimplex class.

    > runMain scalation.minima.RevisedSimplexTest

  30. object Simplex2PTest extends App

    The Simplex2PTest object is used to test the Simplex2P class.

  31. object SimplexBGTest extends App

    The SimplexBGTest object is used to test the SimplexBG class.

  32. object SimplexFTTest extends App

    The SimplexFTTest object is used to test the SimplexFT class.

  33. object SimplexTest extends App

    The SimplexTest object is used to test the Simplex class.

  34. object StochasticGradientTest extends App

    The StochasticGradientTest object is used to test the StochasticGradient class.

    See also

    scalation.analytics.RegressionTest3

    > runMain scalation.minima.StochasticGradientTest

  35. object WolfeLSTest extends App

    The WolfeLSTest object is used to test the WolfeLS class on scalar functions.

  36. object WolfeLSTest2 extends App

    The WolfeLSTest2 object is used to test the WolfeLS class on vector functions.
