This class solves Linear Programming (LP) problems using the Bartels-Golub (BG) Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
The BG Simplex Algorithm performs LU Factorization/Decomposition of the basis matrix ('ba' = 'B') rather than computing inverses ('b_inv'). It has benefits over the (Revised) Simplex Algorithm: less runtime, less memory, and a much reduced chance of round-off errors.
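As a rough illustration of the problem form only (not this class's API), the following self-contained Scala sketch poses a small LP instance and checks a candidate x against f(x) = c x and the constraints; all names and data are made up for illustration:

    object LPExample extends App
    {
        val a = Array (Array (2.0, 1.0),               // constraint matrix
                       Array (1.0, 3.0))
        val b = Array (8.0, 9.0)                       // limit/RHS vector
        val c = Array (-3.0, -4.0)                     // cost vector (to minimize)
        val x = Array (3.0, 2.0)                       // candidate solution

        def dot (u: Array [Double], v: Array [Double]): Double =
            (u zip v).map { case (ui, vi) => ui * vi }.sum

        val feasible = a.indices.forall (i => dot (a(i), x) <= b(i)) && x.forall (_ >= 0.0)
        println (s"f(x) = ${dot (c, x)}, feasible = $feasible")
    }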
This class checks the solution to Linear Programming (LP) problems. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', determine if the values for the solution/decision vector 'x' minimize the objective function f(x), while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
Check the feasibility and optimality of the solution.
Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.
dir_k = -gradient (x) + beta * dir_k-1
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
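A minimal Scala sketch of the Polak-Ribiere weighting and direction update described above (standalone helper functions, not this class's methods; the max-with-zero clamp is the common 'PR+' safeguard, an added assumption):

    def dot (u: Array [Double], v: Array [Double]): Double =
        (u zip v).map { case (ui, vi) => ui * vi }.sum

    def beta (gOld: Array [Double], g: Array [Double]): Double =      // Polak-Ribiere weight
        math.max (0.0, dot (g, (g zip gOld).map { case (gi, oi) => gi - oi }) / dot (gOld, gOld))

    def nextDir (g: Array [Double], dirOld: Array [Double], b: Double): Array [Double] =
        (g zip dirOld).map { case (gi, di) => -gi + b * di }          // dir_k = -gradient + beta * dir_k-1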
This class solves Linear Programming (LP) problems using a tableau based Dual Simplex Algorithm. It is particularly useful when re-optimizing after a constraint has been added. The algorithm starts with an infeasible super-optimal solution and moves toward (primal) feasibility and optimality.
Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
Creates an MM-by-NN simplex tableau with
-- [0..M-1, 0..N-1]   = a (constraint matrix)
-- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
-- [0..M-1, NN-1]     = b (limit/RHS vector)
-- [M, 0..NN-2]       = c (cost vector)
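For illustration, a plain Scala sketch that fills in this layout for the '<=' case (a hypothetical helper, not the class's actual constructor):

    def makeTableau (a: Array [Array [Double]], b: Array [Double], c: Array [Double]) =
    {
        val m = a.length; val n = a(0).length              // M constraints, N variables
        val t = Array.ofDim [Double] (m + 1, m + n + 1)    // MM = M+1 rows, NN = M+N+1 columns
        for (i <- 0 until m) {
            for (j <- 0 until n) t(i)(j) = a(i)(j)         // [0..M-1, 0..N-1]   = a
            t(i)(n + i) = 1.0                              // [0..M-1, N..M+N-1] = s (identity slacks)
            t(i)(m + n) = b(i)                             // [0..M-1, NN-1]     = b
        }
        for (j <- 0 until n) t(m)(j) = c(j)                // [M, 0..NN-2]       = c (zeros for slacks)
        t
    }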
This class performs local search to find minima of functions defined on integer vector domains (Z^n).
minimize f(x) subject to g(x) <= 0, x in Z^n
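A minimal, self-contained Scala sketch of such a local search (steepest descent over the +/-1 neighborhood; illustrative only, not this class's implementation):

    def localSearch (f: Array [Int] => Double, x0: Array [Int]): Array [Int] =
    {
        var x = x0.clone
        var improved = true
        while (improved) {                                 // stop at a local minimum
            improved = false
            for (j <- x.indices; d <- Seq (-1, 1)) {
                val y = x.clone; y(j) += d                 // neighbor x +/- e_j
                if (f (y) < f (x)) { x = y; improved = true }
            }
        }
        x
    }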
This class performs a line search on f(x) to find a minimal value for f. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from x1 (often 0) to xmax. A guess for xmax must be given, but can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see GoldenSectionLSTest). If starting with a vector function f(x), simply define a new function g(y) = x0 + direction * y (see GoldenSectionLSTest2).
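For the vector-function case, a small Scala sketch of the g(y) = f(x0 + direction * y) wrapper (the names x0 and dir are illustrative):

    def makeG (f: Array [Double] => Double, x0: Array [Double], dir: Array [Double]): Double => Double =
        (y: Double) => f ((x0 zip dir).map { case (xi, di) => xi + di * y })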
This class performs a line search on f(x) to find a minimal value for f. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from x1 (often 0) to xmax. A guess for xmax must be given, but can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see IntegerGoldenSectionLSTest). If starting with a vector function f(x), simply define a new function g(y) = x0 + direction * y (see IntegerGoldenSectionLSTest2).
This class solves Integer Linear Programming (ILP) and Mixed Integer Linear Programming (MILP) problems recursively using the Simplex algorithm. First, an LP problem is solved. If the optimal solution vector x is entirely integer valued, the ILP is solved. If not, pick the first x_j that is not integer valued. Define two new LP problems which bound x_j to the integer below and above, respectively. Branch by solving each of these LP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may be optionally provided for MILP problems. TODO: Use the Dual Simplex Algorithm for better performance.
Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0, some x_i must be integer valued
Make b_i negative to indicate a ">=" constraint
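A generic Scala sketch of the Branch and Bound recursion described above; 'relax' (solve the LP relaxation, None if infeasible) and 'bound' (add x_j <= k or x_j >= k+1) are hypothetical placeholders supplied by the caller, not this class's methods:

    def branchAndBound [P] (prob: P, best: Double)
                           (relax: P => Option [(Array [Double], Double)],
                            bound: (P, Int, Int, Boolean) => P): Double =
        relax (prob) match {
            case None                       => best              // infeasible branch
            case Some ((_, f)) if f >= best => best              // prune: cannot beat incumbent
            case Some ((x, f)) =>
                val j = x.indexWhere (xi => xi != math.floor (xi))
                if (j < 0) f                                     // all integer: new incumbent
                else {
                    val k  = math.floor (x(j)).toInt
                    val b1 = branchAndBound (bound (prob, j, k, true), best)(relax, bound)   // x_j <= k
                    branchAndBound (bound (prob, j, k + 1, false), b1)(relax, bound)         // x_j >= k+1
                }
        }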
This class performs local search to find minima of functions defined on integer vector domains (Z^n).
minimize f(x) subject to g(x) <= 0, x in Z^n
This class solves Integer Non-Linear Programming (INLP) and Mixed Integer Non-Linear Programming (MINLP) problems recursively. First, an NLP problem is solved. If the optimal solution vector x is entirely integer valued, the INLP is solved. If not, pick the first x_j that is not integer valued. Define two new NLP problems which bound x_j to the integer below and above, respectively. Branch by solving each of these NLP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may be optionally provided for MINLP problems.
Given an objective function f(x) and a constraint function g(x), find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying the constraint function, i.e.,
minimize f(x) subject to g(x) <= 0, some x_i must be integer-valued
Make b_i negative to indicate a ">=" constraint
This class performs tabu search to find minima of functions defined on integer vector domains (Z^n). Tabu search will not re-visit points already deemed sub-optimal.
minimize f(x) subject to g(x) <= 0, x in Z^n
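A minimal, self-contained Scala sketch of this idea (a tabu set of visited points over the +/-1 neighborhood; illustrative only, not this class's implementation):

    def tabuSearch (f: Array [Int] => Double, x0: Array [Int], maxIt: Int = 100): Array [Int] =
    {
        var x    = x0.clone
        var best = x
        val tabu = scala.collection.mutable.Set (x.toVector)     // points already visited
        for (_ <- 1 to maxIt) {
            val nbrs  = for (j <- x.indices; d <- Seq (-1, 1)) yield { val y = x.clone; y(j) += d; y }
            val fresh = nbrs.filterNot (y => tabu contains y.toVector)
            if (fresh.nonEmpty) {
                x = fresh.minBy (f)                              // best non-tabu neighbor
                tabu += x.toVector
                if (f (x) < f (best)) best = x
            }
        }
        best
    }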
This trait specifies the pattern for Line Search (LS) algorithms that perform line search on f(x) to find an x-value that minimizes a function f.
This trait sets the pattern for optimization algorithms for solving Non-Linear Programming (NLP) problems of the form:
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
where f is the objective function to be minimized
      g is the constraint function to be satisfied, if any
Classes mixing in this trait must implement a function (fg) that rolls the constraints into the objective function as penalties for constraint violation, a one-dimensional Line Search (LS) algorithm (lineSearch), and an iterative method (solve) that searches for improved solutions (x-vectors with lower objective function values f(x)).
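A sketch of the resulting trait shape in Scala, using the member names from the text (fg, lineSearch, solve); the signatures are assumptions for illustration:

    trait Minimizer
    {
        def fg (x: Array [Double]): Double                           // objective f plus constraint penalties
        def lineSearch (f: Double => Double, xmax: Double): Double   // 1-D search along a direction
        def solve (x0: Array [Double]): Array [Double]               // iterate toward a minimizing x
    }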
This trait sets the pattern for optimization algorithms for solving Linear Programming (LP) problems of the form:
minimize c x subject to a x <= b, x >= 0
where a is the constraint matrix
      b is the limit/RHS vector
      c is the cost vector
Classes mixing in this trait must implement an objective function (objF) and an iterative method (solve) that searches for improved solutions (x-vectors with lower objective function values).
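A comparable Scala sketch for this trait, again with illustrative signatures for objF and solve:

    trait MinimizerLP
    {
        def objF (x: Array [Double]): Double                         // objective: c x
        def solve (): Array [Double]                                 // search for an optimal x
    }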
This class solves unconstrained Non-Linear Programming (NLP) problems using the Nelder-Mead Simplex algorithm. Given a function 'f' and its dimension 'n', the algorithm moves a simplex defined by n + 1 points in order to find an optimal solution. The algorithm is derivative-free.
minimize f(x)
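For illustration, a self-contained Scala sketch of the reflection step at the heart of Nelder-Mead (reflect the worst vertex through the centroid of the others); this is one move of the algorithm, not this class's code:

    def reflect (simplex: Array [Array [Double]], worst: Int, alpha: Double = 1.0): Array [Double] =
    {
        val n    = simplex(0).length
        val rest = simplex.indices.filter (_ != worst)
        val cen  = Array.tabulate (n)(j => rest.map (i => simplex(i)(j)).sum / rest.size)   // centroid
        Array.tabulate (n)(j => cen(j) + alpha * (cen(j) - simplex(worst)(j)))              // reflected point
    }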
This class is used to find roots (zeros) for a one-dimensional (scalar) function f. If f is the derivative of some other function g, then this technique can be used to find optima for g. Caveat: Use Conjugate Gradient or Quasi-Newton for complex optimizations.
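A minimal, self-contained Newton-Raphson sketch in Scala (the tolerance and iteration cap are illustrative choices, not this class's defaults):

    def newtonRaphson (f: Double => Double, df: Double => Double,
                       x0: Double, tol: Double = 1E-10, maxIt: Int = 100): Double =
    {
        var (x, it) = (x0, 0)
        while (math.abs (f (x)) > tol && it < maxIt) { x -= f (x) / df (x); it += 1 }
        x                                                // approximate root: f(x) ~ 0
    }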
This class solves Quadratic Programming (QP) problems using the Quadratic Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b', cost matrix 'q' and cost vector 'c', find values for the solution/decision vector x that minimize the objective function f(x), while satisfying all of the constraints, i.e.,
minimize f(x) = 1/2 x q x + c x subject to a x <= b, x >= 0
Creates an MM-by-NN simplex tableau. This implementation is restricted to linear constraints (a x <= b) and 'q' being a positive semi-definite matrix. Pivoting must now also handle nonlinear complementary slackness.
http://www.engineering.uiowa.edu/~dbricker/lp_stacks.html
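For illustration, a small Scala sketch that evaluates the quadratic objective f(x) = 1/2 x q x + c x with plain arrays (a standalone helper, not this class's code):

    def objQ (q: Array [Array [Double]], c: Array [Double], x: Array [Double]): Double =
    {
        val qx = q.map (row => (row zip x).map { case (r, xi) => r * xi }.sum)     // q x
        0.5 * (x zip qx).map { case (xi, qi) => xi * qi }.sum +                    // 1/2 x q x
              (c zip x).map { case (ci, xi) => ci * xi }.sum                       // + c x
    }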
Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton Algorithm for solving Non-Linear Programming (NLP) problems. BFGS determines a search direction by deflecting the steepest descent direction vector (opposite the gradient) by multiplying it by a matrix that approximates the inverse Hessian. Note, this implementation may be set up to work with the matrix 'b' (approximate Hessian) or directly with the 'binv' matrix (the inverse of b).
minimize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
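A self-contained Scala sketch of the standard BFGS inverse-Hessian ('binv') update, binv' = (I - rho s y^T) binv (I - rho y s^T) + rho s s^T with rho = 1 / (y . s), where s is the step taken and y the change in gradient; dense arrays are used for illustration only:

    def bfgsUpdate (binv: Array [Array [Double]],
                    s: Array [Double], y: Array [Double]): Array [Array [Double]] =
    {
        val n   = s.length
        val rho = 1.0 / (y zip s).map { case (yi, si) => yi * si }.sum
        val a   = Array.tabulate (n, n)((i, j) => (if (i == j) 1.0 else 0.0) - rho * s(i) * y(j))
        def mul (p: Array [Array [Double]], q: Array [Array [Double]]) =
            Array.tabulate (n, n)((i, j) => (0 until n).map (k => p(i)(k) * q(k)(j)).sum)
        val h = mul (mul (a, binv), Array.tabulate (n, n)((i, j) => a(j)(i)))      // a binv a^T
        Array.tabulate (n, n)((i, j) => h(i)(j) + rho * s(i) * s(j))               // + rho s s^T
    }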
This class solves Linear Programming (LP) problems using the Revised Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
The Revised Simplex Algorithm operates on 'b_inv', which is the inverse of the basis matrix ('ba' = 'B'). It has benefits over the Simplex Algorithm (less memory and a reduced chance of round-off errors).
This class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
In case of "a_i x >= b_i", use -b_i as an indicator of a ">=" constraint. The program will flip such negative b_i back to positive as well as use a surplus variable instead of the usual slack variable, i.e., a_i x <= b_i => a_i x + s_i = b_i // use slack variable s_i with coeff 1 a_i x >= b_i => a_i x + s_i = b_i // use surplus variable s_i with coeff -1
Creates an MM-by-NN simplex tableau with
-- [0..M-1, 0..N-1]   = a (constraint matrix)
-- [0..M-1, N..M+N-1] = s (slack/surplus variable matrix)
-- [0..M-1, NN-1]     = b (limit/RHS vector)
-- [M, 0..NN-2]       = c (cost vector)
This class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that minimize the objective function f(x), while satisfying all of the constraints, i.e.,
minimize f(x) = c x subject to a x <= b, x >= 0
In case of "a_i x >= b_i", use -b_i as an indicator of a ">=" constraint. The program will flip such negative b_i back to positive as well as use a surplus and artificial variable instead of the usual slack variable, i.e., a_i x <= b_i => a_i x + s_i = b_i // use slack variable s_i with coeff 1 a_i x >= b_i => a_i x + s_i = b_i // use surplus variable s_i with coeff -1 For each ">=" constraint, an artificial variable is introduced and put into the initial basis. These artificial variables must be removed from the basis during Phase I of the Two-Phase Simplex Algorithm. After this, or if there are no artificial variables, Phase II is used to find an optimal value for x and the optimum value for f.
Creates an MM-by-nn simplex tableau with
-- [0..M-1, 0..N-1]    = a (constraint matrix)
-- [0..M-1, N..M+N-1]  = s (slack/surplus variable matrix)
-- [0..M-1, M+N..nn-2] = r (artificial variable matrix)
-- [0..M-1, nn-1]      = b (limit/RHS vector)
-- [M, 0..nn-2]        = c (cost vector)
This class solves unconstrained Non-Linear Programming (NLP) problems using the Steepest Descent algorithm. Given a function 'f' and a starting point 'x', the algorithm computes the gradient and takes steps in the opposite direction. The algorithm iterates until it converges. The class assumes that partial derivative functions are not available unless explicitly given via the setDerivatives method.
dir_k = -gradient (x)
minimize f(x)
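A minimal, self-contained Scala sketch of the idea with a finite-difference gradient; a fixed step size stands in for the line search a full implementation would use, and eta, h and the iteration cap are illustrative assumptions:

    def steepestDescent (f: Array [Double] => Double, x0: Array [Double],
                         eta: Double = 0.01, h: Double = 1E-6, maxIt: Int = 1000): Array [Double] =
    {
        val x = x0.clone
        for (_ <- 1 to maxIt) {
            val fx = f (x)
            val g  = Array.tabulate (x.length) { j =>        // finite-difference gradient
                val xp = x.clone; xp(j) += h
                (f (xp) - fx) / h
            }
            for (j <- x.indices) x(j) -= eta * g(j)          // step along dir_k = -gradient (x)
        }
        x
    }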
This class performs an inexact line search on f to find a point x that exhibits (1) sufficient decrease (f(x) enough less than f(0)) and (2) a slope at x less steep than the slope at 0. That is, the line search looks for a value of x satisfying the two Wolfe conditions.
    f(x) <= f(0) + c1 * f'(0) * x      Wolfe condition 1 (Armijo condition)
    |f'(x)| <= |c2 * f'(0)|            Wolfe condition 2 (Strong version)
    f'(x) >= c2 * f'(0)                Wolfe condition 2 (Weak version, more robust)
It works on scalar functions (see WolfeLSTest). If starting with a vector function f(x), simply define a new function g(y) = x0 + direction * y (see WolfeLSTest2).
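A small Scala sketch of checking these conditions (weak version) at a trial step x, given f(0), f'(0), f(x) and f'(x); the defaults c1 = 1E-4 and c2 = 0.9 are conventional choices, not necessarily this class's:

    def wolfeOK (f0: Double, df0: Double, fx: Double, dfx: Double, x: Double,
                 c1: Double = 1E-4, c2: Double = 0.9): Boolean =
        fx  <= f0 + c1 * df0 * x &&                      // Wolfe condition 1 (Armijo)
        dfx >= c2 * df0                                  // Wolfe condition 2 (weak version)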
This object is used to test the BGSimplex class.
This object is used to test the ConjGradient class.
This object is used to test the DualSimplex class.
This object is used to test the GeneticAlgorithm class (unconstrained).
This object is used to test the GoldenSectionLS class on scalar functions.
This object is used to test the GoldenSectionLS class on vector functions.
This object is used to test the IntegerGoldenSectionLS class on scalar functions.
This object is used to test the IntegerLP class.
    real solution    x = (.8, 1.6), f = 8.8
    integer solution x = (2, 1),    f = 10
Linear Programming and Network Flows, Example 6.14
This object is used to test the IntegerLocalSearch class (unconstrained).
This object is used to test the IntegerLocalSearch class (constrained).
This object is used to test the IntegerNLP class.
    real solution    x = (.8, 1.6), f = 8.8
    integer solution x = (2, 1),    f = 10
Linear Programming and Network Flows, Example 6.14
This object is used to test the IntegerTabuSearch class (unconstrained).
This object is used to test the IntegerTabuSearch class (constrained).
The NLPTest1 object is used to test several Non-Linear Programming (NLP) algorithms on unconstrained problems.
Algorithms:
    sdcs - Steepest Descent with Custom Line Search
    sdgs - Steepest Descent with Golden Section Line Search
    prcg - Polak-Ribiere Conjugate Gradient with Golden Section Line Search
    sdws - Steepest Descent with Wolfe Line Search
    bfgs - Broyden–Fletcher–Goldfarb–Shanno with Wolfe Line Search
The NLPTest2 object is used to test several Non-Linear Programming (NLP) algorithms on constrained problems.
This object is used to test the NelderMeadSimplex class.
This object is used to test the NewtonRaphson class.
This object is used to test the QuadraticSimplex class.
This object is used to test the QuasiNewton class.
This object is used to test the RevisedSimplex class.
This object is used to test the Simplex2P class.
This object is used to test the Simplex class.
This object is used to test the SteepestDescent class.
This object is used to test the WolfeLS class on scalar functions.
This object is used to test the WolfeLS class on vector functions.
The minima package contains classes, traits and objects for optimization to find minima.