package maxima
The maxima package contains classes, traits and objects for optimization, i.e., for finding maxima of objective functions.
Type Members
- class CheckLP extends Error
The CheckLP class checks the solution to Linear Programming (LP) problems. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', it determines whether the values in the solution/decision vector 'x' maximize the objective function 'f(x)' while satisfying all of the constraints, i.e.,
maximize f(x) = c x
subject to a x <= b, x >= 0
It checks both the feasibility and the optimality of the solution.
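As an illustration of the feasibility part of this check, the following minimal stand-alone sketch (using plain Scala arrays and a hypothetical tolerance 'eps' rather than the class's actual API) tests a x <= b and x >= 0 for a candidate solution 'x':

    object CheckLPSketch extends App {
      val eps = 1e-9                                            // hypothetical round-off tolerance

      // feasibility: a x <= b (within eps) and x >= 0
      def isFeasible (a: Array [Array [Double]], b: Array [Double], x: Array [Double]): Boolean = {
        val xNonNeg = x.forall (_ >= -eps)                      // non-negativity: x_j >= 0
        val axOk    = a.indices.forall { i =>                   // resource limits: a_i x <= b_i
          val ax_i = (a(i) zip x).map { case (aij, xj) => aij * xj }.sum
          ax_i <= b(i) + eps
        }
        xNonNeg && axOk
      }

      // tiny example: x_0 + x_1 <= 4 and x_0 <= 3, candidate x = (3, 1)
      val a = Array (Array (1.0, 1.0), Array (1.0, 0.0))
      val b = Array (4.0, 3.0)
      val x = Array (3.0, 1.0)
      println (s"feasible? ${isFeasible (a, b, x)}")            // prints: feasible? true
    }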
- class ConjGradient extends Error
The ConjGradient class implements the Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.
dir_k = -gradient (x) + beta * dir_k-1
maximize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
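To make the direction update concrete, here is a minimal stand-alone sketch of the Polak-Ribiere weighting and the dir_k formula above, using plain Scala arrays; the gradient values and helper names are illustrative assumptions, not part of the ConjGradient API:

    object PolakRibiereSketch extends App {
      // dot product of two vectors
      def dot (u: Array [Double], v: Array [Double]): Double =
        (u zip v).map { case (a, b) => a * b }.sum

      // Polak-Ribiere weighting: beta = g_k . (g_k - g_km1) / (g_km1 . g_km1)
      def beta (gk: Array [Double], gkm1: Array [Double]): Double = {
        val diff = (gk zip gkm1).map { case (a, b) => a - b }
        dot (gk, diff) / dot (gkm1, gkm1)
      }

      // direction update: dir_k = -gradient (x) + beta * dir_km1
      def nextDir (gk: Array [Double], gkm1: Array [Double], dirKm1: Array [Double]): Array [Double] = {
        val b = beta (gk, gkm1)
        (gk zip dirKm1).map { case (g, d) => -g + b * d }
      }

      // gradients at two successive points (illustrative values only)
      val gkm1   = Array (2.0, -1.0)
      val gk     = Array (1.5, -2.0)
      val dirKm1 = gkm1.map (-_)                                // first direction is steepest descent
      println (nextDir (gk, gkm1, dirKm1).mkString ("dir_k = (", ", ", ")"))
    }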
- class GoldenSectionLS extends AnyRef
The GoldenSectionLS class performs a line search on f(x) to find a maximal value for 'f'. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from 'x1' (often 0) to 'xmax'. A guess for 'xmax' must be given, but it can be made larger during the expansion phase that occurs before the recursive golden section search is called. It works on scalar functions (see GoldenSectionLSTest). If starting with a vector function 'f(x)', simply define a new scalar function 'g(y) = f(x0 + direction * y)' (see GoldenSectionLSTest2).
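The idea of the recursive golden section phase can be sketched as follows; this is a stand-alone illustration (with an assumed tolerance and test function), not the GoldenSectionLS API, and for brevity it re-evaluates both interior points on each call instead of reusing one evaluation per iteration as the class does:

    object GoldenSectionSketch extends App {
      val gr  = (math.sqrt (5.0) - 1.0) / 2.0                   // golden ratio conjugate ~ 0.618
      val tol = 1e-6                                            // assumed convergence tolerance

      // recursively narrow [a, b] around the maximum of f
      def gsMax (f: Double => Double, a: Double, b: Double): Double = {
        if (b - a < tol) (a + b) / 2.0
        else {
          val c = b - gr * (b - a)                              // left interior point
          val d = a + gr * (b - a)                              // right interior point
          if (f(c) > f(d)) gsMax (f, a, d)                      // maximum lies in [a, d]
          else             gsMax (f, c, b)                      // maximum lies in [c, b]
        }
      }

      def f (x: Double): Double = -(x - 2.0) * (x - 2.0) + 3.0  // maximum at x = 2
      println (s"x* = ${gsMax (f, 0.0, 10.0)}")
    }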
- class Hungarian extends AnyRef
The Hungarian class is an O(n^3) implementation of the Hungarian algorithm (also known as the Kuhn-Munkres algorithm). It finds the maximum-cost set of pairings between 'm' x-nodes (workers) and 'n' y-nodes (jobs) such that each worker is assigned to one job and each job has at most one worker assigned, i.e., it solves the maximum-weighted bipartite graph matching problem.
maximize sum i = 0 .. m-1 { cost(x_i, y_i) }
Caveat: only works if 'm <= n' (i.e., there is at least one job for every worker).
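To make the objective concrete, the brute-force reference below enumerates every assignment for a tiny, made-up cost matrix and reports the maximum total cost; it only illustrates the matching problem and is not the O(n^3) algorithm the class implements:

    object AssignmentSketch extends App {
      // cost(i)(j) = value of assigning worker i to job j (illustrative values, m <= n)
      val cost = Array (Array (1.0, 4.0, 5.0),
                        Array (5.0, 7.0, 6.0))

      val m = cost.length                                       // number of workers
      val n = cost(0).length                                    // number of jobs

      // each permutation's first m entries give the workers distinct jobs; keep the best total
      val best = (0 until n).permutations
                            .map (p => (0 until m).map (i => cost(i)(p(i))).sum)
                            .max
      println (s"maximum total cost = $best")                   // best here: 5 + 7 = 12
    }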
- class IntegerProg extends AnyRef
The IntegerProg class solves Integer Linear Programming (ILP) and Mixed Integer Linear Programming (MILP) problems recursively using the Simplex algorithm. First, an LP problem is solved. If the optimal solution vector 'x' is entirely integer valued, the ILP is solved. If not, pick the first 'x_j' that is not integer valued, define two new LP problems that bound 'x_j' to the integer below and above, respectively, and branch by solving each of these LP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may optionally be provided for MILP problems. FIX: Use the Dual Simplex Algorithm for better performance.
Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that maximize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
maximize f(x) = c x
subject to a x <= b, x >= 0, x must be a vector of integers
Make 'b_i' negative to indicate a '>=' constraint.
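The branching step can be sketched as follows; the constraint data and the fractional LP optimum are made-up illustrative values, and the fragment only shows how the two child problems bounding a fractional 'x_j' are formed (it does not call an LP solver):

    object BranchSketch extends App {
      // append the extra constraint  sign * x_j <= sign * bound  as a new row of (a, b)
      def addBound (a: Array [Array [Double]], b: Array [Double], j: Int, bound: Double, sign: Double)
                   : (Array [Array [Double]], Array [Double]) = {
        val row = Array.fill (a(0).length)(0.0); row(j) = sign
        (a :+ row, b :+ sign * bound)
      }

      // LP relaxation data and its fractional optimum (illustrative values only)
      val a = Array (Array (1.0, 1.0), Array (3.0, 2.0))
      val b = Array (6.0, 14.0)
      val x = Array (2.4, 3.0)                                  // optimum of the LP relaxation

      val eps = 1e-9
      val j   = x.indexWhere (xj => math.abs (xj - math.round (xj)) > eps)   // first fractional x_j
      if (j >= 0) {
        val (aLo, bLo) = addBound (a, b, j, math.floor (x(j)),  1.0)   // child 1: x_j <= floor (x_j)
        val (aHi, bHi) = addBound (a, b, j, math.ceil  (x(j)), -1.0)   // child 2: x_j >= ceil (x_j)
        println (s"branch on x_$j: x_$j <= ${math.floor (x(j))} or x_$j >= ${math.ceil (x(j))}")
        println (s"child LPs now have ${aLo.length} and ${aHi.length} constraint rows")
        // each child LP is solved in turn; branches that cannot beat the best
        // integer solution found so far are pruned (Branch and Bound)
      }
    }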
- class RevisedSimplex extends Error
The RevisedSimplex class solves Linear Programming (LP) problems using the Revised Simplex Algorithm. Given a constraint matrix 'a', constant vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that maximize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
maximize f(x) = c x
subject to a x <= b, x >= 0
The revised algorithm has benefits over the standard Simplex Algorithm (less memory and a reduced chance of round-off errors).
- See also
math.uc.edu/~halpern/Linear.progr.folder/Handouts.lp.02/Revisedsimplex.pdf
- class Simplex2P extends Error
The Simplex2P class solves Linear Programming (LP) problems using a tableau-based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that maximize the objective function 'f(x)', while satisfying all of the constraints, i.e.,
maximize f(x) = c x
subject to a x <= b, x >= 0
In case of 'a_i x >= b_i', use '-b_i' as an indicator of a '>=' constraint. The program will flip such negative 'b_i' back to positive and use a surplus and an artificial variable instead of the usual slack variable, i.e.,
a_i x <= b_i => a_i x + s_i = b_i // use slack variable s_i with coefficient 1
a_i x >= b_i => a_i x - s_i = b_i // use surplus variable s_i with coefficient -1
For each '>=' constraint, an artificial variable is introduced and put into the initial basis. These artificial variables must be removed from the basis during Phase I of the Two-Phase Simplex Algorithm. After this, or if there are no artificial variables, Phase II is used to find an optimal value for 'x' and the optimum value for 'f'.
Creates an MM-by-nn simplex tableau with
-- [0..M-1, 0..N-1]    = a (constraint matrix)
-- [0..M-1, N..M+N-1]  = s (slack/surplus variable matrix)
-- [0..M-1, M+N..nn-2] = r (artificial variable matrix)
-- [0..M-1, nn-1]      = b (limit/RHS vector)
-- [M, 0..nn-2]        = c (cost vector)
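The tableau layout above can be sketched as follows; this stand-alone fragment (plain Scala arrays, made-up constraint data) only builds the initial tableau and does not perform the Phase I/II pivoting:

    object TableauSketch extends App {
      // constraint data: a negative b_i marks a '>=' constraint (as described above)
      val a = Array (Array (2.0, 1.0), Array (1.0, 3.0))
      val b = Array (8.0, -6.0)                                 // second row is a '>=' constraint
      val c = Array (3.0, 4.0)

      val M  = a.length                                         // number of constraints
      val N  = a(0).length                                      // number of decision variables
      val R  = b.count (_ < 0.0)                                // number of artificial variables
      val nn = N + M + R + 1                                    // total number of tableau columns
      val t  = Array.ofDim [Double] (M + 1, nn)                 // the MM-by-nn tableau

      var r = 0                                                 // index of the next artificial column
      for (i <- 0 until M) {
        val geq = b(i) < 0.0                                    // '>=' constraint?
        for (j <- 0 until N) t(i)(j) = a(i)(j)                  // [0..M-1, 0..N-1] = a
        t(i)(N + i) = if (geq) -1.0 else 1.0                    // slack (+1) or surplus (-1) variable
        if (geq) { t(i)(N + M + r) = 1.0; r += 1 }              // artificial variable for '>=' rows
        t(i)(nn - 1) = math.abs (b(i))                          // flip b_i back to positive (RHS column)
      }
      for (j <- 0 until N) t(M)(j) = c(j)                       // cost row (costs of s and r are 0)

      t.foreach (row => println (row.map (v => f"$v%6.1f").mkString))
    }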
Value Members
- object ConjGradientTest extends App
The ConjGradientTest object is used to test the ConjGradient class.
- object GoldenSectionLSTest extends App
The GoldenSectionLSTest object is used to test the GoldenSectionLS class on scalar functions.
- object GoldenSectionLSTest2 extends App
The GoldenSectionLSTest2 object is used to test the GoldenSectionLS class on vector functions.
- object Hungarian
The Hungarian companion object supplies factory methods to create a cost matrix and build a Hungarian object suitable for solving an assignment problem.
- object HungarianTest extends App
The HungarianTest object is used to test the Hungarian class.
> runMain scalation.maxima.HungarianTest
- object HungarianTest2 extends App
The HungarianTest2 object is used to test the Hungarian class.
> runMain scalation.maxima.HungarianTest2
- object IntegerProgTest extends App
The IntegerProgTest object is used to test the IntegerProg class.
real solution    x = (2.25, 3.75), f = 41.25
integer solution x = (0, 5),       f = 40
- See also
web.mit.edu/15.053/www/AMP-Chapter-09.pdf
- object RevisedSimplexTest extends App
The RevisedSimplexTest object tests the Revised Simplex Algorithm class with the following maximization problem:
Maximize   z = 2x_0 + 3x_1 + 4x_2
Subject to     3x_0 + 2x_1 + 1x_2 + 1y_3 + 0y_4 = 10
               2x_0 + 5x_1 + 3x_2 + 0y_3 + 1y_4 = 15
where z is the objective variable, x are the decision variables and y are slack variables.
- object Simplex2PTest extends App
The Simplex2PTest object is used to test the Simplex2P class.