package scalation.maxima

The maxima package contains classes, traits and objects for optimization to find maxima.

Type Members

  1. class CheckLP extends Error

    This class checks the solution to Linear Programming (LP) problems. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', determine whether the values for the solution/decision vector 'x' maximize the objective function f(x), while satisfying all of the constraints, i.e.,

    maximize f(x) = c x subject to a x <= b, x >= 0

    Check the feasibility and optimality of the solution.
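
    A minimal standalone sketch of the feasibility side of such a check, using plain Scala arrays rather than the CheckLP API (the data values, tolerance and helper names are illustrative assumptions; verifying optimality additionally requires the optimum reported by the solver and is not shown):

      object CheckSketch extends App
      {
          // illustrative problem data: maximize f(x) = c x subject to a x <= b, x >= 0
          val a = Array (Array (1.0, 2.0),
                         Array (3.0, 1.0))
          val b = Array (8.0, 9.0)
          val c = Array (3.0, 4.0)

          val tol = 1e-9                                     // numerical tolerance for the checks

          def dot (u: Array[Double], v: Array[Double]): Double =
              (u zip v).map { case (ui, vi) => ui * vi }.sum

          // feasibility: every constraint a_i x <= b_i (within tol) and x >= 0 (within tol)
          def isFeasible (x: Array[Double]): Boolean =
              a.indices.forall (i => dot (a(i), x) <= b(i) + tol) && x.forall (_ >= -tol)

          val x = Array (2.0, 3.0)                           // candidate solution to be checked
          println (s"feasible = ${isFeasible (x)}, f(x) = ${dot (c, x)}")
      } // CheckSketch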

  2. class ConjGradient extends Error

    Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.

    dir_k = -gradient(x) + beta * dir_{k-1}

    maximize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
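
    A minimal standalone sketch of the Polak-Ribiere direction update described above, written with plain Scala arrays and a numerical gradient rather than the ConjGradient API (the fixed step size, iteration count and the max(0, .) clipping of beta are illustrative assumptions; maximizing f is handled by applying the descent form to -f):

      object PRCGSketch extends App
      {
          type Vec = Array[Double]

          def dot (u: Vec, v: Vec): Double = (u zip v).map { case (a, b) => a * b }.sum

          // numerical gradient of f at x via central differences (h is an illustrative step size)
          def grad (f: Vec => Double, x: Vec, h: Double = 1e-6): Vec =
              x.indices.map { j =>
                  val xp = x.clone; xp(j) += h
                  val xm = x.clone; xm(j) -= h
                  (f(xp) - f(xm)) / (2.0 * h)
              }.toArray

          // Polak-Ribiere weight beta = g_k . (g_k - g_{k-1}) / (g_{k-1} . g_{k-1}),
          // clipped at 0 (a common restart safeguard, an assumption of this sketch)
          def betaPR (gk: Vec, gkm1: Vec): Double =
              math.max (0.0, dot (gk, (gk zip gkm1).map { case (a, b) => a - b }) / dot (gkm1, gkm1))

          // maximize f by applying the descent form to -f:  dir_k = -gradient(x) + beta * dir_{k-1}
          def maximize (f: Vec => Double, x0: Vec, steps: Int = 200, eta: Double = 0.1): Vec =
          {
              val nf: Vec => Double = x => -f(x)             // negate so the steepest descent form applies
              var x   = x0.clone
              var g   = grad (nf, x)
              var dir = g.map (gi => -gi)                    // first direction: steepest descent of -f
              for (_ <- 1 to steps) {
                  x = (x zip dir).map { case (xi, di) => xi + eta * di }
                  val gNew = grad (nf, x)
                  val beta = betaPR (gNew, g)
                  dir = (gNew zip dir).map { case (gi, di) => -gi + beta * di }
                  g   = gNew
              } // for
              x
          } // maximize

          // example: maximize f(x) = -(x_0 - 1)^2 - (x_1 - 2)^2, whose maximum is at (1, 2)
          val xOpt = maximize (x => -(x(0) - 1.0) * (x(0) - 1.0) - (x(1) - 2.0) * (x(1) - 2.0),
                               Array (0.0, 0.0))
          println (s"x ~ (${xOpt(0)}, ${xOpt(1)})")
      } // PRCGSketch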

  3. class GoldenSectionLS extends AnyRef

    This class performs a line search on f(x) to find a maximal value for f. It requires no derivatives and only one functional evaluation per iteration. A search is conducted from x1 (often 0) to xmax. A guess for xmax must be given, but it can be made larger during the expansion phase, which occurs before the recursive golden section search is called. It works on scalar functions (see GoldenSectionLSTest). If starting with a vector function f(x), simply define a new scalar function g(y) = f(x0 + direction * y) (see GoldenSectionLSTest2).
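
    A minimal standalone sketch of a recursive golden section search for a maximum of a unimodal scalar function (the interval, tolerance and example function are illustrative assumptions; for clarity the sketch re-evaluates f at both interior points each step and omits the expansion phase, whereas GoldenSectionLS needs only one new evaluation per iteration):

      object GoldenSectionSketch extends App
      {
          val gr = (math.sqrt (5.0) - 1.0) / 2.0             // golden ratio - 1, about 0.618

          // recursively narrow [x1, x2] around a maximum of a unimodal f
          def search (f: Double => Double, x1: Double, x2: Double, tol: Double = 1e-7): Double =
              if (x2 - x1 < tol) (x1 + x2) / 2.0
              else {
                  val a = x2 - gr * (x2 - x1)                // left interior point
                  val b = x1 + gr * (x2 - x1)                // right interior point
                  if (f(a) < f(b)) search (f, a, x2, tol)    // maximum lies in [a, x2]
                  else             search (f, x1, b, tol)    // maximum lies in [x1, b]
              }

          // illustrative example: maximize f(x) = -(x - 4)^2 on [0, 10]; the maximizer is x = 4
          println (search (x => -(x - 4.0) * (x - 4.0), 0.0, 10.0))
      } // GoldenSectionSketch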

  4. class Hungarian extends AnyRef

    This is an O(n^3) implementation of the Hungarian algorithm (or Kuhn-Munkres algorithm). Find the maximum cost set of pairings between m x-nodes (workers) and n y-nodes (jobs) such that each worker is assigned to one job and each job has at most one worker assigned. It solves the maximum-weighted bipartite graph matching problem.

    maximize sum i = 0 .. m-1 { cost(x_i, y_i) }

    Caveat: only works if m <= n (i.e., there is at least one job for every worker).
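
    For small m and n, the objective can be cross-checked by brute force over all injective worker-to-job assignments; the standalone sketch below (the cost matrix is an illustrative assumption) shows the quantity that the O(n^3) Hungarian algorithm maximizes without enumerating permutations:

      object AssignmentSketch extends App
      {
          // illustrative cost(i)(j): value of assigning worker i to job j (m = 3 workers, n = 4 jobs)
          val cost = Array (Array (7.0, 4.0, 3.0, 6.0),
                            Array (6.0, 8.0, 5.0, 9.0),
                            Array (9.0, 4.0, 4.0, 2.0))
          val m = cost.length
          val n = cost(0).length                             // per the caveat above, requires m <= n

          // brute force over injective assignments worker -> job, maximizing sum_i cost(i, job(i))
          val assignments = (0 until n).permutations.map (_.take (m))
          val best  = assignments.maxBy (job => job.zipWithIndex.map { case (j, i) => cost(i)(j) }.sum)
          val total = best.zipWithIndex.map { case (j, i) => cost(i)(j) }.sum

          println (s"jobs = ${best.mkString (", ")}, total cost = $total")
      } // AssignmentSketch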

  5. class IntegerProg extends AnyRef

    This class solves Integer Linear Programming (ILP) and Mixed Integer Linear Programming (MILP) problems recursively using the Simplex algorithm. First, an LP problem is solved. If the optimal solution vector x is entirely integer valued, the ILP is solved. If not, pick the first x_j that is not integer valued. Define two new LP problems which bound x_j to the integer below and above, respectively. Branch by solving each of these LP problems in turn. Prune by not exploring branches less optimal than the currently best integer solution. This technique is referred to as Branch and Bound. An exclusion set may be optionally provided for MILP problems. TODO: Use the Dual Simplex Algorithm for better performance.

    Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that maximize the objective function f(x), while satisfying all of the constraints, i.e.,

    maximize f(x) = c x subject to a x <= b, x >= 0, x must be a vector of integers

    Make b_i negative to indicate a ">=" constraint
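
    A minimal standalone sketch of the Branch and Bound idea described above, using a 0/1 knapsack whose LP relaxation has a simple greedy solution instead of the simplex-based relaxation used by IntegerProg (the data and names are illustrative assumptions): a fractional variable is branched to its floor and ceiling (here 0 and 1), and branches whose relaxation bound cannot beat the incumbent are pruned.

      object BranchAndBoundSketch extends App
      {
          // illustrative 0/1 knapsack: maximize value . x subject to weight . x <= cap, x_i in {0, 1}
          val value  = Array (10.0, 13.0, 8.0, 7.0)
          val weight = Array ( 4.0,  6.0, 5.0, 3.0)
          val cap    = 9.0
          val n      = value.length

          // LP relaxation (0 <= x_i <= 1) solved greedily by value/weight ratio, honoring the
          // variables already fixed to 0 or 1 on this branch; returns (x, f) or None if infeasible
          def relax (fixed: Map[Int, Int]): Option[(Array[Double], Double)] =
          {
              val x    = Array.fill (n)(0.0)
              var room = cap
              for ((j, v) <- fixed) { x(j) = v; room -= v * weight(j) }
              if (room < 0.0) return None
              val free = (0 until n).filterNot (fixed.contains).sortBy (j => -value(j) / weight(j))
              for (j <- free if room > 0.0) {
                  val take = math.min (1.0, room / weight(j))
                  x(j) = take; room -= take * weight(j)
              } // for
              Some ((x, (x zip value).map { case (xi, vi) => xi * vi }.sum))
          } // relax

          var bestF = Double.NegativeInfinity
          var bestX = Array.empty [Double]

          def branch (fixed: Map[Int, Int]): Unit =
              relax (fixed) match {
                  case None => ()                                          // infeasible branch: prune
                  case Some ((x, f)) =>
                      if (f <= bestF) ()                                   // bound: cannot beat incumbent
                      else x.indexWhere (xi => math.abs (xi - math.round (xi)) > 1e-9) match {
                          case -1 => bestF = f; bestX = x                  // all integer: new incumbent
                          case j  => branch (fixed + (j -> 1))             // branch with x_j at its ceiling (1)
                                     branch (fixed + (j -> 0))             // branch with x_j at its floor (0)
                      } // match
              } // match

          branch (Map.empty)
          println (s"best integer x = ${bestX.mkString (", ")}, f = $bestF")
      } // BranchAndBoundSketch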

  6. class RevisedSimplex extends Error

    This class solves Linear Programming (LP) problems using the Revised Simplex Algorithm. Given a constraint matrix a, constant vector b and cost vector c, find values for the solution/decision vector x that maximize the objective function f(x), while satisfying all of the constraints, i.e.,

    maximize f(x) = c x subject to a x <= b, x >= 0

    The revised algorithm has benefits over the standard Simplex Algorithm: it requires less memory and reduces the chance of round-off errors.

    See also

    math.uc.edu/~halpern/Linear.progr.folder/Handouts.lp.02/Revisedsimplex.pdf

  7. class Simplex2P extends Error

    This class solves Linear Programming (LP) problems using a tableau based Simplex Algorithm. Given a constraint matrix 'a', limit/RHS vector 'b' and cost vector 'c', find values for the solution/decision vector 'x' that maximize the objective function f(x), while satisfying all of the constraints, i.e.,

    maximize f(x) = c x subject to a x <= b, x >= 0

    In case of "a_i x >= b_i", use -b_i as an indicator of a ">=" constraint. The program will flip such negative b_i back to positive, and use a surplus variable and an artificial variable instead of the usual slack variable, i.e.,

    a_i x <= b_i  =>  a_i x + s_i = b_i    // use slack variable s_i with coefficient 1
    a_i x >= b_i  =>  a_i x + s_i = b_i    // use surplus variable s_i with coefficient -1

    For each ">=" constraint, an artificial variable is introduced and put into the initial basis. These artificial variables must be removed from the basis during Phase I of the Two-Phase Simplex Algorithm. After this, or if there are no artificial variables, Phase II is used to find an optimal value for x and the optimum value for f.

    Creates an MM-by-nn simplex tableau with

    [0..M-1, 0..N-1]    = a (constraint matrix)
    [0..M-1, N..M+N-1]  = s (slack/surplus variable matrix)
    [0..M-1, M+N..nn-2] = r (artificial variable matrix)
    [0..M-1, nn-1]      = b (limit/RHS vector)
    [M, 0..nn-2]        = c (cost vector)
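
    A minimal standalone sketch of this tableau layout, using a plain 2-D array rather than the Simplex2P internals (the data values are illustrative assumptions, and the Phase I handling of the cost row for artificial variables is omitted):

      object TableauSketch extends App
      {
          // illustrative data: constraint matrix a (M x N), limit vector b (M) and cost vector c (N);
          // a negative b_i marks a ">=" constraint, as described above
          val a = Array (Array (3.0, 2.0),
                         Array (1.0, 4.0))
          val b = Array (12.0, -4.0)                         // the second row is a ">=" constraint
          val c = Array (2.0, 5.0)

          val M  = a.length
          val N  = a(0).length
          val R  = b.count (_ < 0.0)                         // one artificial variable per ">=" constraint
          val MM = M + 1                                     // constraint rows plus the cost row
          val nn = N + M + R + 1                             // a | slack/surplus | artificial | b

          val t   = Array.ofDim [Double] (MM, nn)
          var art = 0                                        // index of the next artificial column
          for (i <- 0 until M) {
              val ge = b(i) < 0.0
              for (j <- 0 until N) t(i)(j) = a(i)(j)         // [0..M-1, 0..N-1]    = a
              t(i)(N + i) = if (ge) -1.0 else 1.0            // [0..M-1, N..M+N-1]  = s (slack +1 / surplus -1)
              if (ge) { t(i)(M + N + art) = 1.0; art += 1 }  // [0..M-1, M+N..nn-2] = r (artificial)
              t(i)(nn - 1) = math.abs (b(i))                 // [0..M-1, nn-1]      = b (flipped positive)
          } // for
          for (j <- 0 until N) t(M)(j) = c(j)                // [M, 0..nn-2]        = c (cost row)

          t.foreach (row => println (row.map (v => f"$v%6.1f").mkString (" ")))
      } // TableauSketch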

Value Members

  1. object ConjGradientTest extends App

    This object is used to test the ConjGradient class.

  2. object GoldenSectionLSTest extends App

    This object is used to test the GoldenSectionLS class on scalar functions.

  3. object GoldenSectionLSTest2 extends App

    This object is used to test the GoldenSectionLS class on vector functions.

  4. object Hungarian extends App

    This object is used to test the Hungarian class.

  5. object IntegerProgTest extends App

    This object is used to test the IntegerProg class.

    real solution    x = (2.25, 3.75), f = 41.25
    integer solution x = (0, 5), f = 40

    See also

    web.mit.edu/15.053/www/AMP-Chapter-09.pdf

  6. object RevisedSimplexTest extends App

    Test the Revised Simplex Algorithm class with the following maximization problem:

    Maximize    z = 2x_0 + 3x_1 + 4x_2
    Subject to      3x_0 + 2x_1 + 1x_2 + 1y_3 + 0y_4 = 10
                    2x_0 + 5x_1 + 3x_2 + 0y_3 + 1y_4 = 15

    where z is the objective variable, x are the decision variables and y are slack variables.
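
    Since the problem is tiny, the optimum a simplex-type solver should report can be cross-checked by enumerating all basic feasible solutions of the equality-form system; the standalone sketch below does this with Cramer's rule (it is only a verification aid, not the Revised Simplex Algorithm, and its names are illustrative):

      object BasicSolutionsSketch extends App
      {
          // equality-form data from the problem above: columns x_0, x_1, x_2, y_3, y_4
          val a = Array (Array (3.0, 2.0, 1.0, 1.0, 0.0),
                         Array (2.0, 5.0, 3.0, 0.0, 1.0))
          val b = Array (10.0, 15.0)
          val c = Array (2.0, 3.0, 4.0, 0.0, 0.0)            // slack variables carry zero cost
          val n = c.length

          // solve the 2x2 system given by basic columns (j1, j2) via Cramer's rule;
          // return the full solution vector if it is feasible (non-negative), else None
          def basic (j1: Int, j2: Int): Option[Array[Double]] =
          {
              val det = a(0)(j1) * a(1)(j2) - a(0)(j2) * a(1)(j1)
              if (math.abs (det) < 1e-12) return None
              val v1 = (b(0) * a(1)(j2) - a(0)(j2) * b(1)) / det
              val v2 = (a(0)(j1) * b(1) - b(0) * a(1)(j1)) / det
              if (v1 < 0.0 || v2 < 0.0) return None
              val x = Array.fill (n)(0.0); x(j1) = v1; x(j2) = v2
              Some (x)
          } // basic

          // enumerate all basic feasible solutions and keep the one with the largest z = c . x
          val best = (for (j1 <- 0 until n; j2 <- j1 + 1 until n; x <- basic (j1, j2)) yield
                         (x, (x zip c).map { case (xi, ci) => xi * ci }.sum)).maxBy (_._2)

          println (s"x = ${best._1.mkString (", ")}, z = ${best._2}")
      } // BasicSolutionsSketch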

  7. object Simplex2PTest extends App

    This object is used to test the Simplex2P class.
