the objective function to be maximized
the constraint function to be satisfied, if any
whether the constraint is an inequality (g(x) <= 0) or an equality (g(x) == 0)
Compute the beta function using the Polak-Ribiere (PR) technique. The function determines how much of the prior direction is mixed in with -gradient (sketched below).
the gradient at the current point
the gradient at the next point
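A minimal Scala sketch of the PR beta computation, beta = g2 . (g2 - g1) / (g1 . g1), using plain arrays rather than the library's vector type; the helper names are illustrative assumptions, not this library's code:

    // Illustrative sketch of Polak-Ribiere beta: g2 . (g2 - g1) / (g1 . g1)
    def dot (a: Array [Double], b: Array [Double]): Double =
        a.zip (b).map { case (ai, bi) => ai * bi }.sum

    def betaPR (g1: Array [Double], g2: Array [Double]): Double = {
        val diff = g2.zip (g1).map { case (g2i, g1i) => g2i - g1i }
        dot (g2, diff) / dot (g1, g1)        // often clamped at 0 (the PR+ variant)
    }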
The objective function f re-scaled by a weighted penalty, if constrained.
the coordinate values of the current point
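One possible form of such a penalized objective, sketched under the assumption of a quadratic penalty and a fixed weight (both are assumptions, not necessarily this library's scheme); since f is maximized, a violation of g(x) <= 0 reduces the objective:

    // Illustrative penalized objective (the quadratic penalty and WEIGHT are assumed)
    val WEIGHT = 1000.0                              // hypothetical penalty weight

    def fg (f: Array [Double] => Double, g: Array [Double] => Double,
            x: Array [Double]): Double = {
        val viol = math.max (g (x), 0.0)             // violation of g(x) <= 0
        f (x) - WEIGHT * viol * viol                 // penalize since f is maximized
    }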
Show the flaw by printing the error message.
the method where the error occurred
the error message
Perform an inexact (e.g., WolfeLS) or exact (e.g., GoldenSectionLS) line search in the direction 'dir', returning the distance 'z' to move in that direction.
the current point
the direction to move in
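A compact sketch of an exact golden-section line search over phi(z) = fg(x + dir * z), in the spirit of the exact search described above; the interval bound and tolerance are illustrative assumptions:

    // Illustrative golden-section line search maximizing phi on [0, zMax]
    def goldenLS (phi: Double => Double, zMax: Double = 2.0,
                  tol: Double = 1e-5): Double = {
        val gr = (math.sqrt (5.0) - 1.0) / 2.0       // golden ratio conjugate ~0.618
        var a  = 0.0
        var b  = zMax
        while (b - a > tol) {
            val c = b - gr * (b - a)                 // left interior point
            val d = a + gr * (b - a)                 // right interior point
            if (phi (c) > phi (d)) b = d else a = c  // keep bracket holding the maximum
        }
        (a + b) / 2.0                                // distance z to move along dir
    }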
Set the partial derivative functions. If these functions are available, they are more efficient and more accurate than estimating the values using difference quotients (the default approach).
the array of partial derivative functions
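For example, for f(x) = x0^2 + 2 x1^2 the exact partials are 2 x0 and 4 x1; a sketch of supplying them as an array of functions (the element type is an assumption about the expected signature):

    // Illustrative exact partial derivatives for f(x) = x0^2 + 2 x1^2
    // (the function type Array [Double] => Double is an assumed signature)
    val df: Array [Array [Double] => Double] = Array (
        (x: Array [Double]) => 2.0 * x(0),           // df/dx0
        (x: Array [Double]) => 4.0 * x(1))           // df/dx1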
Use the Steepest-Descent algorithm rather than the default PR-CG algorithm.
Solve the following Non-Linear Programming (NLP) problem using PR-CG: max { f(x) | g(x) <= 0 }. To use explicit functions for gradient, replace 'gradient (fg, x._1 + s)' with 'gradientD (df, x._1 + s)'.
the starting point
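A skeletal sketch of the iteration this method performs, reusing betaPR and goldenLS from the sketches above; x0 (the starting point), grad (the gradient estimator), fgx (the penalized objective as a one-argument function), and the stopping tolerance are assumed placeholders:

    // Illustrative PR-CG loop (not the library's actual code)
    // assumes: x0, grad, fgx, plus betaPR and goldenLS from earlier sketches
    def norm (v: Array [Double]): Double = math.sqrt (v.map (e => e * e).sum)

    var x   = x0                                     // current point
    var g1  = grad (x)                               // gradient at current point
    var dir = g1.map (-_)                            // start with -gradient
    while (norm (g1) > 1e-6) {                       // assumed stopping rule
        val z  = goldenLS (s => fgx (x.zip (dir).map { case (xi, di) => xi + s * di }))
        x      = x.zip (dir).map { case (xi, di) => xi + z * di }
        val g2 = grad (x)                            // gradient at next point
        val b  = math.max (0.0, betaPR (g1, g2))     // clamped PR beta (PR+)
        dir    = g2.zip (dir).map { case (gi, di) => -gi + b * di }
        g1     = g2
    }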
Polak-Ribiere Conjugate Gradient (PR-CG) Algorithm for solving Non-Linear Programming (NLP) problems. PR-CG determines a search direction as a weighted combination of the steepest descent direction (-gradient) and the previous direction. The weighting is set by the beta function, which in this implementation uses the Polak-Ribiere technique.
dir_k = -gradient (x) + beta * dir_(k-1)
maximize f(x) subject to g(x) <= 0 [ optionally g(x) == 0 ]
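A hypothetical usage, with the constructor and method names assumed purely for illustration (the actual API may differ):

    // Hypothetical usage (class and method names are assumptions):
    // maximize f(x) = -(x0 - 3)^2 - (x1 - 4)^2  subject to  g(x) = x0 + x1 - 10 <= 0
    val f = (x: Array [Double]) => -(x(0) - 3.0) * (x(0) - 3.0) - (x(1) - 4.0) * (x(1) - 4.0)
    val g = (x: Array [Double]) => x(0) + x(1) - 10.0
    // val solver = new ConjGradient (f, g, ineq = true)   // assumed constructor
    // val xOpt   = solver.solve (Array (0.0, 0.0))        // starting point x0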