NewtonRaphson

scalation.optimization.NewtonRaphson
class NewtonRaphson(f: FunctionS2S) extends Minimize

The NewtonRaphson class finds roots (zeros) of a one-dimensional (scalar) function 'f'. The solve methods find zeros of 'f', while the optimize method finds a local optimum by applying the same root-finding iteration to the first derivative, with the second derivative taking the place of the first.
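For orientation, here is a minimal usage sketch. The import path follows the fully qualified name above; the test function and starting guesses are illustrative. The solve methods apply the Newton-Raphson root iteration x_(k+1) = x_k - f(x_k) / f'(x_k), and optimize applies the analogous step x_(k+1) = x_k - f'(x_k) / f''(x_k) to locate a stationary point.

    import scalation.optimization.NewtonRaphson

    // f(x) = (x - 2)^2 - 1 has roots at x = 1 and x = 3 and a minimum at x = 2
    val f: Double => Double = x => (x - 2.0) * (x - 2.0) - 1.0

    val nr = new NewtonRaphson(f)

    val root = nr.solve(2.5)            // root-finding from x0 = 2.5 (the root at x = 3 is nearest)
    val opt  = nr.optimize(0.0)         // optimization from x0 = 0.0 (the minimum is at x = 2)

    println(s"root = $root, optimum = $opt")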

Value parameters

f

the scalar function to find roots/optima of

Attributes

Supertypes
trait Minimize
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def optimize(x0: Double): (Double, Double)

Optimize the function by finding a local optimum close to the starting point/guess 'x0'. This version numerically approximates the first and second derivatives.

Value parameters

x0

the starting point/guess

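A sketch of calling this overload. The (Double, Double) result is assumed here to pair the optimal function value with the optimizing point; the exact ordering is not stated above.

    import scalation.optimization.NewtonRaphson

    val f: Double => Double = x => (x - 3.0) * (x - 3.0) + 1.0   // minimum value 1 at x = 3

    val nr     = new NewtonRaphson(f)
    val result = nr.optimize(0.0)        // start the search from x0 = 0.0
    println(result)                      // pair of optimal value and optimizer (assumed order)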

def solve(x0: Double, df: FunctionS2S): Double

Solve for/find a root close to the starting point/guess 'x0'. This version passes in a function for the derivative.

Value parameters

df

the derivative of the function

x0

the starting point/guess

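A sketch of calling this overload with an exact derivative, which avoids the numerical approximation used by the one-argument version (the function and starting guess are illustrative).

    import scalation.optimization.NewtonRaphson

    val f:  Double => Double = x => x * x - 4.0     // roots at x = -2 and x = 2
    val df: Double => Double = x => 2.0 * x         // exact derivative f'(x) = 2x

    val nr   = new NewtonRaphson(f)
    val root = nr.solve(1.0, df)                    // from x0 = 1.0 the iteration moves toward x = 2
    println(s"root = $root")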

def solve(x0: Double): Double

Solve for/find a root close to the starting point/guess 'x0'. This version numerically approximates the derivative.

Value parameters

x0

the starting point/guess

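A sketch of the one-argument version, where the derivative is approximated internally (the approximation scheme and stopping tolerance are implementation details of the class).

    import scalation.optimization.NewtonRaphson

    val f: Double => Double = x => math.cos(x) - x   // single root near x ≈ 0.739

    val nr   = new NewtonRaphson(f)
    val root = nr.solve(0.5)                         // derivative approximated numerically inside solve
    println(s"root = $root")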

def solve(x0: VectorD, α: Double): FuncVec

Solve the Non-Linear Programming (NLP) problem by starting at x0 and iteratively moving down in the search space to a minimal point. Return the optimal point/vector x and its objective function value.

Value parameters

x0

the starting point

α

the current learning rate

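A sketch of the vector form inherited from Minimize. It assumes VectorD resides in scalation.mathstat and that FuncVec pairs the objective value with the optimal point; since 'f' is a scalar function, x0 is given as a one-element vector, and α = 1.0 is an illustrative learning rate.

    import scalation.mathstat.VectorD
    import scalation.optimization.NewtonRaphson

    val f: Double => Double = x => (x - 2.0) * (x - 2.0)    // scalar objective with minimum at x = 2

    val nr = new NewtonRaphson(f)
    val (fx, x) = nr.solve(VectorD(0.0), 1.0)               // assumed FuncVec = (value, point)
    println(s"optimal value = $fx at x = $x")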