Optimizer

scalation.modeling.neuralnet.Optimizer
See the Optimizer companion object

The Optimizer trait provides methods to optimize and auto_optimize parameters: given training data x and y for a Neural Network, fit the parameters b.

Attributes

Companion
object
Supertypes
trait StoppingRule
trait MonitorLoss
class Object
trait Matchable
class Any
Members list

Value members

Abstract methods

def optimize(x: MatrixD, y: MatrixD, b: NetParams, eta_: Double, f: Array[AFF]): (Double, Int)

Given training data x and y for a Neural Network, fit the parameters b, returning the value of the loss function and the number of epochs.

Value parameters

b

the array of parameters (weights & biases) between every two adjacent layers

eta_

the initial learning/convergence rate

f

the array of activation function family for every two adjacent layers

x

the m-by-n input matrix (training data consisting of m input vectors)

y

the m-by-ny output matrix (training data consisting of m output vectors)

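The sketch below illustrates the shape a concrete subclass might give optimize: an epoch loop that updates b, records the loss with collectLoss, and applies stopWhen for early stopping. Here gradientStep and the maxEpochs limit are hypothetical stand-ins, not ScalaTion API; only the (loss, epochs) return contract and the two trait calls come from this page.

    // Minimal sketch of a possible optimize implementation in a subclass.
    // gradientStep is HYPOTHETICAL (updates b in place, returns sse).
    def optimize (x: MatrixD, y: MatrixD, b: NetParams, eta_ : Double,
                  f: Array [AFF]): (Double, Int) =
        val maxEpochs = 400                                // assumed epoch limit
        var (sse, epoch, stop) = (Double.MaxValue, 0, false)
        while !stop && epoch < maxEpochs do
            epoch += 1
            sse = gradientStep (x, y, b, eta_, f)          // hypothetical: one update pass over x, y
            collectLoss (sse)                              // record loss (from MonitorLoss)
            val (b_best, sse_best) = stopWhen (b, sse)     // early stopping (from StoppingRule)
            if b_best != null then { sse = sse_best; stop = true }   // keep best loss and stop
        end while
        (sse, epoch)
    end optimize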

Concrete methods

def auto_optimize(x: MatrixD, y: MatrixD, b: NetParams, etaI: (Double, Double), f: Array[AFF], opti: (MatrixD, MatrixD, NetParams, Double, Array[AFF]) => (Double, Int)): (Double, Int)

Given training data x and y for a Neural Network, fit the parameters b, returning the value of the loss function and the number of epochs. Find the best learning rate within the interval etaI.

Value parameters

b

the array of parameters (weights & biases) between every two adjacent layers

etaI

the lower and upper bounds of learning/convergence rate

f

the array of activation function family for every two adjacent layers

opti

the optimization function to apply for each candidate learning rate (e.g., this trait's optimize method), returning the loss and number of epochs

x

the m-by-n input matrix (training data consisting of m input vectors)

y

the m-by-ny output matrix (training data consisting of m output vectors)

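Because opti has exactly the signature of optimize, a caller can pass the optimizer's own optimize method for it. A hedged usage sketch follows; the interval and the f_sigmoid activation (assumed to come from scalation.modeling.ActivationFun) are arbitrary choices.

    // Line-search eta over (0.001, 1.0), training at each candidate rate.
    val etaI = (0.001, 1.0)                          // learning-rate search interval
    val (loss, epochs) = auto_optimize (x, y, b, etaI, Array (f_sigmoid), optimize)
    println (s"best run: loss = $loss after $epochs epochs")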

def freeze(flayer: Int): Unit

Freeze layer flayer during back-propagation (should only impact the optimize method in the classes extending this trait). FIX: make abstract (remove ???) and implement in extending classes

Value parameters

flayer

the layer to freeze, e.g., 1 => first hidden layer

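Per the FIX note, extending classes are expected to supply the real behavior. One hedged way to do so is sketched below: record the frozen layer and skip its update during back-propagation (frozen and updateLayer are illustrative names, not ScalaTion API).

    private var frozen = -1                          // -1 => no layer frozen

    override def freeze (flayer: Int): Unit = frozen = flayer

    // ... then, inside the per-layer update loop of optimize ...
    // for l <- b.indices if l != frozen do          // skip the frozen layer
    //     updateLayer (b(l), eta)                   // hypothetical update step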

def permGenerator(m: Int, rando: Boolean): PermutedVecI

Return a permutation vector generator that will provide a random permutation of index positions for each call to permGen.igen (e.g., used to select random batches).

Value parameters

m

the number of data instances

rando

whether to use a random or fixed random number stream

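A hedged sketch of mini-batch selection with the generator: draw a fresh permutation each epoch and split it into batches. The dim accessor, the chop call on the permuted vector, and row selection by index vector reflect common ScalaTion usage but are assumptions here.

    val m       = x.dim                              // number of data instances (assumed accessor)
    val bSize   = 32                                 // assumed batch size
    val permGen = permGenerator (m, rando = true)    // random stream per the rando flag
    for epoch <- 1 to 10 do
        val batches = permGen.igen.chop (m / bSize)  // fresh permutation, chopped into batches
        for ib <- batches do
            val (xb, yb) = (x(ib), y(ib))            // select batch rows by index vector
            // ... one gradient step on (xb, yb) ...
    end for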

Inherited methods

def collectLoss(loss: Double): Unit

Collect the next value for the loss function.

Value parameters

loss

the value of the loss function

Attributes

Inherited from:
MonitorLoss
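
A hedged sketch of where collectLoss sits in training: call it once per epoch so the full trajectory is available to plotLoss afterwards (trainOneEpoch and maxEpochs are hypothetical stand-ins for the subclass's update pass and epoch limit).

    for epoch <- 1 to maxEpochs do
        val sse = trainOneEpoch (x, y, b, eta, f)    // hypothetical: one epoch, returns sse
        collectLoss (sse)                            // append this epoch's loss
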
def plotLoss(optName: String): Unit

Plot the loss function versus the epoch/major iterations.

Value parameters

optName

the name of the optimization algorithm (alternatively, the name of the network)

Attributes

Inherited from:
MonitorLoss
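
After a run that has recorded losses via collectLoss, a single call renders the trajectory; the label below is arbitrary.

    plotLoss ("SGD")                                 // title the plot with the algorithm's name
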
def stopWhen(b: NetParams, sse: Double): (NetParams, Double)

Stop when too many steps have the cost measure (e.g., sse) increasing. Signal a stopping condition by returning the best parameters (weights and biases); otherwise return null.

Value parameters

b

the current parameter value (weights and biases)

sse

the current value of cost measure (e.g., sum of squared errors)

Attributes

Inherited from:
StoppingRule
def stopWhen(b: VectorD, sse: Double): (VectorD, Double)

Stop when too many steps have the cost measure (e.g., sse) increasing. Signal a stopping condition by returning the best parameter vector; otherwise return null.

Value parameters

b

the current value of the parameter vector

sse

the current value of cost measure (e.g., sum of squared errors)

Attributes

Inherited from:
StoppingRule
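
Both overloads follow the same null-signal protocol. A hedged sketch of the calling pattern for the NetParams overload (trainOneEpoch is hypothetical, and the in-place restore assumes NetParams indexes per-layer parameters):

    var (go, epoch) = (true, 0)
    while go && epoch < maxEpochs do
        epoch += 1
        val sse = trainOneEpoch (x, y, b, eta, f)    // hypothetical: one epoch, returns sse
        val (b_best, sse_best) = stopWhen (b, sse)
        if b_best != null then                       // too many epochs with rising sse
            for l <- b.indices do b(l) = b_best(l)   // restore the best parameters seen
            go = false                               // stop training at sse_best
    end while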

Inherited fields

protected val EPSILON: Double

Attributes

Inherited from:
StoppingRule