object Optimizer_ADAM
The Optimizer_ADAM
object provides functions to optimize the parameters/weights
of Neural Networks with various numbers of layers.
This optimizer uses Adam (Adaptive Moment Estimation), an adaptive variant of Stochastic Gradient Descent.
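For reference, Adam (Adaptive Moment Estimation, Kingma & Ba 2015) maintains exponentially decaying averages of past gradients and squared gradients. A sketch of the standard per-parameter update rule (general background, not taken from this page):

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t} \\
\theta_t &= \theta_{t-1} - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
\end{aligned}
$$

where $g_t$ is the mini-batch gradient at step $t$, $\eta$ is the learning rate (the 'eta' hyper-parameter below), and $\beta_1$, $\beta_2$, $\epsilon$ are smoothing constants (commonly 0.9, 0.999 and 1e-8).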
Value Members
- def optimize(x: MatriD, y: VectoD, b: VectoD, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)
Given training data 'x' and 'y' for a 2-layer, single-output Neural Network, fit the parameter/weight vector 'b'. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the weights. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m output vector (training data consisting of m output values)
- b
the nx parameter/weight vector for layer 1->2 (input to output)
- eta_
the initial learning/convergence rate
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- f1
the activation function family for layers 1->2 (input to output)
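A minimal usage sketch (hypothetical data; the MatrixD/VectorD constructors and import path are assumptions about ScalaTion's linear algebra package, not taken from this page):

```scala
import scalation.linalgebra.{MatrixD, VectorD}

// hypothetical training set: m = 4 input vectors with nx = 2 features each
val x = new MatrixD ((4, 2), 0.0, 0.0,
                             0.0, 1.0,
                             1.0, 0.0,
                             1.0, 1.0)
val y = VectorD (0.0, 1.0, 1.0, 1.0)       // m = 4 target output values
val b = new VectorD (x.dim2)               // nx weights, updated in place

// returns (final loss, number of epochs actually run)
val (loss, epochs) = Optimizer_ADAM.optimize (x, y, b, eta_ = 0.1, bSize = 2)
```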
- def optimize2(x: MatriD, y: MatriD, b: NetParam, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)
Given training data 'x' and 'y' for a 2-layer, multi-output Neural Network, fit the parameter/weight matrix 'b'. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the parameter weights. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m-by-ny output matrix (training data consisting of m output vectors)
- b
the parameters with nx-by-ny weight matrix for layer 1->2 (input to output)
- eta_
the initial learning/convergence rate
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- f1
the activation function family for layers 1->2 (input to output)
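A sketch for the multi-output case; wrapping an nx-by-ny weight matrix in a NetParam this way is an assumption, not shown on this page:

```scala
import scalation.linalgebra.MatrixD

val x = new MatrixD ((4, 2), 0.0, 0.0,   0.0, 1.0,
                             1.0, 0.0,   1.0, 1.0)
val y = new MatrixD ((4, 2), 1.0, 0.0,   0.0, 1.0,      // ny = 2 outputs
                             0.0, 1.0,   1.0, 1.0)      // per input vector
val b = NetParam (new MatrixD (x.dim2, y.dim2))         // nx-by-ny weights (assumed constructor)

val (loss, epochs) = Optimizer_ADAM.optimize2 (x, y, b)
```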
- def optimize2I(x: MatriD, y: MatriD, b: NetParam, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)
Given training data 'x' and 'y' for a 2-layer, multi-output Neural Network, fit the parameter/weight matrix 'b'. Select the best learning rate within the interval 'etaI'. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m-by-ny output matrix (training data consisting of m output vectors)
- b
the parameters with nx-by-ny weight matrix for layer 1->2 (input to output)
- etaI
the learning/convergence rate interval
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- f1
the activation function family for layers 1->2 (input to output)
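The interval form searches for a good learning rate rather than fixing one. A sketch reusing x, y and b from the optimize2 sketch above ('PairD' assumed to be (Double, Double)):

```scala
val etaI = (0.01, 1.0)                    // lower and upper bounds for eta
val (loss, epochs) = Optimizer_ADAM.optimize2I (x, y, b, etaI)
```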
- def optimize3(x: MatriD, y: MatriD, a: NetParam, b: NetParam, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid, f2: AFF = f_lreLU): (Double, Int)
Given training data 'x' and 'y' for a 3-layer Neural Network, fit the parameters (weights and biases) 'a' & 'b'. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the weights. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m-by-ny output matrix (training data consisting of m output vectors)
- a
the parameters with nx-by-nz weight matrix & nz bias vector for layer 0->1 (input to hidden)
- b
the parameters with nz-by-ny weight matrix & ny bias vector for layer 1->2 (hidden to output)
- eta_
the initial learning/convergence rate
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- f1
the activation function family for layer 0->1 (input to hidden)
- f2
the activation function family for layer 1->2 (hidden to output)
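A sketch for a 3-layer network with nz hidden units; constructing a NetParam from a weight matrix and bias vector is an assumption:

```scala
import scalation.linalgebra.{MatrixD, VectorD}

val nz = 4                                           // hidden layer size (chosen for illustration)
val x  = new MatrixD ((4, 2), 0.0, 0.0,   0.0, 1.0,
                              1.0, 0.0,   1.0, 1.0)
val y  = new MatrixD ((4, 1), 0.0, 1.0, 1.0, 0.0)    // XOR-like targets
val a  = NetParam (new MatrixD (x.dim2, nz), new VectorD (nz))        // layer 0->1
val b  = NetParam (new MatrixD (nz, y.dim2), new VectorD (y.dim2))    // layer 1->2

val (loss, epochs) = Optimizer_ADAM.optimize3 (x, y, a, b, eta_ = 0.05)
```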
- def optimize3I(x: MatriD, y: MatriD, a: NetParam, b: NetParam, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid, f2: AFF = f_lreLU): (Double, Int)
Given training data 'x' and 'y' for a 3-layer Neural Network, fit the parameters (weights and biases) 'a' & 'b'. Select the best learning rate within the interval 'etaI'. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m-by-ny output matrix (training data consisting of m output vectors)
- a
the parameters with nx-by-nz weight matrix & nz bias vector for layer 0->1 (input to hidden)
- b
the parameters with nz-by-ny weight matrix & ny bias vector for layer 1->2 (hidden to output)
- etaI
the learning/convergence rate interval
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- f1
the activation function family for layers 0->1 (input to hidden)
- f2
the activation function family for layers 1->2 (hidden to output)
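The interval form follows the same pattern; a one-line sketch reusing x, y, a and b from the optimize3 sketch above:

```scala
val (loss, epochs) = Optimizer_ADAM.optimize3I (x, y, a, b, etaI = (0.01, 2.0))
```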
- def optimizeI(x: MatriD, y: VectoD, b: VectoD, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)
Given training data 'x' and 'y' for a 2-layer, single-output Neural Network, fit the parameter/weight vector 'b'. Select the best learning rate within the interval 'etaI'. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m output vector (training data consisting of m output values)
- b
the nx parameter/weight vector for layer 1->2 (input to output)
- etaI
the learning/convergence rate interval
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- f1
the activation function family for layers 1->2 (input to output)
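Likewise for the single-output interval form, mirroring the optimize sketch above:

```scala
// x: MatriD, y: VectoD, b: VectoD as in the optimize sketch
val (loss, epochs) = Optimizer_ADAM.optimizeI (x, y, b, etaI = (0.01, 1.0))
```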
- def optimizeX(x: MatriD, y: MatriD, b: NetParams, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, lambda: Double = 0.0, f: Array[AFF] = Array (f_sigmoid, f_sigmoid, f_lreLU)): (Double, Int)
Given training data 'x' and 'y' for a multi-hidden-layer Neural Network, fit the parameter array 'b', where each 'b(l)' contains a weight matrix and bias vector. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the weights. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m-by-ny output matrix (training data consisting of m output vectors)
- b
the array of parameters (weights & bias) between every two adjacent layers
- eta_
the initial learning/convergence rate
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- lambda
the regularization parameter (0.0 disables regularization)
- f
the array of activation function families, one for each pair of adjacent layers
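A sketch for a deeper network: one NetParam per pair of adjacent layers and one activation function per layer transition. 'NetParams' is assumed to be Array[NetParam], and the ActivationFun import path is an assumption:

```scala
import scalation.linalgebra.{MatrixD, VectorD}
import scalation.analytics.ActivationFun.{f_sigmoid, f_lreLU}

val x  = new MatrixD ((4, 2), 0.0, 0.0,   0.0, 1.0,
                              1.0, 0.0,   1.0, 1.0)
val y  = new MatrixD ((4, 1), 0.0, 1.0, 1.0, 0.0)
val nz = Array (2, 4, 4, 1)                // layer sizes: input, two hidden, output

// one (weight matrix, bias vector) pair per adjacent layer pair (assumed construction)
val b: NetParams = Array.tabulate (nz.length - 1) (l =>
    NetParam (new MatrixD (nz(l), nz(l+1)), new VectorD (nz(l+1))))

val (loss, epochs) = Optimizer_ADAM.optimizeX (x, y, b, lambda = 0.01,
                                               f = Array (f_sigmoid, f_sigmoid, f_lreLU))
```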
- def optimizeXI(x: MatriD, y: MatriD, b: NetParams, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, lambda: Double = 0.0, f: Array[AFF] = Array (f_sigmoid, f_sigmoid, f_lreLU)): (Double, Int)
Given training data 'x' and 'y' for a multi-hidden-layer Neural Network, fit the parameter array 'b', where each 'b(l)' contains a weight matrix and bias vector. Select the best learning rate within the interval 'etaI'. A usage sketch follows the parameter list.
- x
the m-by-nx input matrix (training data consisting of m input vectors)
- y
the m-by-ny output matrix (training data consisting of m output vectors)
- b
the array of parameters (weights & bias) between every two adjacent layers
- etaI
the lower and upper bounds of the learning/convergence rate interval
- bSize
the batch size
- maxEpochs
the maximum number of training epochs/iterations
- lambda
the regularization parameter (0.0 disables regularization)
- f
the array of activation function families, one for each pair of adjacent layers
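The interval form mirrors optimizeX; a one-line sketch reusing x, y and b from the optimizeX sketch above:

```scala
val (loss, epochs) = Optimizer_ADAM.optimizeXI (x, y, b, etaI = (0.01, 1.0))
```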