Packages


scalation.analytics

Optimizer_SGD

object Optimizer_SGD extends Error

The Optimizer_SGD object provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer uses Stochastic Gradient Descent (SGD).
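Each 'optimize*' method below repeats the same mini-batch update: partition the training set into batches and, for each batch, step the parameters against the gradient of the error, b = b - eta * gradient. A minimal sketch of one such step for a 2-layer (input to output) network with sigmoid activation; the data values and the derivation shown are illustrative only, not part of this object's API:

    import scalation.linalgebra.{MatrixD, VectorD}

    val x   = new MatrixD ((3, 2), 0.0, 1.0,                  // one mini-batch: 3 inputs, nx = 2
                                   1.0, 0.0,
                                   1.0, 1.0)
    val y   = VectorD (1.0, 0.0, 1.0)                         // targets for the batch
    var b   = VectorD (0.1, -0.1)                             // current weight vector
    val eta = 0.5                                             // learning rate

    val yp  = (x * b).map (z => 1.0 / (1.0 + math.exp (-z)))  // sigmoid predictions
    val d   = (yp - y) * yp * yp.map (p => 1.0 - p)           // delta = error * sigmoid'
    b       = b - x.t * d * eta                               // the gradient descent step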

Linear Supertypes
Error, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  8. final def flaw(method: String, message: String): Unit
    Definition Classes
    Error
  9. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  10. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  11. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  13. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  14. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  15. def optimize(x: MatriD, y: VectoD, b: VectoD, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

Given training data 'x' and 'y' for a 2-layer, single-output Neural Network, fit the parameter/weight vector 'b'. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the weights. An illustrative call follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

the output vector of length m (training data consisting of m output values)

    b

    the nx parameter/weight vector for layer 1->2 (input to output)

    eta_

    the initial learning/convergence rate

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)
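    An illustrative call, with made-up data (the OR function) and hyper-parameter values; the returned pair is read here as the final error and the number of epochs used:

        import scalation.linalgebra.{MatrixD, VectorD}
        import scalation.analytics.Optimizer_SGD

        val x = new MatrixD ((4, 2), 0.0, 0.0,                // m = 4 input vectors, nx = 2
                                     0.0, 1.0,
                                     1.0, 0.0,
                                     1.0, 1.0)
        val y = VectorD (0.0, 1.0, 1.0, 1.0)                  // m outputs (the OR function)
        val b = VectorD (0.1, 0.1)                            // initial weight vector

        val (err, epochs) = Optimizer_SGD.optimize (x, y, b, eta_ = 0.5, bSize = 2, maxEpochs = 1000)
        println (s"error = $err after $epochs epochs")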

  16. def optimize2(x: MatriD, y: MatriD, b: NetParam, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

Given training data 'x' and 'y' for a 2-layer, multi-output Neural Network, fit the parameter/weight matrix 'b'. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the weights. A usage sketch follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    b

    the parameters with an nx-by-ny weight matrix for layer 1->2 (input to output)

    eta_

    the initial learning/convergence rate

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)
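    A sketch of the multi-output case, assuming 'NetParam' can be built from an nx-by-ny weight matrix (the initial weights and data are made up):

        import scalation.linalgebra.MatrixD
        import scalation.analytics.{NetParam, Optimizer_SGD}

        val x = new MatrixD ((4, 2), 0.0, 0.0,
                                     0.0, 1.0,
                                     1.0, 0.0,
                                     1.0, 1.0)
        val y = new MatrixD ((4, 2), 0.0, 1.0,                // ny = 2 outputs per instance
                                     1.0, 0.0,
                                     1.0, 0.0,
                                     1.0, 1.0)
        val b = NetParam (new MatrixD ((2, 2), 0.1, 0.2,      // nx-by-ny weight matrix
                                               0.3, 0.4))

        val (err, epochs) = Optimizer_SGD.optimize2 (x, y, b, eta_ = 0.5)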

  17. def optimize2I(x: MatriD, y: MatriD, b: NetParam, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

Given training data 'x' and 'y' for a 2-layer, multi-output Neural Network, fit the parameter/weight matrix 'b'. Select the best learning rate within the interval 'etaI'. A brief continuation of the optimize2 sketch follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    b

    the parameter with nx-by-ny weight matrix for layer 1->2 (input to output)

    etaI

    the learning/convergence rate interval

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)
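    Reusing 'x', 'y' and 'b' from the optimize2 sketch above, and taking 'PairD' to be a (Double, Double) pair of bounds:

        val etaI = (0.01, 1.0)                                // search interval for eta
        val (err, epochs) = Optimizer_SGD.optimize2I (x, y, b, etaI, bSize = 2)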

  18. def optimize3(x: MatriD, y: MatriD, a: NetParam, b: NetParam, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid, f2: AFF = f_lreLU): (Double, Int)

Given training data 'x' and 'y' for a 3-layer Neural Network, fit the parameters (weights and biases) 'a' & 'b'. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the weights. A usage sketch follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    a

    the parameters with nx-by-nz weight matrix & nz bias vector for layer 0->1

    b

    the parameters with nz-by-ny weight matrix & ny bias vector for layer 1->2

    eta_

    the initial learning/convergence rate

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 0->1 (input to hidden)

    f2

    the activation function family for layers 1->2 (hidden to output)
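    A sketch with nz = 2 hidden units, assuming 'NetParam' takes a weight matrix plus a bias vector; the XOR data and initial values are made up (XOR needs the hidden layer, which is why it appears here rather than under optimize):

        import scalation.linalgebra.{MatrixD, VectorD}
        import scalation.analytics.{NetParam, Optimizer_SGD}

        val x = new MatrixD ((4, 2), 0.0, 0.0,
                                     0.0, 1.0,
                                     1.0, 0.0,
                                     1.0, 1.0)
        val y = new MatrixD ((4, 1), 0.0,                     // ny = 1 output (the XOR function)
                                     1.0,
                                     1.0,
                                     0.0)
        val a = NetParam (new MatrixD ((2, 2), 0.1, 0.4,      // nx-by-nz weights for layer 0->1
                                               0.2, 0.3),
                          VectorD (0.0, 0.0))                 // nz bias vector
        val b = NetParam (new MatrixD ((2, 1), 0.5,           // nz-by-ny weights for layer 1->2
                                               0.6),
                          VectorD (0.0))                      // ny bias vector

        val (err, epochs) = Optimizer_SGD.optimize3 (x, y, a, b, eta_ = 1.0, maxEpochs = 2000)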

  19. def optimize3I(x: MatriD, y: MatriD, a: NetParam, b: NetParam, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid, f2: AFF = f_lreLU): (Double, Int)

Given training data 'x' and 'y' for a 3-layer Neural Network, fit the parameters (weights and biases) 'a' & 'b'. Select the best learning rate within the interval 'etaI'. A brief continuation of the optimize3 sketch follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    a

the parameters with nx-by-nz weight matrix & nz bias vector for layer 0->1

    b

the parameters with nz-by-ny weight matrix & ny bias vector for layer 1->2

    etaI

    the learning/convergence rate interval

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 0->1 (input to hidden)

    f2

the activation function family for layers 1->2 (hidden to output)
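    Reusing 'x', 'y', 'a' and 'b' from the optimize3 sketch, the interval variant searches (0.1, 2.0) for the best eta:

        val (err, epochs) = Optimizer_SGD.optimize3I (x, y, a, b, (0.1, 2.0))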

  20. def optimizeI(x: MatriD, y: VectoD, b: VectoD, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

Given training data 'x' and 'y' for a 2-layer, single-output Neural Network, fit the parameter/weight vector 'b'. Select the best learning rate within the interval 'etaI'. A brief continuation of the optimize sketch follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

the output vector of length m (training data consisting of m output values)

    b

    the nx parameter/weight vector for layer 1->2 (input to output)

    etaI

    the learning/convergence rate interval

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)
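    Reusing 'x', 'y' and 'b' from the optimize sketch, with the same tuple reading of 'PairD':

        val (err, epochs) = Optimizer_SGD.optimizeI (x, y, b, (0.01, 1.0), bSize = 2)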

  21. def optimizeX(x: MatriD, y: MatriD, b: NetParams, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, lambda: Double = 0.0, f: Array[AFF] = ...): (Double, Int)

Given training data 'x' and 'y' for a multi-hidden layer Neural Network, fit the parameter array 'b', where each 'b(l)' contains a weight matrix and bias vector. Iterate over several epochs, where each epoch divides the training set into 'nB' batches. Each batch is used to update the weights. A usage sketch follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    b

    the array of parameters (weights & biases) between every two adjacent layers

    eta_

    the initial learning/convergence rate

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    lambda

    the regularization hyper-parameter (default 0.0)

    f

    the array of activation function families, one for each pair of adjacent layers
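    A sketch for one hidden layer of 3 units via the general interface, assuming 'NetParams' is an array of 'NetParam' and that the activation families live in 'ActivationFun' (layer sizes, initial values, lambda and the import path are all assumptions):

        import scalation.linalgebra.{MatrixD, VectorD}
        import scalation.analytics.{NetParam, Optimizer_SGD}
        import scalation.analytics.ActivationFun.{f_sigmoid, f_id}

        val x = new MatrixD ((4, 2), 0.0, 0.0,
                                     0.0, 1.0,
                                     1.0, 0.0,
                                     1.0, 1.0)
        val y = new MatrixD ((4, 1), 0.0,
                                     1.0,
                                     1.0,
                                     0.0)
        val b = Array (NetParam (new MatrixD ((2, 3), 0.1, 0.2, 0.3,   // layer 0->1: 2-by-3 weights
                                                      0.4, 0.5, 0.6),
                                 VectorD (0.0, 0.0, 0.0)),             // 3 biases
                       NetParam (new MatrixD ((3, 1), 0.7,             // layer 1->2: 3-by-1 weights
                                                      0.8,
                                                      0.9),
                                 VectorD (0.0)))                       // 1 bias

        val f = Array (f_sigmoid, f_id)                                // one family per layer pair
        val (err, epochs) = Optimizer_SGD.optimizeX (x, y, b, eta_ = 0.5, lambda = 0.01, f = f)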

  22. def optimizeXI(x: MatriD, y: MatriD, b: NetParams, etaI: PairD, bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, lambda: Double = 0.0, f: Array[AFF] = ...): (Double, Int)

Given training data 'x' and 'y' for a multi-hidden layer Neural Network, fit the parameter array 'b', where each 'b(l)' contains a weight matrix and bias vector. Select the best learning rate within the interval 'etaI'. A brief continuation of the optimizeX sketch follows the parameter list.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    b

    the array of parameters (weights & biases) between every two adjacent layers

    etaI

the lower and upper bounds of the learning/convergence rate interval

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    lambda

    the regularization hyper-parameter (default 0.0)

    f

    the array of activation function families, one for each pair of adjacent layers
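    Reusing 'x', 'y', 'b' and 'f' from the optimizeX sketch:

        val (err, epochs) = Optimizer_SGD.optimizeXI (x, y, b, (0.1, 1.0), lambda = 0.01, f = f)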

  23. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  24. def toString(): String
    Definition Classes
    AnyRef → Any
  25. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  27. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
