object Optimizer

The Optimizer object provides functions to optimize the parameters/weights of neural networks with varying numbers of layers.

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def assign(aa: MatriD, bb: MatriD): Unit

    Deep assign matrix 'bb' to matrix 'aa' (aa = bb, element by element).
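
    The deep-assign behavior can be sketched in plain Scala, using Array[Array[Double]] as a stand-in for MatriD (an illustrative reimplementation, not ScalaTion's code):

```scala
// Deep assign: copy every element of bb into aa; aa and bb remain
// distinct objects, so later changes to bb do not affect aa.
def assign(aa: Array[Array[Double]], bb: Array[Array[Double]]): Unit =
  for (i <- aa.indices; j <- aa(i).indices) aa(i)(j) = bb(i)(j)

val a = Array(Array(0.0, 0.0), Array(0.0, 0.0))
val b = Array(Array(1.0, 2.0), Array(3.0, 4.0))
assign(a, b)
b(0)(0) = 9.0        // mutating b leaves a unchanged
```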

  6. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  7. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  11. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  12. val hp: HyperParameter
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. def limitF(rows: Int): Double
  15. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  16. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  17. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  18. def optimize(x: MatriD, y: VectoD, b: VectoD, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

    Given training data 'x' and 'y' for a 2-layer, single output neural network, fit the parameter/weight vector 'b'. Iterate over several epochs, where each epoch divides the training set into 'nbat' batches. Each batch is used to update the weights.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the output vector of length m (training data consisting of m output values)

    b

    the nx parameter/weight vector for layer 1->2 (input to output)

    eta_

    the initial learning/convergence rate

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)
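
    The training loop can be sketched in plain Scala (an illustrative reimplementation, not ScalaTion's code; it assumes squared-error loss and, as in the default above, the sigmoid activation):

```scala
def sigmoid(t: Double): Double = 1.0 / (1.0 + math.exp(-t))

def dot(u: Array[Double], v: Array[Double]): Double =
  (u zip v).map { case (a, b) => a * b }.sum

// Mini-batch SGD for yp = sigmoid(x * b); returns the final sse.
def optimizeSketch(x: Array[Array[Double]], y: Array[Double], b: Array[Double],
                   eta: Double, bSize: Int, maxEpochs: Int): Double = {
  for (_ <- 1 to maxEpochs; batch <- x.indices.grouped(bSize)) {
    val grad = Array.fill(b.length)(0.0)
    for (i <- batch) {
      val yp = sigmoid(dot(x(i), b))            // forward pass for row i
      val d  = (yp - y(i)) * yp * (1.0 - yp)    // dE/du for squared error
      for (j <- b.indices) grad(j) += d * x(i)(j)
    }
    for (j <- b.indices) b(j) -= eta * grad(j)  // one update per batch
  }
  x.indices.map { i => val e = y(i) - sigmoid(dot(x(i), b)); e * e }.sum
}
```

    Each epoch groups the row indices into batches of size bSize, accumulates the gradient over a batch, and applies one weight update per batch; the returned value plays the role of the sse component of the (Double, Int) result.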

  19. def optimize2(x: MatriD, y: MatriD, bb: MatriD, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

    Given training data 'x' and 'y' for a 2-layer neural network, fit the parameter/weight matrix 'bb'. Iterate over several epochs, where each epoch divides the training set into 'nbat' batches. Each batch is used to update the weights.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    bb

    the nx-by-ny parameter/weight matrix for layer 1->2 (input to output)

    eta_

    the initial learning/convergence rate

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)

  20. def optimize2I(x: MatriD, y: MatriD, bb: MatriD, etaI: (Double, Double), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

    Given training data 'x' and 'y' for a 2-layer neural network, fit the parameter/weight matrix 'bb'. Iterate over several epochs, where each epoch divides the training set into 'nbat' batches. Each batch is used to update the weights.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    bb

    the nx-by-ny parameter/weight matrix for layer 1->2 (input to output)

    etaI

    the lower and upper bounds of the learning/convergence rate interval

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)

  21. def optimize3(x: MatriD, y: MatriD, aa: MatriD, bb: MatriD, eta_: Double = hp.default ("eta"), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid, f2: AFF = f_sigmoid): (Double, Int)

    Given training data 'x' and 'y' for a 3-layer neural network, fit the parameter/weight matrices 'aa' and 'bb'. Iterate over several epochs, where each epoch divides the training set into 'nbat' batches. Each batch is used to update the weights.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    aa

    the nx-by-nz parameter/weight matrix for layer 1->2 (input to hidden)

    bb

    the nz-by-ny parameter/weight matrix for layer 2->3 (hidden to output)

    eta_

    the initial learning/convergence rate

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to hidden)

    f2

    the activation function family for layers 2->3 (hidden to output)
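
    One full-batch epoch of the 3-layer training can be sketched in plain Scala (an illustrative reimplementation, not ScalaTion's code; squared-error loss and sigmoid activations are assumed, matching the defaults above):

```scala
def sigmoid(t: Double): Double = 1.0 / (1.0 + math.exp(-t))

// x: m-by-nx, y: m-by-ny, aa: nx-by-nz, bb: nz-by-ny.
// Forward pass x -> z -> yp, then backpropagate the error through
// bb (hidden->output) and aa (input->hidden); returns the epoch's sse.
def epoch3(x: Array[Array[Double]], y: Array[Array[Double]],
           aa: Array[Array[Double]], bb: Array[Array[Double]], eta: Double): Double = {
  var sse = 0.0
  val gA = Array.fill(aa.length, aa(0).length)(0.0)
  val gB = Array.fill(bb.length, bb(0).length)(0.0)
  for (i <- x.indices) {
    val z  = Array.tabulate(aa(0).length)(k =>
      sigmoid(x(i).indices.map(j => x(i)(j) * aa(j)(k)).sum))     // hidden layer
    val yp = Array.tabulate(bb(0).length)(h =>
      sigmoid(z.indices.map(k => z(k) * bb(k)(h)).sum))           // output layer
    val dOut = Array.tabulate(yp.length) { h =>
      val e = yp(h) - y(i)(h); sse += e * e; e * yp(h) * (1.0 - yp(h)) }
    val dHid = Array.tabulate(z.length)(k =>
      dOut.indices.map(h => dOut(h) * bb(k)(h)).sum * z(k) * (1.0 - z(k)))
    for (k <- z.indices; h <- dOut.indices)  gB(k)(h) += dOut(h) * z(k)
    for (j <- x(i).indices; k <- z.indices)  gA(j)(k) += dHid(k) * x(i)(j)
  }
  for (k <- gB.indices; h <- gB(k).indices) bb(k)(h) -= eta * gB(k)(h)
  for (j <- gA.indices; k <- gA(j).indices) aa(j)(k) -= eta * gA(j)(k)
  sse
}
```

    optimize3 additionally splits each epoch into mini-batches of size bSize; the full-batch version above keeps the backpropagation steps easy to follow.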

  22. def optimize3I(x: MatriD, y: MatriD, aa: MatriD, bb: MatriD, etaI: (Double, Double), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid, f2: AFF = f_sigmoid): (Double, Int)

    Given training data 'x' and 'y' for a 3-layer neural network, fit the parameter/weight matrices 'aa' and 'bb'. Iterate over several epochs, where each epoch divides the training set into 'nbat' batches. Each batch is used to update the weights.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the m-by-ny output matrix (training data consisting of m output vectors)

    aa

    the nx-by-nz parameter/weight matrix for layer 1->2 (input to hidden)

    bb

    the nz-by-ny parameter/weight matrix for layer 2->3 (hidden to output)

    etaI

    the lower and upper bounds of the learning/convergence rate interval

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to hidden)

    f2

    the activation function family for layers 2->3 (hidden to output)

  23. def optimizeI(x: MatriD, y: VectoD, b: VectoD, etaI: (Double, Double), bSize: Int = hp.default ("bSize").toInt, maxEpochs: Int = hp.default ("maxEpochs").toInt, f1: AFF = f_sigmoid): (Double, Int)

    Given training data 'x' and 'y' for a 2-layer, single output neural network, fit the parameter/weight vector 'b'. Iterate over several epochs, where each epoch divides the training set into 'nbat' batches. Each batch is used to update the weights.

    x

    the m-by-nx input matrix (training data consisting of m input vectors)

    y

    the output vector of length m (training data consisting of m output values)

    b

    the nx parameter/weight vector for layer 1->2 (input to output)

    etaI

    the lower and upper bounds of the learning/convergence rate interval

    bSize

    the batch size

    maxEpochs

    the maximum number of training epochs/iterations

    f1

    the activation function family for layers 1->2 (input to output)

  24. def sseF(y: MatriD, yp: MatriD): Double

    Compute the sum of squared errors (sse).

    y

    the actual response/output matrix

    yp

    the predicted response/output matrix

  25. def sseF(y: VectoD, yp: VectoD): Double

    Compute the sum of squared errors (sse).

    y

    the actual response/output vector

    yp

    the predicted response/output vector
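
    The vector overload amounts to the following plain-Scala computation (a sketch, with Array[Double] standing in for VectoD):

```scala
// sse = sum over i of (y(i) - yp(i))^2; the matrix overload sums the
// squared error over every element in the same way.
def sseF(y: Array[Double], yp: Array[Double]): Double =
  (y zip yp).map { case (a, p) => val e = a - p; e * e }.sum
```

    For example, sseF(Array(1.0, 2.0, 3.0), Array(1.0, 2.5, 2.0)) gives 0.25 + 1.0 = 1.25.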

  26. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  27. def toString(): String
    Definition Classes
    AnyRef → Any
  28. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  29. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  30. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  31. def weightMat(rows: Int, cols: Int, stream: Int = 0, limit: Double = -1.0): MatriD

    Generate a random weight/parameter matrix with element values in (0, limit).

    rows

    the number of rows

    cols

    the number of columns

    stream

    the random number stream to use

    limit

    the maximum value for any weight
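
    A hedged plain-Scala sketch of such a generator; the fallback for a non-positive limit to a row-count-based cap (presumably what limitF(rows) supplies) is an assumption, not confirmed by this page:

```scala
import scala.util.Random

// Uniform random weights in (0, lim); when limit <= 0 we assume a
// default cap of 1/sqrt(rows), a common initialization heuristic.
// 'stream' plays the role of a random seed in this sketch.
def weightMat(rows: Int, cols: Int, stream: Int = 0, limit: Double = -1.0): Array[Array[Double]] = {
  val lim = if (limit > 0.0) limit else 1.0 / math.sqrt(rows)
  val rng = new Random(stream)
  Array.fill(rows, cols)(rng.nextDouble() * lim)
}
```

    weightVec below is the vector analogue, generating a single column of such weights.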

  32. def weightVec(rows: Int, stream: Int = 0, limit: Double = -1.0): VectoD

    Generate a random weight/parameter vector with element values in (0, limit).

    rows

    the number of rows

    stream

    the random number stream to use

    limit

    the maximum value for any weight
