Packages


scalation.analytics

RecurrentNeuralNet

class RecurrentNeuralNet extends Error

The RecurrentNeuralNet class feeds the input into the hidden layer sequentially in time. It uses the parameters U, W and V in the network, where U is the parameter for the input x, W is for the hidden state s, and V is for the output y. We have 's(t) = activate (U dot x(t) + W dot s(t-1))' and 'y(t) = softmax (V dot s(t))'.

See also

github.com/pangolulu/rnn-from-scratch
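
A minimal sketch of this forward recurrence in plain Scala (illustrative only, not the ScalaTion implementation; the helpers 'matVec' and 'softmax' and all sizes and values are made up for this example, and tanh stands in for the activation):

    object RNNForwardSketch extends App
    {
        // hypothetical helper: matrix-vector product for nested arrays
        def matVec (a: Array[Array[Double]], x: Array[Double]): Array[Double] =
            a.map (row => row.zip (x).map { case (aij, xj) => aij * xj }.sum)

        // softmax turns the output scores into a probability vector
        def softmax (z: Array[Double]): Array[Double] =
        {
            val e = z.map (math.exp); val s = e.sum
            e.map (_ / s)
        } // softmax

        val (dataDim, hiddenDim) = (3, 2)                              // toy sizes
        val u = Array.fill (hiddenDim, dataDim)(0.1)                   // U: input  -> hidden
        val w = Array.fill (hiddenDim, hiddenDim)(0.1)                 // W: hidden -> hidden
        val v = Array.fill (dataDim, hiddenDim)(0.1)                   // V: hidden -> output

        val x = Array (Array (1.0, 0.0, 0.0), Array (0.0, 1.0, 0.0))   // two one-hot time steps
        var s = Array.fill (hiddenDim)(0.0)                            // zero initial state

        for (t <- x.indices) {
            // s(t) = activate (U dot x(t) + W dot s(t-1)), using tanh as the activation here
            s = matVec (u, x(t)).zip (matVec (w, s)).map { case (a, b) => math.tanh (a + b) }
            // y(t) = softmax (V dot s(t))
            val y = softmax (matVec (v, s))
            println (s"t = $t, y(t) = ${y.mkString (", ")}")
        } // for
    } // RNNForwardSketch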

Linear Supertypes
Error, Throwable, Serializable, AnyRef, Any

Instance Constructors

  1. new RecurrentNeuralNet(data_dim: Int, hidden_dim: Int, bptt_truncate: Int = 4)

    data_dim

    the dimension of the data space

    hidden_dim

    the dimension of the hidden layer

    bptt_truncate

    the truncation length for back propagation through time ('bptt'); it limits how far dependencies reach back in time, to help avoid vanishing/exploding gradients
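
A minimal construction example (the dimensions and truncation length below are arbitrary illustrative values):

    import scalation.analytics.RecurrentNeuralNet

    // 10-dimensional data space, 8 hidden units, default bptt_truncate = 4
    val rnn = new RecurrentNeuralNet (data_dim = 10, hidden_dim = 8)

    // a longer truncation window lets gradients reach further back in time
    val rnnLong = new RecurrentNeuralNet (10, 8, bptt_truncate = 8)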

Value Members

  1. final def addSuppressed(arg0: Throwable): Unit
    Definition Classes
    Throwable
  2. def bptt(x: VectoD, label: VectoD): (MatrixD, MatrixD, MatrixD)

    Use back propagation through time ('bptt') to calculate dl/dV, dl/dU and dl/dW, where l is the loss (a hedged sketch of this computation appears after the member list).

    x

    the input data

    label

    the class labels (given output values)

  3. def calculate_loss(x: VectoD, label: VectoD): Double

    Calculate the loss from the prediction of 'x' and 'label' by adding up the prediction loss among the rnn layers (see the loss sketch after the member list).

    x

    the input data

    label

    the class labels (given output values)

  4. def calculate_total_loss(x: List[VectoD], label: List[VectoD]): Double

    Calculate the total loss over all the input sequences (also covered by the loss sketch after the member list).

    x

    the input data

    label

    the class labels (given output values)

  5. def fillInStackTrace(): Throwable
    Definition Classes
    Throwable
  6. def forward_propagation(x: VectoD): List[RecurrentNeuralNetLayer]

    Forward the input and generate several RNN layers (the forward recurrence is sketched near the top of this page).

    x

    the data input

  7. def getCause(): Throwable
    Definition Classes
    Throwable
  8. def getLocalizedMessage(): String
    Definition Classes
    Throwable
  9. def getMessage(): String
    Definition Classes
    Throwable
  10. def getStackTrace(): Array[StackTraceElement]
    Definition Classes
    Throwable
  11. final def getSuppressed(): Array[Throwable]
    Definition Classes
    Throwable
  12. def initCause(arg0: Throwable): Throwable
    Definition Classes
    Throwable
  13. def printStackTrace(arg0: PrintWriter): Unit
    Definition Classes
    Throwable
  14. def printStackTrace(arg0: PrintStream): Unit
    Definition Classes
    Throwable
  15. def printStackTrace(): Unit
    Definition Classes
    Throwable
  16. var rvm: RandomMatD
  17. def setStackTrace(arg0: Array[StackTraceElement]): Unit
    Definition Classes
    Throwable
  18. def sgd_step(x: VectoD, label: VectoD, learning_rate: Double): MatrixD

    Perform one stochastic gradient descent step (see the sgd sketch after the member list).

    x

    the input data

    label

    the class labels (given output values)

  19. def toString(): String
    Definition Classes
    Throwable → AnyRef → Any
  20. def train(x: List[VectoD], label: List[VectoD], rate: Double = 500.0, nepoch: Int, eval_loss_after: Int = 5): RecurrentNeuralNet

    Train the model by iterating through the training set with sgd steps and adjusting the learning rate (a usage example appears after the member list).

    x

    the input data

    label

    the class labels (given output values)

    rate

    the initial learning rate (gradient multiplier)

    nepoch

    the number of epochs

    eval_loss_after

    the number of epochs between loss evaluations

  21. val u: MatrixD
  22. val v: MatrixD
  23. val w: MatrixD
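
As referenced from 'bptt' above, here is a hedged sketch of truncated back propagation through time in plain Scala. It follows the standard softmax + cross-entropy + tanh setup of the rnn-from-scratch reference linked at the top; it is not the ScalaTion implementation, and every helper name here is made up for the example:

    object BpttSketch
    {
        type Mat = Array[Array[Double]]                          // matrix as nested arrays
        type Vec = Array[Double]

        // hypothetical helpers used only in this sketch
        def outer (a: Vec, b: Vec): Mat = a.map (ai => b.map (bj => ai * bj))
        def dotT (m: Mat, x: Vec): Vec =                         // m.transpose dot x
            m.transpose.map (row => row.zip (x).map { case (a, b) => a * b }.sum)
        def addTo (acc: Mat, d: Mat): Unit =
            for (i <- acc.indices; j <- acc(i).indices) acc(i)(j) += d(i)(j)

        // x(t), label(t): one-hot input/target; s(t), yHat(t): states/predictions from a forward
        // pass (see the forward sketch near the top of this page); w, v: current parameters
        def bptt (x: Mat, label: Mat, s: Mat, yHat: Mat, w: Mat, v: Mat, bpttTruncate: Int = 4): (Mat, Mat, Mat) =
        {
            val (hiddenDim, dataDim) = (w.length, x(0).length)
            val dU = Array.fill (hiddenDim, dataDim)(0.0)
            val dW = Array.fill (hiddenDim, hiddenDim)(0.0)
            val dV = Array.fill (dataDim, hiddenDim)(0.0)

            for (t <- x.indices.reverse) {
                val dOut = yHat(t).zip (label(t)).map { case (p, l) => p - l }      // softmax + cross-entropy gradient
                addTo (dV, outer (dOut, s(t)))
                var delta = dotT (v, dOut).zip (s(t)).map { case (d, si) => d * (1.0 - si * si) }   // 1 - s^2 = tanh'
                // walk back at most bpttTruncate steps: the truncation that limits vanishing/exploding gradients
                for (step <- t to math.max (0, t - bpttTruncate) by -1) {
                    val sPrev = if (step > 0) s(step - 1) else Array.fill (hiddenDim)(0.0)
                    addTo (dW, outer (delta, sPrev))
                    addTo (dU, outer (delta, x(step)))
                    delta = dotT (w, delta).zip (sPrev).map { case (d, si) => d * (1.0 - si * si) }
                } // for
            } // for
            (dV, dU, dW)                                         // dl/dV, dl/dU, dl/dW
        } // bptt
    } // BpttSketch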
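For 'calculate_loss' and 'calculate_total_loss', a common choice (and the one used by the reference implementation linked above) is cross-entropy: the per-sequence loss adds up, over the time steps, the negative log probability assigned to the correct class, and the total loss sums that over all sequences. A sketch under that assumption:

    object LossSketch
    {
        // cross-entropy loss for one sequence: yHat(t) are the softmax outputs, label(t) the one-hot targets
        def crossEntropyLoss (yHat: Array[Array[Double]], label: Array[Array[Double]]): Double =
        {
            var loss = 0.0
            for (t <- yHat.indices)
                loss -= label(t).zip (yHat(t)).map { case (l, p) => l * math.log (p) }.sum
            loss
        } // crossEntropyLoss

        // total loss: add up the per-sequence losses over the whole data set
        def totalLoss (yHats: List[Array[Array[Double]]], labels: List[Array[Array[Double]]]): Double =
            yHats.zip (labels).map { case (yh, lb) => crossEntropyLoss (yh, lb) }.sum
    } // LossSketch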
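And the 'sgd_step' idea, continuing the same sketch: take the gradients returned by the bptt sketch and move each parameter matrix a small step against its gradient (the learning rate is whatever the caller passes in):

    object SgdStepSketch
    {
        type Mat = Array[Array[Double]]

        // one sgd step: p := p - rate * gradient, applied to u, w and v in place
        def sgdStep (u: Mat, w: Mat, v: Mat, dU: Mat, dW: Mat, dV: Mat, rate: Double): Unit =
        {
            def update (p: Mat, g: Mat): Unit =
                for (i <- p.indices; j <- p(i).indices) p(i)(j) -= rate * g(i)(j)
            update (u, dU); update (w, dW); update (v, dV)
        } // sgdStep
    } // SgdStepSketch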
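Finally, a hedged usage sketch of the documented API itself, as referenced from 'train' above. The import path for VectoD, the dimensions, the learning rate and the epoch counts are assumptions made for illustration, and the encoding inside each VectoD is left to the caller since the scaladoc does not pin it down:

    import scalation.analytics.RecurrentNeuralNet
    import scalation.linalgebra.VectoD                           // assumed package for VectoD

    // x and label are prepared elsewhere, one VectoD per training sequence
    def fit (x: List[VectoD], label: List[VectoD]): RecurrentNeuralNet =
    {
        val rnn = new RecurrentNeuralNet (data_dim = 10, hidden_dim = 8)
        // 'rate' has a default but 'nepoch' does not, so named arguments are the clearest way to call train
        rnn.train (x, label, rate = 0.005, nepoch = 100, eval_loss_after = 10)
        println (s"total loss = ${rnn.calculate_total_loss (x, label)}")
        rnn
    } // fit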