class RecurrentNeuralNet extends Error
The RecurrentNeuralNet class feeds the input sequentially in time into the hidden layer.
It uses the parameter matrices U, W and V in the network,
where U is the parameter for the input x, W is for the hidden state s, and V is for the output y.
We have 's(t) = activate (U dot x(t) + W dot s(t-1))' and
'y(t) = softmax (V dot s(t))'.
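For concreteness, here is a minimal sketch of this recurrence in plain Scala, with Arrays standing in for ScalaTion's VectoD and MatrixD; 'tanh' is assumed as the activation and all names are illustrative.
{{{
object RnnStepSketch {
  type Vec = Array[Double]
  type Mat = Array[Array[Double]]

  // m dot x for a row-major matrix
  def matVec (m: Mat, x: Vec): Vec =
    m.map (row => (row zip x).map { case (a, b) => a * b }.sum)

  // numerically stable softmax: shift by the max before exponentiating
  def softmax (v: Vec): Vec = {
    val e = v.map (x => math.exp (x - v.max))
    val z = e.sum
    e.map (_ / z)
  }

  // one time step: s(t) = tanh (U x(t) + W s(t-1)), y(t) = softmax (V s(t))
  def step (u: Mat, w: Mat, v: Mat, x: Vec, sPrev: Vec): (Vec, Vec) = {
    val s = (matVec (u, x) zip matVec (w, sPrev)).map { case (a, b) => math.tanh (a + b) }
    (s, softmax (matVec (v, s)))
  }
}
}}}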
- See also
  github.com/pangolulu/rnn-from-scratch
Inheritance: RecurrentNeuralNet → Error → Throwable → Serializable → AnyRef → Any
Instance Constructors
- new RecurrentNeuralNet (data_dim: Int, hidden_dim: Int, bptt_truncate: Int = 4)
  - data_dim: the dimension of the data space
  - hidden_dim: the dimension of the hidden layer
  - bptt_truncate: the BPTT truncation depth; clipping the backward pass at this many steps constrains the dependency chain to avoid vanishing/exploding gradients (see the construction sketch after this list)
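A hypothetical construction (the dimensions are illustrative, and the import path is assumed):
{{{
import scalation.analytics.RecurrentNeuralNet

// e.g. 8000-dimensional one-hot word vectors, 100 hidden units, default truncation
val rnn  = new RecurrentNeuralNet (data_dim = 8000, hidden_dim = 100)
// or with an explicit BPTT truncation depth
val rnn8 = new RecurrentNeuralNet (8000, 100, bptt_truncate = 8)
}}}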
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def addSuppressed(arg0: Throwable): Unit
  - Definition Classes: Throwable
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def bptt(x: VectoD, label: VectoD): (MatrixD, MatrixD, MatrixD)
  Use backpropagation through time ('bptt') to calculate dl/dV, dl/dU and dl/dW, where l is the loss.
  - x: the input data
  - label: the class labels (given output values)
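A sketch of truncated BPTT on a one-dimensional RNN, where scalars u, w, v stand in for the U, W, V matrices and squared error stands in for the softmax loss (both simplifications are assumptions made for brevity); it shows how the chain of dl/dU and dl/dW contributions is cut off after 'bptt_truncate' steps.
{{{
object BpttSketch {
  // gradients (dl/dU, dl/dW, dl/dV) for s(t) = tanh (u x(t) + w s(t-1)), y(t) = v s(t)
  def gradients (u: Double, w: Double, v: Double,
                 xs: Array[Double], labels: Array[Double],
                 bpttTruncate: Int = 4): (Double, Double, Double) = {
    val tEnd = xs.length
    val s = new Array[Double](tEnd)
    for (t <- 0 until tEnd)
      s(t) = math.tanh (u * xs(t) + w * (if (t > 0) s(t - 1) else 0.0))

    var (dU, dW, dV) = (0.0, 0.0, 0.0)
    for (t <- tEnd - 1 to 0 by -1) {
      val dy = v * s(t) - labels(t)                     // squared-error output gradient
      dV += dy * s(t)
      var delta = dy * v * (1.0 - s(t) * s(t))          // back through tanh at step t
      // follow the recurrence back at most bpttTruncate steps, then stop: the truncation
      for (step <- t to math.max (0, t - bpttTruncate) by -1) {
        dU += delta * xs(step)
        dW += delta * (if (step > 0) s(step - 1) else 0.0)
        if (step > 0) delta *= w * (1.0 - s(step - 1) * s(step - 1))
      }
    }
    (dU, dW, dV)
  }
}
}}}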
- def calculate_loss(x: VectoD, label: VectoD): Double
  Calculate the loss from the prediction of 'x' and 'label' by adding up the prediction loss across the RNN layers.
  - x: the input data
  - label: the class labels (given output values)
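The natural pairing with a softmax output is the cross-entropy loss; assuming that is what is summed across the layers, a per-sequence sketch:
{{{
object LossSketch {
  type Vec = Array[Double]

  // -sum_i y_i log p_i per step, summed over the sequence
  def sequenceLoss (preds: List[Vec], labels: List[Vec]): Double =
    (preds zip labels).map { case (p, y) =>
      -(p zip y).map { case (pi, yi) => yi * math.log (pi max 1e-12) }.sum
    }.sum
}
}}}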
- def calculate_total_loss(x: List[VectoD], label: List[VectoD]): Double
  Calculate the total loss over all input sequences.
  - x: the input data
  - label: the class labels (given output values)
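In sketch form this just folds the per-sequence loss over the corpus (whether the sum is then normalized by the number of labels is not visible from the signature):
{{{
object TotalLossSketch {
  type Vec = Array[Double]

  def totalLoss (xs: List[Vec], labels: List[Vec], lossOf: (Vec, Vec) => Double): Double =
    (xs zip labels).map { case (x, y) => lossOf (x, y) }.sum
}
}}}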
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native() @HotSpotIntrinsicCandidate()
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def fillInStackTrace(): Throwable
  - Definition Classes: Throwable
- def forward_propagation(x: VectoD): List[RecurrentNeuralNetLayer]
  Forward the input through time, generating one RNN layer per time step.
  - x: the data input
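A sketch of that pass, with a simple case class standing in for RecurrentNeuralNetLayer (field names are illustrative):
{{{
object ForwardSketch {
  type Vec = Array[Double]
  type Mat = Array[Array[Double]]

  case class Layer (s: Vec, y: Vec)                 // stand-in for RecurrentNeuralNetLayer

  def matVec (m: Mat, x: Vec): Vec =
    m.map (row => (row zip x).map { case (a, b) => a * b }.sum)

  def softmax (v: Vec): Vec = {
    val e = v.map (x => math.exp (x - v.max))
    val z = e.sum
    e.map (_ / z)
  }

  // run the recurrence over the whole sequence, collecting one Layer per time step
  def forward (u: Mat, w: Mat, v: Mat, xs: List[Vec], hiddenDim: Int): List[Layer] = {
    var sPrev = Array.fill (hiddenDim)(0.0)
    xs.map { x =>
      val s = (matVec (u, x) zip matVec (w, sPrev)).map { case (a, b) => math.tanh (a + b) }
      sPrev = s
      Layer (s, softmax (matVec (v, s)))
    }
  }
}
}}}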
- def getCause(): Throwable
  - Definition Classes: Throwable
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def getLocalizedMessage(): String
  - Definition Classes: Throwable
- def getMessage(): String
  - Definition Classes: Throwable
- def getStackTrace(): Array[StackTraceElement]
  - Definition Classes: Throwable
- final def getSuppressed(): Array[Throwable]
  - Definition Classes: Throwable
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def initCause(arg0: Throwable): Throwable
  - Definition Classes: Throwable
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def printStackTrace(arg0: PrintWriter): Unit
  - Definition Classes: Throwable
- def printStackTrace(arg0: PrintStream): Unit
  - Definition Classes: Throwable
- def printStackTrace(): Unit
  - Definition Classes: Throwable
- var rvm: RandomMatD
- def setStackTrace(arg0: Array[StackTraceElement]): Unit
  - Definition Classes: Throwable
- def sgd_step(x: VectoD, label: VectoD, learning_rate: Double): MatrixD
  Perform one stochastic gradient descent step.
  - x: the input data
  - label: the class labels (given output values)
  - learning_rate: the learning rate (gradient multiplier)
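One step presumably pairs the 'bptt' gradients with an in-place parameter update; a sketch on plain 2-D arrays (the real method returns a MatrixD):
{{{
object SgdStepSketch {
  type Mat = Array[Array[Double]]

  // param := param - rate * grad, element-wise and in place
  def update (param: Mat, grad: Mat, rate: Double): Unit =
    for (i <- param.indices; j <- param(i).indices)
      param(i)(j) -= rate * grad(i)(j)

  // given (dU, dW, dV) from bptt for one (x, label) pair:
  //   update (u, dU, rate); update (w, dW, rate); update (v, dV, rate)
}
}}}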
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: Throwable → AnyRef → Any
- def train(x: List[VectoD], label: List[VectoD], rate: Double = 500.0, nepoch: Int, eval_loss_after: Int = 5): Unit
  Train the model by iterating through the training set with SGD, adjusting the learning rate as it goes (see the sketch after this entry).
  - x: the input data
  - label: the class labels (given output values)
  - rate: the initial learning rate (gradient multiplier)
  - nepoch: the number of epochs
  - eval_loss_after: evaluate the loss every this many epochs
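A sketch of that loop: SGD over the corpus each epoch, with the loss evaluated periodically; halving the rate when the loss stops improving is a common heuristic and only an assumption about this implementation.
{{{
object TrainSketch {
  type Vec = Array[Double]

  def train (xs: List[Vec], labels: List[Vec],
             sgdStep: (Vec, Vec, Double) => Unit,     // e.g. the class's sgd_step
             totalLoss: () => Double,                 // e.g. calculate_total_loss
             rate0: Double, nepoch: Int, evalLossAfter: Int = 5): Unit = {
    var rate = rate0
    var prevLoss = Double.MaxValue
    for (epoch <- 1 to nepoch) {
      for ((x, y) <- xs zip labels) sgdStep (x, y, rate)
      if (epoch % evalLossAfter == 0) {
        val loss = totalLoss ()
        if (loss > prevLoss) rate /= 2.0              // assumed heuristic: back off on regression
        println (s"epoch $epoch: loss = $loss, rate = $rate")
        prevLoss = loss
      }
    }
  }
}
}}}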
- val u: MatrixD (the input parameter matrix U)
- val v: MatrixD (the output parameter matrix V)
- val w: MatrixD (the hidden-state parameter matrix W)
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
Deprecated Value Members
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] ) @Deprecated
  - Deprecated