scalation.modeling.neuralnet

Members list

Type members

Classlikes

class CNN_1D(x: MatrixD, y: MatrixD, fname_: Array[String], nf: Int, nc: Int, hparam: HyperParameter, f: AFF, f1: AFF, val itran: FunctionM2M) extends PredictorMV, Fit

The CNN_1D class implements a Convolutional Network model. The model is trained using a data matrix x and response matrix y.

Value parameters

f

the activation function family for layers 1->2 (input to hidden)

f1

the activation function family for layers 2->3 (hidden to output)

fname_

the feature/variable names (defaults to null)

hparam

the hyper-parameters for the model/network

itran

the inverse transformation function returns responses to original scale

nc

the width of the filters (size of cofilters)

nf

the number of filters for this convolutional layer

x

the input/data matrix with instances stored in rows

y

the output/response matrix, where y_i = response for row i of matrix x

Attributes

Companion
object
Supertypes
trait Fit
trait FitM
trait PredictorMV
trait Model
class Object
trait Matchable
class Any
Show all
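
A CNN_1D prediction can be pictured as: convolve each of the nf filters (of width nc) over the input, apply the activation f, flatten the resulting feature maps, and pass them through a dense output layer with activation f1. The sketch below illustrates that composition in plain Scala; the filter and weight values are made up and this is not the ScalaTion API.

// Minimal sketch of a CNN_1D-style forward pass (illustrative only, not the ScalaTion API)

// 'valid' 1D convolution of cofilter c over input vector x
def conv1d (x: Array[Double], c: Array[Double]): Array[Double] =
    Array.tabulate (x.length - c.length + 1) (i =>
        c.indices.map (j => x(i + j) * c(j)).sum)

@main def cNN_1DSketch (): Unit =
    val f  = (z: Double) => math.max (0.0, z)                         // ReLU for the convolutional layer
    val f1 = (z: Double) => 1.0 / (1.0 + math.exp (-z))               // sigmoid for the output layer

    val x       = Array (1.0, 2.0, 3.0, 4.0, 5.0)                     // one input instance (a row of x)
    val filters = Array (Array (0.5, -0.5, 0.5),                      // nf = 2 filters of width nc = 3 (made-up)
                         Array (0.2,  0.3, 0.1))
    val feat    = filters.flatMap (c => conv1d (x, c).map (f))        // convolve, activate, flatten
    val b       = Array.fill (feat.length)(0.1)                       // made-up output-layer weights
    val yp      = f1 (feat.zip (b).map { case (a, w) => a * w }.sum)  // yp = f1 (b dot feat)
    println (s"yp = $yp")
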
object CNN_1D

The CNN_1D companion object provides factory methods for creating 1D convolutional neural networks.

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
CNN_1D.type
class CoFilter_1D(width: Int)

The CoFilter_1D class provides a convolution filter (cofilter) for taking a weighted average over a window of an input vector.

Value parameters

width

the width of the cofilter

Attributes

Companion
object
Supertypes
class Object
trait Matchable
class Any
object CoFilter_1D

The CoFilter_1D object provides the convolution and pooling operators.

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
CoFilter_1D.type
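
The pooling operator mentioned above can be sketched as sliding a window over a feature vector and keeping the maximum (or average) of each window. The helper below is only an illustration of the idea; its name and signature are assumptions, not those exported by CoFilter_1D.

// Sketch of 1D pooling over a feature vector (illustrative; name and signature are assumptions)
def pool1d (v: Array[Double], s: Int, avg: Boolean = false): Array[Double] =
    v.grouped (s).map (w => if avg then w.sum / w.length else w.max).toArray

@main def coFilter_1DSketch (): Unit =
    val z = Array (1.0, 4.0, 2.0, 8.0, 5.0, 3.0)
    println (pool1d (z, 2).mkString (", "))              // max pooling, window 2 -> 4.0, 8.0, 5.0
    println (pool1d (z, 2, avg = true).mkString (", "))  // average pooling      -> 2.5, 5.0, 4.0
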
class ELM_3L1(x: MatrixD, y: VectorD, fname_: Array[String], var nz: Int, hparam: HyperParameter, f: AFF, val itran: FunctionV2V) extends Predictor, Fit

The ELM_3L1 class supports single-output, 3-layer (input, hidden and output) Extreme-Learning Machines. It can be used for both classification and prediction, depending on the activation functions used. Given several input vectors and output vectors (training data), fit the parameters a and b connecting the layers, so that for a new input vector v, the net can predict the output value, i.e., yp = b * f (a * v), where f is the activation function and a and b are the parameters between the input-hidden and hidden-output layers. Unlike Perceptron, which adds input x0 = 1 to account for the intercept/bias, ELM_3L1 explicitly adds bias.

Value parameters

f

the activation function family for layers 1->2 (input to hidden)

fname_

the feature/variable names (if null, use x_j's)

hparam

the hyper-parameters for the model/network

itran

the inverse transformation function returns responses to original scale

nz

the number of nodes in hidden layer (-1 => use default formula)

x

the m-by-n input matrix (training data consisting of m input vectors)

y

the m output vector (training data consisting of m output scalars)

Attributes

Companion
object
Supertypes
trait Fit
trait FitM
trait Predictor
trait Model
class Object
trait Matchable
class Any
Show all
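
The defining trait of an extreme learning machine is that the input-to-hidden parameters a are drawn at random and left fixed, while only the hidden-to-output parameters b are fitted (typically by least squares on the hidden-layer outputs). The sketch below shows the prediction equation yp = b dot f (a * v) for one input vector, with made-up values; it is not the ELM_3L1 API.

import scala.util.Random

// Sketch of the ELM prediction equation yp = b dot f (a * v) (illustrative only, not the ELM_3L1 API).
// The input-to-hidden weights a are random and fixed; only b would be fitted (e.g., by least squares).
@main def eLM_3L1Sketch (): Unit =
    val rng = new Random (42)
    val f   = (z: Double) => math.tanh (z)                            // hidden-layer activation
    val v   = Array (0.5, 1.5, -1.0)                                  // one input vector
    val a   = Array.fill (4, v.length)(rng.nextGaussian ())           // random 4-by-3 input-to-hidden weights
    val b   = Array (0.3, -0.2, 0.1, 0.4)                             // output weights (assumed already fitted)
    val h   = a.map (row => f (row.zip (v).map { case (w, x) => w * x }.sum))   // hidden outputs f (a * v)
    val yp  = b.zip (h).map { case (w, x) => w * x }.sum              // yp = b dot h
    println (s"yp = $yp")
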
object ELM_3L1 extends Scaling

The ELM_3L1 companion object provides factory methods for creating three-layer (one hidden layer) extreme learning machines. Note, 'scale' is defined in Scaling.

Attributes

Companion
class
Supertypes
trait Scaling
class Object
trait Matchable
class Any
Self type
ELM_3L1.type

The ExampleConcrete class stores a medium-sized example dataset from the UCI Machine Learning Repository, "Abstract: Concrete is a highly complex material. The slump flow of concrete is not only determined by the water content, but that is also influenced by other concrete ingredients."

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type
case class NetParam(var w: MatrixD, var b: VectorD)

The NetParam class bundles parameter weights and biases together.

Value parameters

b

the bias/intercept vector

w

the weight matrix

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
Show all
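
Applying a NetParam to an input amounts to multiplying by the weight matrix w and then adding the bias vector b. A toy analogue of that idea (not ScalaTion's NetParam implementation):

// Toy analogue of NetParam: multiply by the weights, then add the bias (not ScalaTion's implementation)
case class ToyNetParam (w: Array[Array[Double]], b: Array[Double]):
    def apply (x: Array[Double]): Array[Double] =                     // x * w + b for one input row
        Array.tabulate (b.length) (j => x.indices.map (i => x(i) * w(i)(j)).sum + b(j))

@main def netParamSketch (): Unit =
    val p = ToyNetParam (Array (Array (1.0, 0.0), Array (0.0, 2.0)), Array (0.5, -0.5))
    println (p (Array (3.0, 4.0)).mkString (", "))                    // -> 3.5, 7.5
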
class NeuralNet_2L(x: MatrixD, y: MatrixD, fname_: Array[String], hparam: HyperParameter, f: AFF, val itran: FunctionM2M) extends PredictorMV, Fit

The NeuralNet_2L class supports multi-output, 2-layer (input and output) Neural-Networks. It can be used for both classification and prediction, depending on the activation functions used. Given several input vectors and output vectors (training data), fit the weights/parameters b connecting the layers, so that for a new input vector z, the net can predict the output value, i.e., yp_j = f (b dot z), where f is the activation function and the parameter b gives the weights between the input and output layers. NOTE, b0 is treated as the bias, so x0 must be 1.0.

Value parameters

f

the activation function family for layers 1->2 (input to output)

fname_

the feature/variable names (if null, use x_j's)

hparam

the hyper-parameters for the model/network

itran

the inverse transformation function returns response matrix to original scale

x

the m-by-n input/data matrix (training data consisting of m input vectors)

y

the m-by-ny output/response matrix (training data consisting of m output vectors)

Attributes

Companion
object
Supertypes
trait Fit
trait FitM
trait PredictorMV
trait Model
class Object
trait Matchable
class Any
Show all
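
The forward step yp_j = f (b_j dot z), with z(0) = 1 so that b_j(0) plays the role of the bias, can be sketched as follows. The weights are made up and this is not the ScalaTion API.

// Sketch of the 2-layer forward step yp_j = f (b_j dot z), with z(0) = 1 so b_j(0) acts as the bias
// (illustrative only; weights are made up)
@main def neuralNet_2LSketch (): Unit =
    val f  = (t: Double) => 1.0 / (1.0 + math.exp (-t))          // sigmoid activation
    val z  = Array (1.0, 0.7, 0.2)                               // input with leading 1 for the intercept/bias
    val b  = Array (Array (0.1, 0.4, -0.3),                      // one weight vector per output (2 outputs)
                    Array (-0.2, 0.5, 0.6))
    val yp = b.map (bj => f (bj.zip (z).map { case (w, x) => w * x }.sum))
    println (yp.mkString (", "))
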
object NeuralNet_2L extends Scaling

The NeuralNet_2L companion object provides factory methods for creating two-layer (no hidden layer) neural networks. Note, 'scale' is defined in Scaling.

Attributes

Companion
class
Supertypes
trait Scaling
class Object
trait Matchable
class Any
Self type
NeuralNet_2L.type
class NeuralNet_3L(x: MatrixD, y: MatrixD, fname_: Array[String], var nz: Int, hparam: HyperParameter, f: AFF, f1: AFF, val itran: FunctionM2M) extends PredictorMV, Fit

The NeuralNet_3L class supports multi-output, 3-layer (input, hidden and output) Neural-Networks. It can be used for both classification and prediction, depending on the activation functions used. Given several input vectors and output vectors (training data), fit the parameters a and b connecting the layers, so that for a new input vector v, the net can predict the output value, i.e., yp = f1 (b * f (a * v)), where f and f1 are the activation functions and a and b are the parameters between the input-hidden and hidden-output layers. Unlike NeuralNet_2L which adds input x0 = 1 to account for the intercept/bias, NeuralNet_3L explicitly adds bias.

Value parameters

f

the activation function family for layers 1->2 (input to output)

f1

the activation function family for layers 2->3 (hidden to output)

fname_

the feature/variable names (if null, use x_j's)

hparam

the hyper-parameters for the model/network

itran

the inverse transformation function returns response matrix to original scale

nz

the number of nodes in hidden layer (-1 => use default formula)

x

the m-by-n input/data matrix (training data consisting of m input vectors)

y

the m-by-ny output/response matrix (training data consisting of m output vectors)

Attributes

Companion
object
Supertypes
trait Fit
trait FitM
trait PredictorMV
trait Model
class Object
trait Matchable
class Any
Show all
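
The forward pass yp = f1 (b * f (a * v)) with explicit biases at both layers can be sketched in plain Scala as below; the layer sizes and values are made up for illustration and this is not the ScalaTion API.

// Sketch of the 3-layer forward pass yp = f1 (b * f (a * v)) with explicit biases (illustrative only)
@main def neuralNet_3LSketch (): Unit =
    val f  = (t: Double) => math.tanh (t)                              // input-to-hidden activation
    val f1 = (t: Double) => t                                          // hidden-to-output activation (id)

    def affine (w: Array[Array[Double]], bias: Array[Double], v: Array[Double]): Array[Double] =
        Array.tabulate (bias.length) (j => v.indices.map (i => v(i) * w(i)(j)).sum + bias(j))

    val v  = Array (0.5, -1.0)                                         // one input vector
    val a  = Array (Array (0.2, 0.4, -0.1), Array (0.3, -0.5, 0.7))    // 2 inputs -> nz = 3 hidden nodes
    val ab = Array (0.1, 0.1, 0.1)                                     // hidden-layer bias
    val b  = Array (Array (0.6), Array (-0.4), Array (0.2))            // 3 hidden -> 1 output
    val bb = Array (0.05)                                              // output-layer bias
    val h  = affine (a, ab, v).map (f)                                 // hidden activations f (a * v)
    val yp = affine (b, bb, h).map (f1)                                // outputs f1 (b * h)
    println (yp.mkString (", "))
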
object NeuralNet_3L extends Scaling

The NeuralNet_3L companion object provides factory methods for creating three-layer (one hidden layer) neural networks. Note, 'scale' is defined in Scaling.

Attributes

Companion
class
Supertypes
trait Scaling
class Object
trait Matchable
class Any
Self type
NeuralNet_3L.type
class NeuralNet_XL(x: MatrixD, y: MatrixD, fname_: Array[String], var nz: Array[Int], hparam: HyperParameter, f: Array[AFF], val itran: FunctionM2M) extends PredictorMV, Fit

The NeuralNet_XL class supports multi-output, X-layer (input, hidden(+) and output) Neural-Networks. It can be used for both classification and prediction, depending on the activation functions used. Given several input vectors and output vectors (training data), fit the parameters [b] connecting the layers, so that for a new input vector v, the net can predict the output value, e.g., yp = f3 (c * f2 (b * f (a * v))), where f, f2 and f3 are the activation functions and the parameters a, b and c connect the input-hidden1, hidden1-hidden2 and hidden2-output layers. Unlike NeuralNet_2L which adds input x0 = 1 to account for the intercept/bias, NeuralNet_XL explicitly adds bias. Defaults to two hidden layers. This implementation is partially adapted from Michael Nielsen's Python implementation (see the link below).

Value parameters

f

the array of activation function families between every pair of layers

fname_

the feature/variable names (if null, use x_j's)

hparam

the hyper-parameters for the model/network

itran

the inverse transformation function returns response matrix to original scale

nz

the number of nodes in each hidden layer, e.g., Array (9, 8) => 2 hidden of sizes 9 and 8 (null => use default formula)

x

the m-by-n input/data matrix (training data consisting of m input vectors)

y

the m-by-ny output/response matrix (training data consisting of m output vectors)

Attributes

See also

github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py

Companion
object
Supertypes
trait Fit
trait FitM
trait PredictorMV
trait Model
class Object
trait Matchable
class Any
Show all
Known subtypes
class NeuralNet_XLT
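
The layer-by-layer composition described above amounts to folding the input through a sequence of (weight matrix, bias vector, activation) triples. A minimal sketch with made-up layer sizes and values (not the NeuralNet_XL API):

// Sketch of an X-layer forward pass: fold the input through (weights, bias, activation) triples
// (illustrative only; layer sizes, values and names are made up)
@main def neuralNet_XLSketch (): Unit =
    type Layer = (Array[Array[Double]], Array[Double], Double => Double)       // (w, b, f)

    def affine (w: Array[Array[Double]], b: Array[Double], v: Array[Double]): Array[Double] =
        Array.tabulate (b.length) (j => v.indices.map (i => v(i) * w(i)(j)).sum + b(j))

    val relu = (t: Double) => math.max (0.0, t)
    val id   = (t: Double) => t

    val layers: Array[Layer] = Array (
        (Array (Array (0.3, -0.2), Array (0.1, 0.4)), Array (0.0, 0.1), relu),   // input -> hidden1
        (Array (Array (0.5), Array (-0.6)),           Array (0.05),     id))     // hidden1 -> output

    val v  = Array (1.0, 2.0)
    val yp = layers.foldLeft (v) { case (z, (w, b, f)) => affine (w, b, z).map (f) }
    println (yp.mkString (", "))
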
object NeuralNet_XL extends Scaling

The NeuralNet_XL companion object provides factory methods for creating multi-layer (one+ hidden layers) neural networks. Note, 'scale' is defined in Scaling.

Attributes

Companion
class
Supertypes
trait Scaling
class Object
trait Matchable
class Any
Self type
NeuralNet_XL.type
class NeuralNet_XLT(x: MatrixD, y: MatrixD, fname_: Array[String], nz: Array[Int], hparam: HyperParameter, f: Array[AFF], l_tran: Int, transfer: NetParam, itran: FunctionM2M) extends NeuralNet_XL

The NeuralNet_XLT class supports multi-output, multi-layer (input, {hidden} and output) Neural-Networks with Transfer Learning. A layer (the first hidden layer by default) from a neural-network model trained on a related dataset is transferred into that position in this model. Given several input vectors and output vectors (training data), fit the parameters b connecting the layers, so that for a new input vector v, the net can predict the output vector. Caveat: currently only allows the transfer of one layer.

Value parameters

f

the array of activation function families between every pair of layers

fname_

the feature/variable names (defaults to null)

hparam

the hyper-parameters for the model/network

itran

the inverse transformation function returns responses to original scale

l_tran

the layer to be transferred in (defaults to first hidden layer)

nz

the number of nodes in each hidden layer, e.g., Array (9, 8) => 2 hidden of sizes 9 and 8 (null => use default formula)

transfer

the saved network parameters from a layer of a related neural network trim before passing in if the size does not match

x

the m-by-n input matrix (training data consisting of m input vectors)

y

the m-by-ny output matrix (training data consisting of m output vectors)

Attributes

Companion
object
Supertypes
class NeuralNet_XL
trait Fit
trait FitM
trait PredictorMV
trait Model
class Object
trait Matchable
class Any
Show all
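
Transfer learning as described above can be sketched as: build the parameter array for the new network, but at position l_tran reuse the saved parameters from the related model instead of fresh initial values. The ToyParam type and the values below are illustrative assumptions, not the NeuralNet_XLT API.

// Sketch of the transfer-learning idea: reuse one trained layer at position l_tran, initialize the rest
// (ToyParam and all values are illustrative assumptions)
@main def neuralNet_XLTSketch (): Unit =
    case class ToyParam (w: Array[Array[Double]], b: Array[Double])

    def freshLayer (nIn: Int, nOut: Int): ToyParam =
        ToyParam (Array.fill (nIn, nOut)(0.01), Array.fill (nOut)(0.0))

    val sizes  = Array (4, 9, 8, 1)                                        // input, hidden1, hidden2, output
    val saved  = ToyParam (Array.fill (4, 9)(0.5), Array.fill (9)(0.1))    // layer saved from a related model
    val l_tran = 0                                                         // transfer into the first hidden layer

    val params = Array.tabulate (sizes.length - 1) (l =>
        if l == l_tran then saved else freshLayer (sizes(l), sizes(l + 1)))
    println (params.map (p => s"${p.w.length}x${p.w.head.length}").mkString (" -> "))   // 4x9 -> 9x8 -> 8x1
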
object NeuralNet_XLT extends Scaling

The NeuralNet_XLT companion object provides factory methods for creating multi-layer (one+ hidden layers) neural networks supporting transfer learning. Note, 'scale' is defined in Scaling.

Attributes

Companion
class
Supertypes
trait Scaling
class Object
trait Matchable
class Any
Self type
NeuralNet_XLT.type
object Optimizer

The Optimizer object gives defaults for hyper-parameters as well as other adjustable program constants.

Attributes

Companion
trait
Supertypes
class Object
trait Matchable
class Any
Self type
Optimizer.type

trait Optimizer

The Optimizer trait provides methods to optimize and auto_optimize parameters. Given training data x and y for a Neural Network, fit the parameters b.

Attributes

Companion
object
Supertypes
trait StoppingRule
trait MonitorLoss
class Object
trait Matchable
class Any
Known subtypes
class Optimizer_Adam
class Optimizer_SGD
class Optimizer_SGDM
class Optimizer_Adam extends Optimizer

The Optimizer_Adam class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements the Adam (adaptive moment estimation) algorithm.

Attributes

See also
Supertypes
trait Optimizer
trait StoppingRule
trait MonitorLoss
class Object
trait Matchable
class Any
Show all
class Optimizer_SGD extends Optimizer

The Optimizer_SGD class provides methods to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Stochastic Gradient Descent algorithm.

Attributes

Supertypes
trait Optimizer
trait StoppingRule
trait MonitorLoss
class Object
trait Matchable
class Any
Show all
class Optimizer_SGDM extends Optimizer

The Optimizer_SGDM class provides functions to optimize the parameters (weights and biases) of Neural Networks with various numbers of layers. This optimizer implements a Stochastic Gradient Descent with Momentum algorithm.

Attributes

Supertypes
trait Optimizer
trait StoppingRule
trait MonitorLoss
class Object
trait Matchable
class Any
Show all
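
One common form of the stochastic gradient descent with momentum update keeps a running (exponentially weighted) average of the gradients and steps the parameters against it. The sketch below minimizes a toy quadratic with that rule; the exact update and hyper-parameter handling in Optimizer_SGDM may differ.

// Sketch of a common SGD-with-momentum update on a toy quadratic (illustrative only)
@main def sgdmSketch (): Unit =
    val eta  = 0.1                                            // learning rate
    val beta = 0.9                                            // momentum coefficient
    var w    = Array (1.0, -2.0)                              // parameters to optimize
    var v    = Array (0.0, 0.0)                               // velocity (running average of gradients)

    def gradient (p: Array[Double]): Array[Double] = p.map (2.0 * _)   // gradient of f(p) = p dot p

    for it <- 1 to 200 do
        val g = gradient (w)
        v = v.zip (g).map { case (vi, gi) => beta * vi + (1.0 - beta) * gi }
        w = w.zip (v).map { case (wi, vi) => wi - eta * vi }
    println (w.mkString (", "))                               // converges toward 0, 0
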
trait PredictorMV(x: MatrixD, y: MatrixD, var fname: Array[String], hparam: HyperParameter) extends Model

The PredictorMV trait provides a framework for multiple predictive analytics techniques, e.g., Multi-variate Regression and Neural Networks. x is multi-dimensional [1, x_1, ... x_k] and so is y. Fit the NetParam parameters bb in, for example, the regression equation y = f(bb dot x) + e, where bb is an array of NetParam, each component consisting of a weight matrix and a bias vector.

Value parameters

fname

the feature/variable names (if null, use x_j's)

hparam

the hyper-parameters for the model/network

x

the input/data m-by-n matrix (augment with a first column of ones to include intercept in model or use bias)

y

the response/output m-by-ny matrix

Attributes

See also

NetParam

Companion
object
Supertypes
trait Model
class Object
trait Matchable
class Any
Known subtypes
class CNN_1D
class NeuralNet_2L
class NeuralNet_3L
class NeuralNet_XL
class RegressionMV
Show all
object PredictorMV

The PredictorMV companion object provides a method for testing predictive models.

Attributes

Companion
trait
Supertypes
class Object
trait Matchable
class Any
Self type
PredictorMV.type
class RegressionMV(x: MatrixD, y: MatrixD, fname_: Array[String], hparam: HyperParameter) extends PredictorMV, Fit

The RegressionMV class supports multi-variate multiple linear regression. In this case, x is multi-dimensional [1, x_1, ... x_k] and y is multi-dimensional [y_0, ... y_l]. Fit the parameter vector b for each regression equation

y = b dot x + e = b_0 + b_1 * x_1 + ... b_k * x_k + e

where e represents the residuals (the part not explained by the model). Use Least-Squares (minimizing the residuals) to solve for the parameter vector b using the Normal Equations:

x.t * x * b = x.t * y
b = fac.solve (.)

Five factorization algorithms are provided:

Fac_QR QR Factorization: slower, more stable (default)
Fac_SVD Singular Value Decomposition: slowest, most robust
Fac_Cholesky Cholesky Factorization: faster, less stable (reasonable choice)
Fac_LU LU Factorization: better than Inverse
Fac_Inverse Inverse Factorization: textbook approach

Value parameters

fname_

the feature/variable names (defaults to null)

hparam

the hyper-parameters (defaults to Regression.hp)

x

the data/input m-by-n matrix (augment with a first column of ones to include intercept in model)

y

the response/output m-by-ny matrix

Attributes

See also

see.stanford.edu/materials/lsoeldsee263/05-ls.pdf Note, not intended for use when the number of degrees of freedom 'df' is negative.

Companion
object
Supertypes
trait Fit
trait FitM
trait PredictorMV
trait Model
class Object
trait Matchable
class Any
Show all
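
For a tiny 2-parameter model the Normal Equations x.t * x * b = x.t * y can even be solved in closed form, once per response column, which is what the sketch below does. RegressionMV itself delegates the solve to a matrix factorization (QR by default); the closed-form 2-by-2 solve is only for illustration.

// Sketch of the normal equations solved per response column for a tiny 2-parameter model (illustrative only)
@main def regressionMVSketch (): Unit =
    val x = Array (Array (1.0, 1.0), Array (1.0, 2.0), Array (1.0, 3.0))   // m-by-2, first column = intercept
    val y = Array (Array (2.0, 1.0), Array (4.0, 2.0), Array (6.0, 3.0))   // m-by-2 response matrix

    def col (m: Array[Array[Double]], j: Int) = m.map (_(j))
    def dot (a: Array[Double], b: Array[Double]) = a.zip (b).map { case (u, v) => u * v }.sum

    val x0 = col (x, 0); val x1 = col (x, 1)
    val xtx = Array (Array (dot (x0, x0), dot (x0, x1)),                   // x.t * x (2-by-2)
                     Array (dot (x1, x0), dot (x1, x1)))
    val det = xtx(0)(0) * xtx(1)(1) - xtx(0)(1) * xtx(1)(0)

    val b = (0 until y.head.length).map { k =>                             // solve for each response column
        val yk  = col (y, k)
        val xty = Array (dot (x0, yk), dot (x1, yk))                       // x.t * y_k
        val b0  = (xtx(1)(1) * xty(0) - xtx(0)(1) * xty(1)) / det          // Cramer's rule
        val b1  = (xtx(0)(0) * xty(1) - xtx(1)(0) * xty(0)) / det
        (b0, b1)
    }
    println (b.mkString ("; "))                                            // ~ (0, 2); (0, 1)
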
object RegressionMV

The RegressionMV companion object provides factory methods for creating multi-variate regression models.

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
RegressionMV.type
trait StoppingRule

The StoppingRule trait provides stopping rules for terminating the iterative steps of an optimization early.

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
trait Optimizer
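
A typical stopping rule is patience-based: stop once the loss has failed to improve for a fixed number of consecutive steps. The sketch below illustrates that idea on a made-up loss sequence; the actual rules provided by StoppingRule may differ.

// Sketch of a patience-based early-stopping rule on a made-up loss sequence (illustrative only)
@main def stoppingRuleSketch (): Unit =
    val losses   = Array (1.0, 0.8, 0.7, 0.69, 0.695, 0.70, 0.71, 0.72)    // loss per epoch (made up)
    val patience = 2
    var best     = Double.MaxValue
    var badSteps = 0
    var stopAt   = losses.length

    for (l, epoch) <- losses.zipWithIndex if stopAt == losses.length do
        if l < best then
            best = l
            badSteps = 0
        else
            badSteps += 1
            if badSteps >= patience then stopAt = epoch
    println (s"stop at epoch $stopAt with best loss $best")
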
final class cNN_1DTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class cNN_1DTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class cNN_1DTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class coFilter_1DTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class coFilter_1DTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class eLM_3L1Test

Attributes

Supertypes
class Object
trait Matchable
class Any
final class eLM_3L1Test2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class eLM_3L1Test3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class eLM_3L1Test4

Attributes

Supertypes
class Object
trait Matchable
class Any
final class eLM_3L1Test5

Attributes

Supertypes
class Object
trait Matchable
class Any
final class eLM_3L1Test6

Attributes

Supertypes
class Object
trait Matchable
class Any
final class example_ConcreteTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class example_ConcreteTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class example_ConcreteTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest4

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest5

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest6

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest7

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_2LTest8

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest10

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest11

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest4

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest5

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest6

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest7

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest8

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_3LTest9

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTest4

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTest5

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTest6

Attributes

Supertypes
class Object
trait Matchable
class Any
final class neuralNet_XLTest7

Attributes

Supertypes
class Object
trait Matchable
class Any
final class predictorMVTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class regressionMVTest

Attributes

Supertypes
class Object
trait Matchable
class Any
final class regressionMVTest2

Attributes

Supertypes
class Object
trait Matchable
class Any
final class regressionMVTest3

Attributes

Supertypes
class Object
trait Matchable
class Any
final class regressionMVTest4

Attributes

Supertypes
class Object
trait Matchable
class Any
final class regressionMVTest5

Attributes

Supertypes
class Object
trait Matchable
class Any

Types

type NetParams = Array[NetParam]

Value members

Concrete methods

def cNN_1DTest(): Unit

The cNN_1DTest main function is used to test the CNN_1D class. Test using the simple example from section 11.10 of ScalaTion textbook. Perform four training steps.

runMain scalation.modeling.neuralnet.cNN_1DTest

Attributes

def cNN_1DTest2(): Unit

The cNN_1DTest2 main function is used to test the CNN_1D class using the AutoMPG dataset.

runMain scalation.modeling.neuralnet.cNN_1DTest2

Attributes

def cNN_1DTest3(): Unit

The cNN_1DTest3 main function is used to test the CNN_1D class for the convolutional operator.

runMain scalation.modeling.neuralnet.cNN_1DTest3

Attributes

def coFilter_1DTest(): Unit

The coFilter_1DTest main function is used to test the CoFilter_1D class. Test using the simple example from CNN_1D section of the ScalaTion textbook.

runMain scalation.modeling.neuralnet.coFilter_1DTest

Attributes

def coFilter_1DTest2(): Unit

The coFilter_1DTest2 main function is used to test the CoFilter_1D class's convolutional operator.

runMain scalation.modeling.neuralnet.coFilter_1DTest2

Attributes

def eLM_3L1Test(): Unit

The eLM_3L1Test main function tests the multi-collinearity method in the ELM_3L1 class using the following regression equation on the Blood Pressure dataset. It also applies forward selection and backward elimination. y = b dot x = b_0 + b_1 * x_1 + b_2 * x_2 + b_3 * x_3 + b_4 * x_4

Attributes

See also

online.stat.psu.edu/online/development/stat501/12multicollinearity/05multico_vif.html

online.stat.psu.edu/online/development/stat501/data/bloodpress.txt

runMain scalation.modeling.neuralnet.eLM_3L1Test

def eLM_3L1Test2(): Unit

The eLM_3L1Test2 main function tests an extreme learning machine on the Example_BasketBall dataset, comparing its QoF with the QoF for Regression.

runMain scalation.modeling.neuralnet.eLM_3L1Test2

Attributes

def eLM_3L1Test3(): Unit

The eLM_3L1Test3 main function tests an extreme learning machine on the Example_AutoMPG dataset, comparing its QoF with the QoF for Regression.

runMain scalation.modeling.neuralnet.eLM_3L1Test3

Attributes

def eLM_3L1Test4(): Unit

The eLM_3L1Test4 main function tests an extreme learning machine on the Example_AutoMPG dataset. It tests cross-validation.

runMain scalation.modeling.neuralnet.eLM_3L1Test4

Attributes

def eLM_3L1Test5(): Unit

The eLM_3L1Test5 main function tests an extreme learning machine on the Example_AutoMPG dataset. It tests forward feature/variable selection.

runMain scalation.modeling.neuralnet.eLM_3L1Test5

Attributes

def eLM_3L1Test6(): Unit

The eLM_3L1Test6 main function tests an extreme learning machine on the Example_AutoMPG dataset. It tests forward feature/variable selection with plotting of R^2.

runMain scalation.modeling.neuralnet.eLM_3L1Test6

Attributes

def example_ConcreteTest(): Unit

The example_ConcreteTest main function tests the Example_Concrete object. These test cases compare several modeling techniques. This one runs Regression and RegressionMV.

runMain scalation.modeling.neuralnet.example_ConcreteTest

Attributes

def example_ConcreteTest2(): Unit

The example_ConcreteTest2 main function is used to test the ExampleConcrete object. These test cases compare several modeling techniques. This one runs Perceptron with trainNtest (manual hyper-parameter tuning).

runMain scalation.modeling.neuralnet.example_ConcreteTest2

Attributes

def example_ConcreteTest3(): Unit

The example_ConcreteTest3 main function is used to test the ExampleConcrete object. These test cases compare several modeling techniques. This one runs NeuralNet_2L (which is effectively multiple perceptrons) with trainNtest2 (partially automated hyper-parameter tuning). Compare the overall R^2 for these three test cases.

runMain scalation.modeling.neuralnet.example_ConcreteTest3

Attributes

def neuralNet_2LTest(): Unit

The neuralNet_2LTest main function is used to test the NeuralNet_2L class. Try changing the eta and bSize hyper-parameters, as well as the activation function.

runMain scalation.modeling.neuralnet.neuralNet_2LTest

Attributes

def neuralNet_2LTest2(): Unit

The neuralNet_2LTest2 main function tests the NeuralNet_2L class using the Concrete dataset. It has three outputs/response variables. There are two ways to create the model:

new NeuralNet_2L (ox, y, ox_fname) - depending on the activation function, the user must rescale
NeuralNet_2L.rescale (ox, y, ox_fname) - automatically rescales, assumes matrix response

runMain scalation.modeling.neuralnet.neuralNet_2LTest2

Attributes

def neuralNet_2LTest3(): Unit

The neuralNet_2LTest3 main function tests the NeuralNet_2L class using the AutoMPG dataset. There are three ways to create the model:

new NeuralNet_2L (ox, yy, ox_fname) - depending on the activation function, the user must rescale
NeuralNet_2L.rescale (ox, yy, ox_fname) - automatically rescales, assumes matrix response
NeuralNet_2L.perceptron (ox, y, ox_fname) - automatically rescales, assumes vector response

runMain scalation.modeling.neuralnet.neuralNet_2LTest3

Attributes

def neuralNet_2LTest4(): Unit

The neuralNet_2LTest4 main function tests the NeuralNet_2L class using the AutoMPG dataset. It tests forward selection.

runMain scalation.modeling.neuralnet.neuralNet_2LTest4

Attributes

def neuralNet_2LTest5(): Unit

The neuralNet_2LTest5 main function tests the NeuralNet_2L class using the AutoMPG dataset. It tests forward, backward and stepwise selection.

runMain scalation.modeling.neuralnet.neuralNet_2LTest5

Attributes

def neuralNet_2LTest6(): Unit

The neuralNet_2LTest6 main function tests the NeuralNet_2L class using the AutoMPG dataset. It tries all activation functions.

runMain scalation.modeling.neuralnet.neuralNet_2LTest6

Attributes

def neuralNet_2LTest7(): Unit

The neuralNet_2LTest7 main function is used to test the NeuralNet_2L class. It tests a simple case that does not require a file to be read.

Attributes

See also

translate.google.com/translate?hl=en&sl=zh-CN&u=https: //www.hrwhisper.me/machine-learning-decision-tree/&prev=search

runMain scalation.modeling.neuralnet.neuralNet_2LTest7

def neuralNet_2LTest8(): Unit

The neuralNet_2LTest8 main function is used to test the NeuralNet_2L class. It compares NeuralNet_2L.perceptron using sigmoid with TransRegression using logit.

runMain scalation.modeling.neuralnet.neuralNet_2LTest8

Attributes

def neuralNet_3LTest(): Unit

The neuralNet_3LTest main function is used to test the NeuralNet_3L class. Try changing the eta and bSize hyper-parameters, as well as the activation function.

runMain scalation.modeling.neuralnet.neuralNet_3LTest

Attributes

def neuralNet_3LTest10(): Unit

The neuralNet_3LTest10 main function is used to test the NeuralNet_3L class. It uses the matrix equations from section 10.7.5 on the example problem from sections 10.7.8 and 10.7.11, exercises 1 and 2. Tests 9 instances.

runMain scalation.modeling.neuralnet.neuralNet_3LTest10

Attributes

def neuralNet_3LTest11(): Unit

The neuralNet_3LTest11 main function is used to test the NeuralNet_3L class. Tests 9 instances for comparison with AutoDiff.

Attributes

See also

scalation.calculus.AutoDiff

runMain scalation.modeling.neuralnet.neuralNet_3LTest11

def neuralNet_3LTest2(): Unit

The neuralNet_3LTest2 main function tests the NeuralNet_3L class using the Concrete dataset. It has three outputs/response variables. There are two ways to create the model:

new NeuralNet_3L (x, y, x_fname) - depending on the activation function, the user must rescale
NeuralNet_3L.rescale (x, y, x_fname) - automatically rescales, assumes matrix response

runMain scalation.modeling.neuralnet.neuralNet_3LTest2

Attributes

def neuralNet_3LTest3(): Unit

The neuralNet_3LTest3 main function tests the NeuralNet_3L class using the AutoMPG dataset. There are two ways to create the model:

new NeuralNet_3L (x, yy, x_fname) - depending on the activation function, the user must rescale
NeuralNet_3L.rescale (x, yy, x_fname) - automatically rescales, assumes matrix response

runMain scalation.modeling.neuralnet.neuralNet_3LTest3

Attributes

def neuralNet_3LTest4(): Unit

The neuralNet_3LTest4 main function tests the NeuralNet_3L class using the AutoMPG dataset. It tests forward selection.

runMain scalation.modeling.neuralnet.neuralNet_3LTest4

Attributes

def neuralNet_3LTest5(): Unit

The neuralNet_3LTest5 main function tests the NeuralNet_3L class using the AutoMPG dataset. It tests forward, backward and stepwise selection.

runMain scalation.modeling.neuralnet.neuralNet_3LTest5

Attributes

def neuralNet_3LTest6(): Unit

The neuralNet_3LTest6 main function tests the NeuralNet_3L class using the AutoMPG dataset. It tries all activation functions of the form (f, id). Ideally, eta should be initialized separately for each activation function.

runMain scalation.modeling.neuralnet.neuralNet_3LTest6

Attributes

def neuralNet_3LTest7(): Unit

The neuralNet_3LTest7 main function tests the NeuralNet_3L class using the AutoMPG dataset. It uses the best combination of two features, weight and modelyear.

runMain scalation.modeling.neuralnet.neuralNet_3LTest7

Attributes

def neuralNet_3LTest8(): Unit

The neuralNet_3LTest8 main function is used to test the NeuralNet_3L class. It tests a simple case that does not require a file to be read.

Attributes

See also

translate.google.com/translate?hl=en&sl=zh-CN&u=https: //www.hrwhisper.me/machine-learning-decision-tree/&prev=search

runMain scalation.modeling.neuralnet.neuralNet_3LTest8

def neuralNet_3LTest9(): Unit

The neuralNet_3LTest9 main function is used to test the NeuralNet_3L class. It uses the matrix equations from section 10.7.5 on the example problem from sections 10.7.8 and 10.7.11, exercises 1 and 2. Tests 1 instance.

runMain scalation.modeling.neuralnet.neuralNet_3LTest9

Attributes

def neuralNet_XLTTest(): Unit

The neuralNet_XLTTest object trains a neural network on the Example_AutoMPG dataset. This test case does not use transfer learning. A Neural Network with 2 hidden layers is created.

runMain scalation.modeling.neuralnet.neuralNet_XLTTest

Attributes

def neuralNet_XLTTest2(): Unit

The neuralNet_XLTTest2 object trains a neural network on the Example_AutoMPG dataset. This test case uses transfer learning. A Neural Network with 2 hidden layers is created with the first hidden layer being transferred from a model trained on related data. FIX: find a related dataset for 'nn0'.

runMain scalation.modeling.neuralnet.neuralNet_XLTTest2

Attributes

def neuralNet_XLTest(): Unit

The neuralNet_XLTest main function is used to test the NeuralNet_XL class. Try changing the eta and bSize hyper-parameters, as well as the activation function.

runMain scalation.modeling.neuralnet.neuralNet_XLTest

Attributes

def neuralNet_XLTest2(): Unit

The neuralNet_XLTest2 main function tests the NeuralNet_XL class using the Concrete dataset.

runMain scalation.modeling.neuralnet.neuralNet_XLTest2

Attributes

def neuralNet_XLTest3(): Unit

The neuralNet_XLTest3 main function tests the NeuralNet_XL class using the AutoMPG dataset. There are two ways to create the model:

new NeuralNet_XL (x, yy, x_fname) - depending on the activation function, the user must rescale
NeuralNet_XL.rescale (x, yy, x_fname) - automatically rescales, assumes matrix response

runMain scalation.modeling.neuralnet.neuralNet_XLTest3

Attributes

def neuralNet_XLTest4(): Unit

The neuralNet_XLTest4 main function tests the NeuralNet_XL class using the AutoMPG dataset. It tests forward selection.

runMain scalation.modeling.neuralnet.neuralNet_XLTest4

Attributes

def neuralNet_XLTest5(): Unit

The neuralNet_XLTest5 main function tests the NeuralNet_XL class using the AutoMPG dataset. It tests forward, backward and stepwise selection.

runMain scalation.modeling.neuralnet.neuralNet_XLTest5

Attributes

def neuralNet_XLTest6(): Unit

The neuralNet_XLTest6 main function tests the NeuralNet_XL class using the AutoMPG dataset. It tries all activation function combinations of the form (f, g, id). Ideally, eta should be initialized separately for each activation function.

runMain scalation.modeling.neuralnet.neuralNet_XLTest6

Attributes

def neuralNet_XLTest7(): Unit

The neuralNet_XLTest7 main function is used to test the NeuralNet_XL class. It tests a simple case that does not require a file to be read.

Attributes

See also

translate.google.com/translate?hl=en&sl=zh-CN&u=https: //www.hrwhisper.me/machine-learning-decision-tree/&prev=search

runMain scalation.modeling.neuralnet.neuralNet_XLTest7

def predictorMVTest(): Unit

The predictorMVTest main function is used to test the PredictorMV trait and its derived classes using the Example_Concrete dataset containing data matrices x, ox and response matrix y.

runMain scalation.modeling.neuralnet.predictorMVTest

Attributes

def regressionMVTest(): Unit

The regressionMVTest main function is used to test the RegressionMV class.

runMain scalation.modeling.neuralnet.regressionMVTest

Attributes

def regressionMVTest2(): Unit

The regressionMVTest2 main function tests the RegressionMV class using the Concrete dataset.

runMain scalation.modeling.neuralnet.regressionMVTest2

Attributes

def regressionMVTest3(): Unit

The regressionMVTest3 main function tests the RegressionMV class using the AutoMPG dataset.

runMain scalation.modeling.neuralnet.regressionMVTest3

Attributes

def regressionMVTest4(): Unit

The regressionMVTest4 main function tests the RegressionMV class using the AutoMPG dataset. It tests forward selection.

runMain scalation.modeling.neuralnet.regressionMVTest4

Attributes

def regressionMVTest5(): Unit

The regressionMVTest5 main function tests the RegressionMV class using the AutoMPG dataset. It tests forward, backward and stepwise selection.

runMain scalation.modeling.neuralnet.regressionMVTest5

Attributes

Extensions

extension (b: MatrixD | NetParam)
def *(x: MatrixD): MatrixD

Extension method that allows dot products and multiplication to be handled for either case.

Value parameters

b

the parameters (weights and biases)

Attributes

infix def dot(x: VectorD): VectorD

Extension method that allows dot products and multiplication to be handled for either case.

Value parameters

b

the parameters (weights and biases)

Attributes
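
Such an extension over a union type works by pattern-matching on the runtime case, so dot (and likewise *) can be applied uniformly whether the left operand is a bare weight matrix or a weight-plus-bias bundle. The sketch below mimics the idea with toy stand-ins for MatrixD and NetParam; it is not the actual ScalaTion code.

// Sketch of a union-type extension dispatching on the runtime case (toy stand-ins, not MatrixD / NetParam)
case class ToyMat (a: Array[Array[Double]])
case class ToyPar (w: ToyMat, b: Array[Double])

extension (p: ToyMat | ToyPar)
    infix def dot (x: Array[Double]): Array[Double] = p match
        case ToyMat (a)    => a.map (row => row.zip (x).map { case (u, v) => u * v }.sum)
        case ToyPar (w, b) => (w dot x).zip (b).map { case (u, v) => u + v }      // w * x + b

@main def extensionSketch (): Unit =
    val m = ToyMat (Array (Array (1.0, 0.0), Array (0.0, 1.0)))
    println ((m dot Array (3.0, 4.0)).mkString (", "))                              // 3.0, 4.0
    println ((ToyPar (m, Array (1.0, 1.0)) dot Array (3.0, 4.0)).mkString (", "))   // 4.0, 5.0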