Probability

scalation.mathstat.Probability
object Probability

The Probability object provides methods for operating on univariate and bivariate probability distributions of discrete random variables X and Y. A probability distribution is specified by its probability mass function (pmf), stored either as a "probability vector" for a univariate distribution or a "probability matrix" for a bivariate distribution.

joint probability matrix: pxy(i, j) = P(X = x_i, Y = y_j)
marginal probability vector: px(i) = P(X = x_i)
conditional probability matrix: px_y(i, j) = P(X = x_i | Y = y_j)

In addition to computing joint, marginal and conditional probabilities, methods for computing entropy and mutual information are also provided. Entropy provides a measure of disorder or randomness: if there is little randomness, entropy will be close to 0, while when randomness is high, entropy will be close to its maximum, e.g., log2 (px.dim). Mutual information provides a robust measure of the dependency between random variables (contrast with correlation).
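For intuition, the entropy computation can be sketched in a few lines of plain Scala, with Seq[Double] standing in for ScalaTion's VectorD (an assumption made to keep the sketch self-contained):

```scala
// Sketch of base-2 entropy: H(X) = -sum_i px(i) * log2 px(i).
// Seq[Double] stands in for ScalaTion's VectorD (an assumption).
def entropy(px: Seq[Double]): Double =
  -px.filter(_ > 0.0).map(p => p * math.log(p) / math.log(2.0)).sum

val uniform = Seq(0.25, 0.25, 0.25, 0.25)   // maximum randomness: H = log2(4) = 2
val peaked  = Seq(0.97, 0.01, 0.01, 0.01)   // little randomness: H close to 0
```

A uniform distribution over 4 values attains the maximum log2 (px.dim) = 2 bits, while the peaked distribution's entropy falls well below 1 bit.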

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type

Members list

Value members

Concrete methods

def centropy(px: VectorD, qx: VectorD, base_e: Boolean): Double

Given probability vectors px and qx, compute the "cross entropy". May also pass in response vectors: y (actual) and yp (predicted).

Value parameters

base_e

whether to use base e or base 2 logarithms (defaults to e)

px

the first probability vector

qx

the second probability vector (requires qx.dim >= px.dim)

Attributes
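The computation amounts to H(p, q) = -Σ_i px(i) log qx(i). The sketch below uses base-e logarithms and Seq[Double] in place of VectorD (both are assumptions):

```scala
// Sketch of cross entropy in base e (the base choice is an assumption here).
def crossEntropy(px: Seq[Double], qx: Seq[Double]): Double =
  px.zip(qx).collect { case (p, q) if p > 0.0 => -p * math.log(q) }.sum
```

Note that crossEntropy(p, p) reduces to the entropy of p, and by Gibbs' inequality any other q yields a larger value.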

def condProbX_Y(pxy: MatrixD, py_: VectorD): MatrixD

Given a joint probability matrix pxy, compute the "conditional probability" for random variable X given random variable Y, i.e., P(X = x_i|Y = y_j).

Value parameters

pxy

the joint probability matrix

py_

the marginal probability vector for Y

Attributes
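The formula is px_y(i, j) = pxy(i, j) / py(j), i.e., each column of the joint matrix is divided by Y's marginal probability. A sketch with nested arrays standing in for MatrixD/VectorD (an assumption):

```scala
// Sketch: divide entry (i, j) of the joint matrix by the column marginal py(j).
def condProbX_Y(pxy: Array[Array[Double]], py: Array[Double]): Array[Array[Double]] =
  pxy.map(row => row.zip(py).map { case (p, m) => p / m })
```

Each column of the result sums to one, since P(X|Y = y_j) is itself a distribution.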

def condProbY_X(pxy: MatrixD, px_: VectorD): MatrixD

Given a joint probability matrix pxy, compute the "conditional probability" for random variable Y given random variable X, i.e., P(Y = y_j|X = x_i).

Value parameters

px_

the marginal probability vector for X

pxy

the joint probability matrix

Attributes

def count(x: VectorD, vl: Int, cont: Boolean, thres: Double): Int

Count the total number of occurrences in vector x of value vl, e.g., if x is column 2 (Humidity) and vl is 1 (High), 7 rows match. This method works for vectors with integer or continuous values.

Value parameters

cont

whether feature/variable x is to be treated as continuous

thres

the splitting threshold for features/variables treated as continuous

vl

one of the possible branch values for feature x (e.g., 1 (High))

x

the feature/column vector (e.g., column j of matrix)

Attributes
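A sketch of the two cases, where mapping values <= thres to branch 0 and values > thres to branch 1 is an assumed convention (Seq[Double] stands in for VectorD):

```scala
// Discrete case: count exact matches of vl.
// Continuous case: values <= thres fall in branch 0, values > thres in branch 1
// (assumed splitting convention).
def count(x: Seq[Double], vl: Int, cont: Boolean, thres: Double): Int =
  if (cont) x.count(v => (if (v <= thres) 0 else 1) == vl)
  else x.count(_ == vl.toDouble)
```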

def entropy(px: VectorD): Double

Given a probability vector px, compute the "entropy" of random variable X.

Value parameters

px

the probability vector

Attributes

See also
def entropy(nu: VectorI): Double

Given a frequency vector nu, compute the "entropy" of random variable X.

Value parameters

nu

the frequency vector

Attributes

See also
def entropy(px: VectorD, b: Int): Double

Given a probability vector px, compute the "base-b entropy" of random variable X.

Value parameters

b

the base for the logarithm

px

the probability vector

Attributes

See also
def entropy(pxy: MatrixD): Double

Given a joint probability matrix pxy, compute the "joint entropy" of random variables X and Y.

Value parameters

pxy

the joint probability matrix

Attributes

def entropy(pxy: MatrixD, px_y: MatrixD): Double

Given a joint probability matrix pxy and a conditional probability matrix px_y, compute the "conditional entropy" of random variable X given random variable Y.

Value parameters

px_y

the conditional probability matrix

pxy

the joint probability matrix

Attributes
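The quantity is H(X|Y) = -Σ_ij pxy(i, j) log px_y(i, j). A base-2 sketch, named condEntropy here to avoid overloading and using nested arrays as a stand-in for MatrixD:

```scala
// Sketch of conditional entropy H(X|Y), base-2 logs assumed.
def condEntropy(pxy: Array[Array[Double]], px_y: Array[Array[Double]]): Double =
  (for (i <- pxy.indices; j <- pxy(i).indices if pxy(i)(j) > 0.0)
    yield -pxy(i)(j) * math.log(px_y(i)(j)) / math.log(2.0)).sum
```

When X and Y are independent, H(X|Y) reduces to H(X).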

def freq(x: VectorI, vc: Int, y: VectorI, k: Int): MatrixD

Compute the Joint Frequency Table (JFT) for vector x and vector y. Count the number of cases where x(i) = v and y(i) = c.

Value parameters

k

the maximum value of y + 1 (number of classes)

vc

the number of distinct values in vector x (value count)

x

the variable/feature vector

y

the response/classification vector

Attributes
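The construction of the table can be sketched as follows, with plain collections standing in for VectorI and integer counts as the result type (both assumptions):

```scala
// jft(v)(c) counts the cases where x(i) = v and y(i) = c.
def freq(x: Seq[Int], vc: Int, y: Seq[Int], k: Int): Array[Array[Int]] = {
  val jft = Array.ofDim[Int](vc, k)
  for (i <- x.indices) jft(x(i))(y(i)) += 1
  jft
}
```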

def freq(x: VectorI, y: VectorI, k: Int, vl: Int): (Double, VectorI)

Count the frequency of occurrence in vector x of value vl for each of y's classification values, e.g., x is column 2 (Humidity), vl is 1 (High) and y can be 0 (no) or 1 (yes). Also, determine the fraction of training cases where the feature has this value (e.g., fraction where Humidity is High = 7/14).

Value parameters

k

the maximum value of y + 1

vl

one of the possible branch values for feature x (e.g., 1 (High))

x

the feature/column vector (e.g., column j of matrix)

y

the response/classification vector

Attributes

def freq(x: VectorI, y: VectorI, k: Int, vl: Int, idx_: VectorI): (Double, VectorI)

Count the frequency of occurrence in vector x of value vl for each of y's classification values, e.g., x is column 2 (Humidity), vl is 1 (High) and y can be 0 (no) or 1 (yes). Also, determine the fraction of training cases where the feature has this value (e.g., fraction where Humidity is High = 7/14).

Value parameters

idx_

the index positions within x (if null, use all index positions)

k

the maximum value of y + 1

vl

one of the possible branch values for feature x (e.g., 1 (High))

x

the feature/column vector (e.g., column j of matrix)

y

the response/classification vector

Attributes

def freq(x: VectorD, y: VectorI, k: Int, vl: Int, idx_: VectorI, cont: Boolean, thres: Double): (Double, VectorI)

Count the frequency of occurrence in vector x of value vl for each of y's classification values, e.g., x is column 2 (Humidity), vl is 1 (High) and y can be 0 (no) or 1 (yes). Also, determine the fraction of training cases where the feature has this value (e.g., fraction where Humidity is High = 7/14). This method works for vectors with integer or continuous values.

Value parameters

cont

whether feature/variable x is to be treated as continuous

idx_

the index positions within x (if null, use all index positions)

k

the maximum value of y + 1

thres

the splitting threshold for features/variables treated as continuous

vl

one of the possible branch values for feature x (e.g., 1 (High))

x

the feature/column vector (e.g., column j of matrix)

y

the response/classification vector

Attributes

def isProbability(px: VectorD): Boolean

Determine whether the vector px is a legitimate "probability vector". The elements of the vector must be non-negative and add to one.

Value parameters

px

the probability vector

Attributes

def isProbability(pxy: MatrixD): Boolean

Determine whether the matrix pxy is a legitimate joint "probability matrix". The elements of the matrix must be non-negative and add to one.

Value parameters

pxy

the probability matrix

Attributes
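Both checks follow directly from the definition; the 1e-9 tolerance on the floating-point sum below is an assumption:

```scala
// A legitimate probability vector: non-negative entries summing to one.
def isProbability(px: Seq[Double]): Boolean =
  px.forall(_ >= 0.0) && math.abs(px.sum - 1.0) < 1e-9   // assumed tolerance

// Matrix form: flatten and apply the vector check.
def isProbability(pxy: Array[Array[Double]]): Boolean =
  isProbability(pxy.flatten.toSeq)
```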

def jProbXY(px: VectorD, py: VectorD): MatrixD

Given two independent random variables X and Y, compute their "joint probability", which is the outer product of their probability vectors px and py, i.e., P(X = x_i, Y = y_j).

Attributes
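Under independence the joint pmf factorizes, so the matrix is just the outer product of the two probability vectors. A sketch with Seq standing in for VectorD/MatrixD:

```scala
// pxy(i)(j) = px(i) * py(j) for independent X and Y.
def jProbXY(px: Seq[Double], py: Seq[Double]): Seq[Seq[Double]] =
  px.map(p => py.map(q => p * q))
```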

def jProbXY(x: VectorI, vc: Int, y: VectorI, k: Int): MatrixD

Compute the Joint Probability Table (JPT) for vector x and vector y. Count the number of cases where x(i) = v and y(i) = c and divide by the number of instances/datapoints.

Value parameters

k

the maximum value of y + 1 (number of classes)

vc

the number of distinct values in vector x (value count)

x

the variable/feature vector

y

the response/classification vector

Attributes

def margProbX(pxy: MatrixD): VectorD

Given a joint probability matrix pxy, compute the "marginal probability" for random variable X, i.e., P(X = x_i).

Value parameters

pxy

the probability matrix

Attributes

def margProbY(pxy: MatrixD): VectorD

Given a joint probability matrix pxy, compute the "marginal probability" for random variable Y, i.e., P(Y = y_j).

Value parameters

pxy

the probability matrix

Attributes

def muInfo(pxy: MatrixD, px: VectorD, py: VectorD): Double

Given a joint probability matrix pxy, compute the mutual information for random variables X and Y.

Value parameters

px

the marginal probability vector for X

pxy

the probability matrix

py

the marginal probability vector for Y

Attributes

def muInfo(pxy: MatrixD): Double

Given a joint probability matrix pxy, compute the mutual information for random variables X and Y.

Value parameters

pxy

the probability matrix

Attributes
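Since the marginals are recoverable from the joint matrix by row and column sums, mutual information needs only pxy: I(X; Y) = Σ_ij pxy(i, j) log (pxy(i, j) / (px(i) py(j))). A base-2 sketch with nested arrays standing in for MatrixD:

```scala
def muInfo(pxy: Array[Array[Double]]): Double = {
  val px = pxy.map(_.sum)             // marginal for X: row sums
  val py = pxy.transpose.map(_.sum)   // marginal for Y: column sums
  (for (i <- pxy.indices; j <- pxy(i).indices if pxy(i)(j) > 0.0)
    yield pxy(i)(j) * math.log(pxy(i)(j) / (px(i) * py(j))) / math.log(2.0)).sum
}
```

Independence gives 0 bits, while a deterministic relationship between two binary variables gives 1 bit.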

def nentropy(px: VectorD): Double

Given a probability vector px, compute the "normalized entropy" of random variable X.

Value parameters

px

the probability vector

Attributes

See also
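Normalized entropy rescales entropy by its maximum, log2 (px.dim), so the result lies in [0, 1]. A sketch:

```scala
def nentropy(px: Seq[Double]): Double = {
  val h = -px.filter(_ > 0.0).map(p => p * math.log(p)).sum / math.log(2.0)
  h / (math.log(px.size.toDouble) / math.log(2.0))   // divide by log2(dim)
}
```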
inline def plog(p: Double): Double

Given a probability p, compute the "positive log-probability". Requires the probability to be non-zero.

Value parameters

p

the given probability

Attributes
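Taking the positive log-probability to be -log2 p (the base-2 choice is an assumption here), the scalar version is simply:

```scala
def plog(p: Double): Double = -math.log(p) / math.log(2.0)   // base 2 assumed
```

plog maps probabilities in (0, 1] to [0, ∞), which is why zero probabilities are disallowed.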

def plog(px: VectorD): VectorD

Given a probability vector px, compute the "positive log-probability". Requires each probability to be non-zero.

Value parameters

px

the probability vector

Attributes

def probY(y: VectorI, k: Int): VectorD

Return the probability of discrete random variable y taking on any of k values.

Value parameters

k

the maximum value of y + 1, e.g., {0, 1, 2} => k = 3

y

the feature/column vector of integer values whose frequency counts are sought

Attributes
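The method amounts to frequency counting followed by division by the number of instances. A sketch with Seq[Int] standing in for VectorI:

```scala
// Probability of each class value c in 0 until k.
def probY(y: Seq[Int], k: Int): Seq[Double] =
  (0 until k).map(c => y.count(_ == c).toDouble / y.size)
```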

def rentropy(px: VectorD, qx: VectorD): Double

Given probability vectors px and qx, compute the "relative entropy".

Value parameters

px

the first probability vector

qx

the second probability vector (requires qx.dim >= px.dim)

Attributes
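Relative entropy (the Kullback-Leibler divergence) is D(px || qx) = Σ_i px(i) log (px(i) / qx(i)). A sketch, with base-2 logarithms as an assumption:

```scala
def rentropy(px: Seq[Double], qx: Seq[Double]): Double =
  px.zip(qx).collect { case (p, q) if p > 0.0 => p * math.log(p / q) / math.log(2.0) }.sum
```

It is zero when the two distributions agree and positive otherwise.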

def toProbability(nu: VectorI): VectorD

Given a frequency vector, convert it to a probability vector.

Value parameters

nu

the frequency vector

Attributes

def toProbability(nu: VectorI, n: Int): VectorD

Given a frequency vector, convert it to a probability vector.

Value parameters

n

the total number of instances/trials collected

nu

the frequency vector

Attributes
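Both vector overloads reduce to elementwise division; when n is not supplied, dividing by the vector's own total is the natural default (an assumption in this sketch, with Seq[Int] in place of VectorI):

```scala
// One-arg form: normalize by the total frequency (assumed default).
def toProbability(nu: Seq[Int]): Seq[Double] = toProbability(nu, nu.sum)

// Two-arg form: divide by the given number of instances n.
def toProbability(nu: Seq[Int], n: Int): Seq[Double] = nu.map(_.toDouble / n)
```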

def toProbability(nu: MatrixD): MatrixD

Given a frequency matrix, convert it to a probability matrix.

Value parameters

nu

the frequency matrix

Attributes

def toProbability(nu: MatrixD, n: Int): MatrixD

Given a frequency matrix, convert it to a probability matrix.

Value parameters

n

the total number of instances/trials collected

nu

the frequency matrix

Attributes