Construct a 'DecisionTreeID3' object, passing 'x' and 'y' together in one table.
the data vectors along with their classifications stored as rows of a matrix
the names for all features/variables
the number of classes
the names for all classes
the value count array indicating number of distinct values per feature
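As a rough illustration of this combined-table form, here is a minimal sketch (not ScalaTion's actual implementation) that splits a table into 'x' and 'y', under the assumption that the class is stored in the last column:

    // Hypothetical sketch: split a combined table 'xy' (class assumed to
    // be in the last column) into the data matrix 'x' and class vector 'y'.
    object SplitTable {
      def split(xy: Array[Array[Int]]): (Array[Array[Int]], Array[Int]) = {
        val x = xy.map(_.init)                 // all but the last column
        val y = xy.map(_.last)                 // the last column = class
        (x, y)
      }

      def main(args: Array[String]): Unit = {
        val xy = Array(Array(2, 2, 1, 0, 0),   // e.g., a play-tennis row
                       Array(2, 2, 1, 1, 0))
        val (x, y) = split(xy)
        println(x.map(_.mkString(",")).mkString(" | ") + "  y = " + y.mkString(","))
      }
    }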
the data vectors stored as rows of a matrix
the class array, where y_i = class for row i of the matrix x
the names for all features/variables
the number of classes
the names for all classes
the value count array indicating number of distinct values per feature
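The parameter list above suggests a constructor of roughly the following shape. This is a hypothetical skeleton (the names 'fn', 'cn' and 'vc' are taken from the descriptions above; the actual ScalaTion signature may differ):

    // Hypothetical skeleton matching the parameter list above.
    class DecisionTreeID3Sketch(
        val x:  Array[Array[Int]],   // data vectors stored as rows
        val y:  Array[Int],          // y(i) = class for row i of x
        val fn: Array[String],       // feature/variable names
        val k:  Int,                 // number of classes
        val cn: Array[String],       // class names
        val vc: Array[Int]) {        // distinct value count per feature
      require(x.length == y.length, "x and y must have the same number of rows")
      require(fn.length == vc.length, "one value count per feature")
    }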
Extend the tree given a path e.g. ((outlook, sunny), ...).
an existing path in the tree ((feature, value), ...)
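One natural way to represent such a path is a list of (feature, value) pairs; the sketch below (an assumption, not ScalaTion's internal representation) shows the encoding and a path-extension step:

    // Sketch: a decision path as (feature index, feature value) pairs,
    // e.g., ((outlook, sunny), ...) becomes List((0, 2)) if outlook is
    // feature 0 and sunny is encoded as value 2.
    object PathRepr {
      type Path = List[(Int, Int)]

      // extend an existing path with one more (feature, value) decision
      def extend(path: Path, feature: Int, value: Int): Path =
        path :+ ((feature, value))

      def main(args: Array[String]): Unit = {
        val p: Path = List((0, 2))             // (outlook, sunny)
        println(extend(p, 2, 1))               // add (humidity, high)
      }
    }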
Given a data vector z, classify it returning the class number (0, ..., k-1) by following a decision path from the root to a leaf.
the data vector to classify
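A minimal sketch of such root-to-leaf classification, using an assumed tree encoding (not ScalaTion's internal node structure): at each internal node, branch on the value that 'z' has for that node's feature.

    // Sketch: classify by walking the tree from root to leaf.
    object ClassifySketch {
      sealed trait Node
      case class Leaf(cls: Int) extends Node
      case class Branch(feature: Int, children: Map[Int, Node]) extends Node

      def classify(node: Node, z: Array[Int]): Int = node match {
        case Leaf(cls)           => cls
        case Branch(f, children) => classify(children(z(f)), z)
      }

      def main(args: Array[String]): Unit = {
        // humidity (feature 2): high (1) -> no (0), normal (0) -> yes (1)
        val tree = Branch(2, Map(1 -> Leaf(0), 0 -> Leaf(1)))
        println(classify(tree, Array(2, 1, 0, 0)))   // prints 1 (yes)
      }
    }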
Given a new continuous data vector 'z', determine which class it belongs to by first rounding it to an integer-valued vector.
the vector to classify
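The rounding step itself is simple; a sketch (with any discrete classifier standing in for the trained tree):

    // Sketch: round a continuous vector to integers, then reuse the
    // discrete classifier (represented here by an Array[Int] => Int).
    object ClassifyContinuous {
      def classify(z: Array[Double], discrete: Array[Int] => Int): Int =
        discrete(z.map(v => math.round(v).toInt))

      def main(args: Array[String]): Unit = {
        val stub: Array[Int] => Int = zi => zi.sum % 2   // stand-in classifier
        println(classify(Array(1.6, 0.2, 0.9), stub))
      }
    }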
Extract a column from the matrix, filtering out rows that are not on the path.
the feature to consider (e.g., 2 (Humidity))
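A sketch of this extraction, assuming the path representation suggested earlier (a list of (feature, value) pairs) and returning (value, row index) tuples as described for the frequency computation below:

    // Sketch: collect column 'f' of 'x', keeping only rows whose earlier
    // decisions match the current path.
    object DataSetSketch {
      def dataset(x: Array[Array[Int]], f: Int,
                  path: List[(Int, Int)]): IndexedSeq[(Int, Int)] =
        for { i <- x.indices
              if path.forall { case (pf, pv) => x(i)(pf) == pv } }
          yield (x(i)(f), i)                   // (feature value, row index)

      def main(args: Array[String]): Unit = {
        val x = Array(Array(2, 1), Array(2, 0), Array(1, 1))
        println(dataset(x, 1, List((0, 2))))   // rows where feature 0 == 2
      }
    }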
Show the flaw by printing the error message.
the method where the error occurred
the error message
Given a feature column (e.g., 2 (Humidity)) and a value (e.g., 1 (High)), use the frequency of occurrence of the value for each classification (e.g., 0 (no), 1 (yes)) to estimate k probabilities. Also, determine the fraction of training cases where the feature has this value (e.g., fraction where Humidity is High = 7/14).
the list of data set tuples to consider, e.g., (value, row index) pairs
one of the possible values for this feature (e.g., 1 (High))
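A sketch of one way to compute these quantities from (value, row index) tuples; the tuple encoding and helper name are assumptions, not ScalaTion's actual code:

    // Sketch: for rows where the feature has value 'v', estimate
    // P(class = c) for each class c, plus the fraction of cases with
    // that value (e.g., High Humidity = 7/14).
    object FrequencySketch {
      def frequency(dset: Seq[(Int, Int)], y: Array[Int], k: Int,
                    value: Int): (Double, Array[Double]) = {
        val rows  = dset.filter(_._1 == value).map(_._2)   // matching rows
        val prob  = Array.ofDim[Double](k)
        for (i <- rows) prob(y(i)) += 1.0
        val count = rows.size.toDouble
        if (count > 0) for (c <- 0 until k) prob(c) /= count
        (count / dset.size, prob)        // (fraction, class probabilities)
      }

      def main(args: Array[String]): Unit = {
        val dset = Seq((1, 0), (1, 1), (0, 2), (1, 3))   // (value, row index)
        val y    = Array(0, 1, 1, 0)
        println(frequency(dset, y, 2, 1).toString)   // fraction 0.75
      }
    }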
Compute the information gain due to using the values of a feature/attribute to distinguish the training cases (e.g., how well does Humidity with its values Normal and High indicate whether one will play tennis).
the feature to consider (e.g., 2 (Humidity))
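Information gain is the entropy of the class distribution before the split minus the value-weighted entropy after splitting on the feature. A self-contained sketch (a standard ID3 gain computation, not necessarily ScalaTion's exact code):

    // Sketch: gain(f) = H(y) - sum_v (|rows_v| / |rows|) * H(y | rows_v)
    object GainSketch {
      def entropy(p: Array[Double]): Double =
        -p.filter(_ > 0.0).map(pi => pi * math.log(pi) / math.log(2)).sum

      def gain(x: Array[Array[Int]], y: Array[Int], k: Int,
               f: Int, vcf: Int): Double = {
        def probs(rows: Seq[Int]): Array[Double] = {
          val p = Array.ofDim[Double](k)
          for (i <- rows) p(y(i)) += 1.0 / rows.size
          p
        }
        val all  = x.indices
        val base = entropy(probs(all))            // entropy before the split
        val post = (0 until vcf).map { v =>       // weighted entropy after
          val rows = all.filter(i => x(i)(f) == v)
          if (rows.isEmpty) 0.0
          else rows.size.toDouble / x.length * entropy(probs(rows))
        }.sum
        base - post
      }

      def main(args: Array[String]): Unit = {
        val x = Array(Array(1), Array(1), Array(0), Array(0))
        val y = Array(0, 0, 1, 1)
        println(gain(x, y, 2, 0, 2))              // perfect split -> 1.0
      }
    }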
the number of data vectors in training-set (# rows)
the training-set size as a Double
Find the most frequent classification.
array of discrete classifications
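The most frequent classification is simply the mode of the class array; one possible one-liner:

    // Sketch: mode of a discrete classification array.
    object ModeSketch {
      def mode(a: Array[Int]): Int =
        a.groupBy(identity).maxBy(_._2.length)._1

      def main(args: Array[String]): Unit =
        println(mode(Array(1, 0, 1, 1, 0)))      // prints 1
    }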
the number of features/variables (# columns)
the feature-set size as a Double
Test the quality of the training with a test-set and return the fraction of correct classifications.
the integer-valued test vectors stored as rows of a matrix
the test classification vector, where yy_i = class for row i of xx
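The quality measure is the fraction of test rows whose predicted class matches yy_i. A sketch, with any classifier standing in for the trained tree:

    // Sketch: fraction of correct classifications over a test set.
    object TestSketch {
      def test(xx: Array[Array[Int]], yy: Array[Int],
               classify: Array[Int] => Int): Double =
        xx.indices.count(i => classify(xx(i)) == yy(i)).toDouble / xx.length

      def main(args: Array[String]): Unit = {
        val xx = Array(Array(1, 0), Array(0, 1))
        val yy = Array(1, 0)
        println(test(xx, yy, z => z(0)))         // 1.0: both rows correct
      }
    }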
Train the decision tree.
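A compact sketch of one way to implement ID3 training (a standard formulation, not necessarily ScalaTion's exact code): pick the feature with maximal information gain, branch on each of its values, and recurse until the rows are pure or no unused features remain (falling back to the majority class).

    object TrainSketch {
      sealed trait Node
      case class Leaf(cls: Int) extends Node
      case class Branch(feature: Int, children: Map[Int, Node]) extends Node

      def entropy(rows: Seq[Int], y: Array[Int]): Double = {
        val ps = rows.groupBy(i => y(i)).values.map(_.size.toDouble / rows.size)
        -ps.map(p => p * math.log(p) / math.log(2)).sum
      }

      def mode(rows: Seq[Int], y: Array[Int]): Int =
        rows.groupBy(i => y(i)).maxBy(_._2.size)._1

      def train(x: Array[Array[Int]], y: Array[Int], vc: Array[Int],
                rows: Seq[Int], used: Set[Int] = Set()): Node = {
        val classes = rows.map(i => y(i)).distinct
        if (classes.size == 1) return Leaf(classes.head)    // pure node
        val avail = vc.indices.filterNot(used)
        if (avail.isEmpty) return Leaf(mode(rows, y))       // out of features
        val best = avail.maxBy { f =>                       // maximal gain
          entropy(rows, y) - (0 until vc(f)).map { v =>
            val sub = rows.filter(i => x(i)(f) == v)
            if (sub.isEmpty) 0.0
            else sub.size.toDouble / rows.size * entropy(sub, y)
          }.sum
        }
        Branch(best, (0 until vc(best)).map { v =>
          val sub = rows.filter(i => x(i)(best) == v)
          v -> (if (sub.isEmpty) Leaf(mode(rows, y))
                else train(x, y, vc, sub, used + best))
        }.toMap)
      }
    }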
Return default values for binary input data (value count (vc) set to 2).
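For binary input data every feature takes two values, so the default is an all-twos array; a one-line sketch:

    object DefaultVC {
      // sketch: binary features each take values {0, 1}, so vc is all 2s
      def vcDefault(n: Int): Array[Int] = Array.fill(n)(2)
    }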
The 'DecisionTreeID3' class implements a Decision Tree classifier using the ID3 algorithm. The classifier is trained using a data matrix 'x' and a classification vector 'y'. Each data vector in the matrix is classified into one of 'k' classes numbered '0, ..., k-1'. Each column in the matrix represents a feature (e.g., Humidity). The 'vc' array gives the number of distinct values per feature (e.g., 2 for Humidity).
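As a hypothetical setup for the classic play-tennis example, the sketch below shows how 'x', 'y', 'fn', 'k', 'cn' and 'vc' fit together (the integer encodings and row values are illustrative assumptions, not the full 14-row data set):

    object TennisData {
      val fn = Array("Outlook", "Temp", "Humidity", "Wind")  // feature names
      val vc = Array(3, 3, 2, 2)          // distinct values per feature
      val cn = Array("No", "Yes")         // class names, k = 2
      val k  = cn.length

      // two illustrative rows: Outlook=Sunny(2), Temp=Hot(2), ...
      val x  = Array(Array(2, 2, 1, 0),
                     Array(2, 2, 1, 1))
      val y  = Array(0, 0)                // both rows classified as No

      def main(args: Array[String]): Unit =
        println(s"${x.length} rows, ${fn.length} features, $k classes")
    }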