The MarkovChain class supports the creation and use of Discrete-Time Markov Chains (DTMCs). Transient solution: compute the next-state probability vector p = π * a, where π is the current state probability vector and a is the transition probability matrix. Equilibrium (steady-state) solution: solve for π in π = π * a.
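A minimal sketch of the transient (one-step) solution in Python/NumPy, assuming a small hypothetical 3-state chain (the matrix below is illustrative, not from the source):

```python
import numpy as np

# Hypothetical 3-state transition probability matrix a (each row sums to 1).
a = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

pi = np.array([1.0, 0.0, 0.0])   # current state probability vector: start in state 0
p = pi @ a                       # transient solution: p = pi * a
print(p)                         # -> [0.5 0.3 0.2]
```

Applying `@ a` repeatedly advances the chain one time step at a time.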
Compute the limiting probability distribution π * a^k as k -> infinity by solving a left eigenvalue problem: π = π * a => π * (a - I) = 0, where the eigenvalue is 1. Solve for π by computing the left nullspace of the matrix a - I and then normalizing π so its components sum to 1.
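The steady-state computation can be sketched in Python/NumPy by taking the left eigenvector of a for eigenvalue 1 (equivalently, the left nullspace of a - I) and normalizing it; the 3-state matrix is a made-up example:

```python
import numpy as np

a = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Left eigenvectors of a are eigenvectors of a.T; pick the one whose
# eigenvalue is closest to 1 (a stochastic matrix always has eigenvalue 1).
w, v = np.linalg.eig(a.T)
k = np.argmin(np.abs(w - 1.0))
pi = np.real(v[:, k])
pi = pi / pi.sum()          # normalize so the probabilities sum to 1
print(pi)                   # equilibrium distribution: pi = pi * a
```

Checking `pi @ a` against `pi` confirms the fixed point.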
Simulate the discrete-time Markov chain by starting in state i0 and, after each state's holding time, making a transition to the next state according to the jump matrix.
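A simulation sketch in Python/NumPy, under the simplifying assumption that each state is held for one time step and the next state is drawn from that state's row of the transition matrix (the matrix, `simulate` helper, and seed are illustrative, not from the source):

```python
import numpy as np

a = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def simulate(a, i0, steps, seed=0):
    """Simulate a DTMC for `steps` transitions starting in state i0:
    at each step, draw the next state from the current state's row of a."""
    rng = np.random.default_rng(seed)
    path = [i0]
    i = i0
    for _ in range(steps):
        i = rng.choice(len(a), p=a[i])   # transition per row i of a
        path.append(i)
    return path

print(simulate(a, i0=0, steps=10))       # visited-state sequence
```

Longer runs visit states with frequencies approaching the equilibrium distribution.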