class Markov extends Error

The Markov class supports the creation and use of Discrete-Time Markov Chains (DTMCs). Transient solution: compute the next state 'pp = p * tr', where 'p' is the current state probability vector and 'tr' is the transition probability matrix. Equilibrium (steady-state) solution: solve for 'p' in 'p = p * tr'.
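
A minimal usage sketch, assuming ScalaTion's concrete 'MatrixD' and 'VectorD' classes (e.g., from 'scalation.linalgebra') with 'Markov' already in scope; the import path and values are illustrative, not prescriptive:

    import scalation.linalgebra.{MatrixD, VectorD}

    // 2-state chain: row i holds the transition probabilities out of state i (each row sums to 1)
    val tr = new MatrixD ((2, 2), 0.9, 0.1,
                                  0.5, 0.5)
    val mc = new Markov (tr)

    val p0 = VectorD (1.0, 0.0)          // start in state 0 with certainty
    val p1 = mc.next (p0)                // transient: one-step probabilities p0 * tr
    val pk = mc.next (p0, 10)            // transient: ten-step probabilities p0 * tr^10
    val pi = mc.limit                    // equilibrium: steady-state vector solving p = p * tr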

Linear Supertypes
Error, AnyRef, Any

Instance Constructors

  1. new Markov(tr: MatriD)

    tr

    the transition probability matrix

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def animate(): Unit

    Animate this Markov Chain. Place the nodes around a circle and connect them if there is such a transition.

  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def flaw(method: String, message: String): Unit
    Definition Classes
    Error
  11. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. def isStochastic: Boolean

    Check whether the transition matrix is stochastic.
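
    As a hedged illustration of the property being checked (not necessarily this class's implementation): a transition matrix is stochastic when every entry is nonnegative and every row sums to one. The helper below is a hypothetical standalone check on a raw 2-D array.

      // hypothetical helper: row-stochastic check on a plain 2-D array (illustration only)
      def rowStochastic (a: Array [Array [Double]], tol: Double = 1E-9): Boolean =
        a.forall (row => row.forall (_ >= 0.0) && math.abs (row.sum - 1.0) < tol)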

  15. def limit: VectoD

    Compute the limiting probabilistic state 'p * tr^k' as 'k -> infinity', by solving a left eigenvalue problem: 'p = p * tr' => 'p * (tr - I) = 0', where the eigenvalue is 1. Solve for p by computing the left nullspace of the 'tr - I' matrix (appropriately sliced) and then normalize 'p' so '||p|| = 1'.
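
    A usage sketch of the fixed-point property, continuing the hedged example above ('mc' as constructed there):

      val pi = mc.limit                  // steady-state probability vector (left eigenvector for eigenvalue 1)
      val piNext = mc.next (pi)          // one more step gives pi * tr, which should equal pi up to numerical tolerance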

  16. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  17. def next(p: VectoD, k: Int = 1): VectoD

    Compute the 'k'th next probabilistic state 'p * tr^k'.

    p

    the current state probability vector

    k

    compute for the 'k'th step/epoch
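
    A usage sketch, continuing the hedged example above ('mc' and 'p0' as defined there):

      val p2a = mc.next (p0, 2)          // two-step probabilities: p0 * tr^2
      val p2b = mc.next (mc.next (p0))   // the same result via two one-step calls (k defaults to 1)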

  18. final def notify(): Unit
    Definition Classes
    AnyRef
  19. final def notifyAll(): Unit
    Definition Classes
    AnyRef
  20. def simulate(i0: Int, endTime: Int): Unit

    Simulate the discrete-time Markov chain by starting in state 'i0' and, after each state's holding time, making a transition to the next state according to the jump matrix.

    i0

    the initial/start state

    endTime

    the end time for the simulation
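
    A minimal call sketch, continuing the hedged example above; the start state and end time are illustrative:

      mc.simulate (0, 20)                // start in state 0 and run until time 20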

  21. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  22. def toString(): String

    Convert 'this' discrete-time Markov Chain to a string.

    Definition Classes
    Markov → AnyRef → Any
  23. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  24. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
