Properties of Markov Chains

Stationary Probability Mass Function

A state distribution vector π such that π = πP; equivalently, π is a left eigenvector of P with eigenvalue 1. When P^n → 1π as n → ∞ (that is, every row of P^n converges to π), we say the chain has reached equilibrium, also known as steady state.
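As a minimal numerical sketch (the 3-state transition matrix below is a hypothetical example), the stationary distribution can be recovered as the left eigenvector of P associated with eigenvalue 1:

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# A left eigenvector of P is a right eigenvector of P transpose.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                       # normalize so the entries sum to 1

print(pi)                       # the stationary distribution
print(np.allclose(pi @ P, pi))  # True: pi = pi P
```

Normalization is needed because eigenvectors are only defined up to scale; dividing by the sum turns the eigenvector into a probability vector.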

Properties of States

A state 'i' is accessible from another state 'j' if there is some path of nonzero-probability transitions from 'j' to 'i'. States that are accessible from each other are said to communicate, and a set of states that all communicate with one another forms a class.

An irreducible Markov chain is a Markov chain composed of only one class.
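Irreducibility can be checked directly from these definitions. The sketch below (the helper names and the 2-state example matrix are illustrative, not a standard API) tests whether every pair of states communicates:

```python
def accessible(P, i, j):
    """Return True if state j is accessible from state i, i.e. some path
    of nonzero-probability transitions leads from i to j."""
    n = len(P)
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t in range(n):
            if P[s][t] > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return j in seen

def is_irreducible(P):
    """A chain is irreducible iff every pair of states communicates."""
    n = len(P)
    return all(accessible(P, i, j) for i in range(n) for j in range(n))

# Hypothetical example: both states communicate, so the chain is irreducible.
P = [[0.5, 0.5],
     [0.3, 0.7]]
print(is_irreducible(P))  # True
```

A chain with an absorbing state the rest cannot be reached from, e.g. `[[1, 0], [0.5, 0.5]]`, would return `False`.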

Properties of Classes

If a class has no outgoing transition edges with nonzero probability then it is recurrent: once the chain enters a recurrent class it can never leave. If a class has at least one nonzero outgoing transition edge then it is transient: given enough time, the chain will leave the transient class and never return.

In an irreducible Markov chain all states are recurrent.

(Figure: a Markov chain with one transient class and one recurrent class.)
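A sketch of how the classes can be found and labeled, assuming a hypothetical 3-state chain where state 0 is transient and states 1–2 form a recurrent class (the helper names are illustrative):

```python
def reach(P):
    """reach[i] = set of states accessible from i via nonzero transitions."""
    n = len(P)
    out = []
    for i in range(n):
        seen, stack = {i}, [i]
        while stack:
            s = stack.pop()
            for t in range(n):
                if P[s][t] > 0 and t not in seen:
                    seen.add(t)
                    stack.append(t)
        out.append(seen)
    return out

def classes(P):
    """Group states into communicating classes and label each class
    recurrent (no transition leaves the class) or transient."""
    R = reach(P)
    seen, result = set(), []
    for i in range(len(P)):
        if i in seen:
            continue
        cls = {j for j in R[i] if i in R[j]}        # states communicating with i
        seen |= cls
        recurrent = all(R[j] <= cls for j in cls)   # nothing escapes the class
        result.append((sorted(cls), 'recurrent' if recurrent else 'transient'))
    return result

P = [[0.5, 0.5, 0.0],
     [0.0, 0.4, 0.6],
     [0.0, 0.7, 0.3]]
print(classes(P))  # [([0], 'transient'), ([1, 2], 'recurrent')]
```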

The period of a class is defined as gcd{ n ∈ Z₊ : p_ii(n) > 0 }. In other words, it is the greatest common divisor of all possible numbers of steps in which the chain can leave a state and return to it. A class with a period of 1 is called aperiodic.

(Figure: a class with a period of 4 — every state in the class has period 4.)
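The period can be computed by tracking which step counts allow a return to the starting state and taking their gcd. A sketch, using a 4-state cycle as the example chain and a hypothetical cutoff `max_n` (any cutoff covering a few return times suffices for the gcd to stabilize):

```python
from math import gcd

def period(P, i, max_n=50):
    """Period of state i: gcd of all n <= max_n with p_ii(n) > 0."""
    n_states = len(P)
    reachable = {i}   # states reachable from i in exactly n steps with positive probability
    g = 0
    for n in range(1, max_n + 1):
        reachable = {t for s in reachable
                     for t in range(n_states) if P[s][t] > 0}
        if i in reachable:
            g = gcd(g, n)   # n is a possible return time
    return g

# A 4-cycle: the only return times are multiples of 4, so the period is 4.
P = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]
print(period(P, 0))  # 4
```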

Ergodicity

A Markov chain is ergodic if it is irreducible and aperiodic: every state can be reached from every other state, and returns to a state are not locked to a fixed cycle length. In such cases the stationary vector π gives the long-run fraction of time the chain spends in each state.
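This time-fraction interpretation can be checked by simulation. A sketch with a hypothetical ergodic 2-state chain whose stationary distribution works out to (5/6, 1/6):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ergodic 2-state chain; solving pi = pi P gives pi = (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Simulate a long trajectory and measure the fraction of time in each state.
steps, state = 100_000, 0
counts = np.zeros(2)
for _ in range(steps):
    counts[state] += 1
    state = rng.choice(2, p=P[state])
frac = counts / steps

print(frac)  # approximately [0.833, 0.167], matching pi
```

The empirical fractions approach π as the number of steps grows, which is exactly the ergodic reading of the stationary distribution.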

n-step Transition Probabilities

If p(0) is the initial state distribution vector, then p(n) is the distribution over states after n steps. Successive distributions can be calculated one step at a time: p(n+1) = p(n)P (using the row-vector convention consistent with π = πP above). To calculate a state probability distribution directly, use p(n) = p(0)P^n, where P^n can be computed efficiently through the Cayley–Hamilton theorem and matrix similarity.
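Both routes give the same answer, as the sketch below checks with a hypothetical 2-state chain (row-vector convention, matching π = πP; `matrix_power` stands in for the Cayley–Hamilton/similarity computation of P^n):

```python
import numpy as np

# Hypothetical 2-state chain and initial distribution (row vector).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])   # start in state 0 with certainty

# One step at a time: p(n+1) = p(n) P
p = p0
for _ in range(10):
    p = p @ P

# Directly: p(10) = p(0) P^10
p_direct = p0 @ np.linalg.matrix_power(P, 10)

print(np.allclose(p, p_direct))  # True
```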
