![[Pasted image 20240614114045.png|215]]

# Markov Chains

> See also:
> - [Steady-state analysis and limiting distributions](https://en.wikipedia.org/wiki/Markov_chain#Steady-state_analysis_and_limiting_distributions)
> - https://setosa.io/ev/markov-chains/

## The Markov Property

When a process is independent and identically distributed (iid):

- Events are not correlated with each other
- The current event has no predictive power over future events

A Markov process sits between full independence and full dependence: the next state may depend on the current state, but not on anything earlier.

> [!summary] **Definition:** The Markov Property
>
> $P(q_i \mid q_1, q_2, \ldots, q_{i-1}) = P(q_i \mid q_{i-1})$
>
> A first-order Markov model is represented by a directed graph whose vertices are the states and whose weighted edges are the transition probabilities between them.

The **Markov Property** asserts that *the probability of transitioning to the next state depends solely on the current state* and not on the history of previous states.

More generally, we can say that the current state depends on the previous $k$ states:

- When $k=1$, it is a first-order Markov model
- When $k=2$, it is a second-order Markov model (models with $k \geq 2$ are higher-order)

For a first-order chain, the probability of an entire sequence $x = (x_1, \ldots, x_n)$ therefore factorizes as

$\Pr(x) = \Pr(x_1) \prod_{i=2}^{n} \Pr(x_i \mid x_{i-1})$

## Formal Notation

> See also:
> - [[Graph Theory]]
> - [Hidden Markov Models and their Applications in Biological Sequence Analysis](https://www.zotero.org/micheeela/collections/IHMKCNUW/items/MIHLQM3C/attachment/WBWX4LYJ/reader)

### Initial State Probability

The initial state probability vector $\pi_0$, also called the starting distribution, holds the probability of the chain beginning in each state. A set of initial probabilities must be given to the model for its initial run. Rather than being drawn at random, the exact starting state can also be fixed; for example, $\pi_0 = [0, 1, 0]$ starts a three-state chain in state 2 with certainty.

### The Transition Probabilities

The probabilities of transitioning between states within the Markov process are defined through the **transition probability** matrix $A$. Entry $a_{ij} = P(q_t = j \mid q_{t-1} = i)$ is the probability of moving from state $i$ to state $j$, so each row sums to 1:

$A=\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}$

#### n-th Step Transitions

$P_{ij}(n) = (A^n)_{ij}$

The probability of reaching state $j$ from state $i$ after $n$ steps can be determined by multiplying the transition probability matrix by itself $n$ times and reading off entry $(i, j)$.

## Stationary Distributions

> See also:
> - [[Linear Algebra]]

The **stationary distribution**, or *equilibrium state*, of a Markov process with transition matrix $A$ is a row vector $\psi$ such that $\psi A = \psi$. In other words, in the long run, no matter what the starting state was, the proportion of the time the chain spends in state $j$ is approximately $\psi_j$ for all $j$.

It is possible to arrive at the stationary distribution by repeatedly multiplying the transition matrix by itself until its rows converge, or by solving $\psi A = \psi$ directly: $\psi$ is a left eigenvector of $A$ with eigenvalue 1, normalized so its entries sum to 1.

There are several requirements in addition to the Markov property that must be met before a unique stationary distribution can be calculated:

1. **Irreducible**: every state is reachable from every other state
2. **Aperiodic**: the chain does not cycle between states with a fixed period

A Markov chain meeting these requirements is known as being **ergodic**.
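
The $n$-step rule $P_{ij}(n) = (A^n)_{ij}$ and the role of the initial distribution $\pi_0$ can be checked numerically. A minimal sketch in NumPy, assuming a hypothetical three-state chain with made-up transition probabilities:

```python
import numpy as np

# Hypothetical 3-state transition matrix (made-up probabilities);
# row i holds P(next state = j | current state = i), so each row sums to 1.
A = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# Deterministic start in state 2 (index 1), as in the pi_0 = [0, 1, 0] example.
pi_0 = np.array([0.0, 1.0, 0.0])

# n-th step transitions: P_ij(n) = (A^n)_ij
n = 5
A_n = np.linalg.matrix_power(A, n)
print(A_n[1, 0])  # probability of being in state 1 after 5 steps, starting from state 2

# The state distribution after n steps is pi_0 @ A^n.
pi_n = pi_0 @ A_n
print(pi_n, pi_n.sum())  # still a probability vector: entries sum to 1
```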
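
Both routes to the stationary distribution described above can be sketched the same way, reusing the hypothetical matrix $A$ from the previous example: powering up $A$ until its rows converge, and extracting the left eigenvector of $A$ for eigenvalue 1.

```python
import numpy as np

# Same hypothetical transition matrix as in the previous sketch.
A = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# Approach 1: power iteration. For an ergodic chain, the rows of A^n
# all converge to the stationary distribution, so any row of a high
# power of A approximates psi.
psi_power = np.linalg.matrix_power(A, 100)[0]

# Approach 2: eigenvector. psi A = psi means psi is a left eigenvector
# of A with eigenvalue 1, i.e. an eigenvector of A transposed.
vals, vecs = np.linalg.eig(A.T)
idx = np.argmin(np.abs(vals - 1))  # pick the eigenvalue closest to 1
psi = np.real(vecs[:, idx])
psi = psi / psi.sum()              # normalize into a probability vector

print(psi_power)      # the two methods agree
print(psi)
print(psi @ A - psi)  # ~0: psi is unchanged by one step of the chain
```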