![[Approximation_d'une_distribution_normale.gif]]
[ermongroup.github.io/cs228-notes/inference/sampling/](https://ermongroup.github.io/cs228-notes/inference/sampling/)
# Monte Carlo Simulations
Rely on repeated random *sampling* to estimate quantities (expectations, integrals) that are hard to compute analytically.
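As a minimal illustration of the "repeated random sampling" idea (not from the source), the sketch below estimates π from the fraction of uniform points that fall inside the unit quarter-circle; the sample count and function name are arbitrary choices.

```python
import random

def estimate_pi(n_samples: int = 100_000) -> float:
    """Monte Carlo estimate of pi via repeated random sampling."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()  # uniform point in the unit square
        if x * x + y * y <= 1.0:                 # does it land inside the quarter-circle?
            inside += 1
    return 4.0 * inside / n_samples              # area ratio is pi/4

print(estimate_pi())  # ~3.14; accuracy improves as n_samples grows
```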
## Markov Chain Monte Carlo (MCMC)
Similar to [[Hidden Markov Models (HMMs)]] in the sense that it performs a *random walk* over the states of a Markov chain.
Instead of optimizing the emission probabilities (the likelihood of the observations) to identify a single most probable outcome, MCMC records the random walks that occurred during the simulation run and uses their resulting **probability density** to estimate a variable's *expected value* and *variance*.
- [ ] https://en.wikipedia.org/wiki/Autocorrelation
The main advantage is that MCMC can escape local optima, and it has been shown to be preferable to fitting a single maximum-likelihood model in terms of both accuracy and stability.
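A hedged sketch of the idea above: a random-walk Metropolis sampler (one common MCMC algorithm) that records the visited states and estimates the expected value and variance from them. The target density, step size, and burn-in length are illustrative assumptions, not taken from the source.

```python
import math
import random

def log_target(x: float) -> float:
    """Unnormalised log-density of the distribution we want to sample
    (a standard normal here, purely as an example)."""
    return -0.5 * x * x

def metropolis(n_steps: int = 50_000, step: float = 1.0, burn_in: int = 1_000):
    samples = []
    x = 0.0                                                  # arbitrary starting state
    for i in range(n_steps):
        proposal = x + random.gauss(0.0, step)               # random-walk proposal
        log_accept = log_target(proposal) - log_target(x)    # log acceptance ratio
        if random.random() < math.exp(min(0.0, log_accept)):
            x = proposal                                      # accept the move
        if i >= burn_in:
            samples.append(x)                                 # record the walk
    return samples

chain = metropolis()
mean = sum(chain) / len(chain)                                # expected value
var = sum((s - mean) ** 2 for s in chain) / len(chain)        # variance
print(f"E[x] ~ {mean:.3f}, Var[x] ~ {var:.3f}")               # ~0 and ~1 for this target
```

Note that successive samples from the chain are correlated (cf. the autocorrelation task above), so the estimates converge more slowly than they would with independent samples; the burn-in simply discards early states drawn before the chain settles near its stationary distribution.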