![[Pi_monte_carlo_all.gif|300]]
# Sampling Techniques
https://ermongroup.github.io/cs228-notes/inference/sampling/
> [!question] Why Do We Need Sampling Techniques?
> Humans are really bad at picking things truly randomly, and most programming languages rely on *pseudo-random number generators (PRNGs)* that approximate the properties of random sequences.
>
> In short, the simple tricks we use to generate “random” numbers just don’t hold up once we need to sample reliably from more complicated distributions.
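
A quick way to see that PRNG output is deterministic rather than truly random: reseeding Python’s built-in generator reproduces the exact same “random” sequence. A minimal sketch using the standard `random` module:

```python
import random

# Python's `random` module is a PRNG (Mersenne Twister): seeding it with the
# same value reproduces the exact same "random" sequence.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

print(first_run)
print(second_run)
print(first_run == second_run)  # True: the sequence is deterministic, not truly random
```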
Statistical inference is the process of drawing conclusions about the parameters of a population based on random samples drawn from it.
There is a lot of overlap with the field of [[machine learning]].
The **law of large numbers (LLN)** states that as the number of independent samples grows, their average converges to the expected value of the underlying distribution, which is what justifies approximating expectations with sample means.
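
A minimal sketch of the LLN, assuming a fair six-sided die (expected value 3.5): the sample mean of the rolls drifts toward 3.5 as the number of rolls grows.

```python
import random

# LLN sketch: the running average of fair die rolls (expected value 3.5)
# gets closer to 3.5 as the sample size increases.
random.seed(0)

for n in (10, 1_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(n, sum(rolls) / n)
```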
## Properties of Sampling Techniques
### Sample Replacement
- [ ] Sampling with replacement: each selected item is returned to the pool before the next draw, so it can be picked again
- [ ] Sampling without replacement: each item can be picked at most once (both are compared in the sketch below)
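
A minimal sketch contrasting the two, using the standard library’s `random.choices` (with replacement) and `random.sample` (without replacement); the population here is an illustrative placeholder.

```python
import random

population = ["a", "b", "c", "d", "e"]
random.seed(1)

# With replacement: the same element can be drawn more than once.
with_replacement = random.choices(population, k=5)

# Without replacement: each draw removes the chosen element from the pool,
# so no element appears twice (and k cannot exceed the population size).
without_replacement = random.sample(population, k=5)

print(with_replacement)
print(without_replacement)
```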
### Dependence vs Independence
We generally can’t draw independent samples directly from complex, high-dimensional distributions; practical methods such as Markov chain Monte Carlo instead produce a chain of *dependent* (correlated) samples.
## Types of Sampling Techniques
### Forward Sampling
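Forward (ancestral) sampling draws each variable of a directed model in topological order, conditioning on the values already sampled for its parents. Below is a minimal sketch on a made-up two-node network Rain → WetGrass; the structure and probabilities are illustrative assumptions, not from the source.

```python
import random

# Forward (ancestral) sampling on a made-up network Rain -> WetGrass.
# Variables are sampled in topological order, each from its conditional
# distribution given its already-sampled parents.
# All probabilities here are illustrative assumptions.
def forward_sample():
    rain = random.random() < 0.2         # P(Rain) = 0.2
    p_wet = 0.9 if rain else 0.1         # P(WetGrass | Rain)
    wet_grass = random.random() < p_wet
    return {"Rain": rain, "WetGrass": wet_grass}

random.seed(0)
samples = [forward_sample() for _ in range(10_000)]
# Relative frequencies approximate the marginals implied by the network.
print(sum(s["WetGrass"] for s in samples) / len(samples))
```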
## Random Sampling
## Monte Carlo Simulations
Monte Carlo simulations rely on repeated random *sampling* to obtain numerical results.
Instead of needing a closed-form, analytic solution, we approximate expectations, integrals, and probabilities by averaging over many random samples.
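
The animation embedded at the top of this note illustrates the classic example: estimating $\pi$ by sampling points uniformly in the unit square and counting the fraction that land inside the quarter circle (that fraction approaches $\pi/4$). A minimal sketch:

```python
import random

# Monte Carlo estimate of pi: sample points uniformly in the unit square
# and count the fraction that fall inside the quarter circle of radius 1.
# That fraction approximates pi / 4.
random.seed(0)
n = 1_000_000
inside = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        inside += 1

pi_estimate = 4 * inside / n
print(pi_estimate)  # approaches 3.14159... as n grows
```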
Posterior: The joint probability distribution of some parameters of interest, $\theta$, conditioned upon some data, $D$, and a model/hypothesis, $M$.
- also called the “joint a-posteriori probability distribution”
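
In symbols, this is just Bayes’ rule (a standard identity, written here for reference): the numerator is the likelihood times the prior, and the denominator is the evidence (marginal likelihood).

$$
p(\theta \mid D, M) = \frac{p(D \mid \theta, M)\, p(\theta \mid M)}{p(D \mid M)}
$$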