Summary
Uniform Distribution (continuous)
Notation | \(Unif([a,b])\) | Here \(a,b\) are parameters, where \(-\infty < a < b < +\infty\) |
PDF | \(\begin{cases} \frac{1}{b-a} & \text{for all } x\in [a,b] \\ 0 & \text{otherwise} \end{cases}\) | |
Mean | \(\frac{a+b}{2}\) | |
Median | \(\frac{a+b}{2}\) | |
Variance | \(\frac{(b-a)^2}{12}\) |
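A quick numerical sanity check of these formulas, as a minimal sketch assuming NumPy is available (the endpoints \(a=2\), \(b=5\) and the sample size are arbitrary choices for illustration):

```python
import numpy as np

# Hypothetical parameters chosen only for illustration.
a, b = 2.0, 5.0
rng = np.random.default_rng(0)
samples = rng.uniform(a, b, size=100_000)

# Empirical values should be close to the closed-form expressions above.
print("mean     :", samples.mean(),     "vs", (a + b) / 2)
print("median   :", np.median(samples), "vs", (a + b) / 2)
print("variance :", samples.var(),      "vs", (b - a) ** 2 / 12)
```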
Bernoulli Distribution
Notation | \(Ber(p)\) | \(0<p<1\) |
PMF | \(p^x(1-p)^{1-x}\) | Here \(x\in\{0,1\}\) is the realized value of the random variable |
Mean | \(p\) | |
Variance | \(p(1-p)\) |
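A minimal sketch of the same check for the Bernoulli case, again assuming NumPy and an arbitrary choice \(p=0.3\):

```python
import numpy as np

p = 0.3                                    # hypothetical success probability
rng = np.random.default_rng(0)
x = (rng.random(100_000) < p).astype(int)  # Ber(p) via a uniform draw

# PMF p^x (1-p)^(1-x) evaluated at x = 1 and x = 0.
print("P(X=1):", p, "  P(X=0):", 1 - p)
print("empirical mean    :", x.mean(), "vs", p)
print("empirical variance:", x.var(),  "vs", p * (1 - p))
```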
Binomial Distribution
Notation | \(Bin(n,p)\) | \(n\in\{0,1,2,...\}\) is the number of trials; \(p\in(0,1)\) is the probability of success for each trial |
PMF | \(\binom{n}{k} p^k(1-p)^{n-k}\) | \(k\in\{0,1,...,n\}\) is the number of successes |
Mean | \(np\) | |
Variance | \(np(1-p)\) |
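The binomial arises as the number of successes in \(n\) independent Bernoulli trials; the sketch below (assuming NumPy, with hypothetical \(n=10\), \(p=0.4\)) simulates it that way and compares the empirical PMF, mean, and variance with the formulas above:

```python
from math import comb
import numpy as np

n, p = 10, 0.4                             # hypothetical parameters
rng = np.random.default_rng(0)

# A Bin(n, p) draw is the number of successes in n independent Ber(p) trials.
x = (rng.random((100_000, n)) < p).sum(axis=1)

k = 3
pmf_k = comb(n, k) * p**k * (1 - p) ** (n - k)
print(f"PMF at k={k}:", pmf_k, "empirical:", (x == k).mean())
print("mean    :", x.mean(), "vs", n * p)
print("variance:", x.var(),  "vs", n * p * (1 - p))
```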
Geometric Distribution
Notation | \(Geo(p)\) | \(p\) is the success probability, \(0 < p < 1\) |
PMF | \((1-p)^{k-1}p\) | \(k\in\{1,2,3,...\}\) is the number of trials up to and including the first success |
Mean | \(\frac{1}{p}\) | |
Variance | \(\frac{1-p}{p^2}\) |
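A sketch assuming NumPy, whose geometric sampler follows the same convention as the table above (\(k\) counts the trial on which the first success occurs); the value \(p=0.25\) is arbitrary:

```python
import numpy as np

p = 0.25                                   # hypothetical success probability
rng = np.random.default_rng(0)

# NumPy's geometric counts the trial on which the first success occurs,
# matching the PMF (1-p)^(k-1) p with k = 1, 2, 3, ...
k = rng.geometric(p, size=100_000)

print("PMF at k=3:", (1 - p) ** 2 * p, "empirical:", (k == 3).mean())
print("mean    :", k.mean(), "vs", 1 / p)
print("variance:", k.var(),  "vs", (1 - p) / p**2)
```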
Beta Distribution
Notation | \(Beta(\alpha,\beta)\) | \(\alpha > 0\) and \(\beta > 0\) |
PDF | \(C\, x^{\alpha-1}(1-x)^{\beta-1}\,\mathbb{1}(x\in[0,1])\) | \(C = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\) is the normalizing constant |
Mean | \(\frac{\alpha}{\alpha+\beta}\) | |
Variance | \(\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}\) |
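The constant \(C\) can be computed from gamma functions; the sketch below (assuming NumPy and the standard-library `math.gamma`, with hypothetical \(\alpha=2\), \(\beta=5\)) checks numerically that the density integrates to 1 and that the mean matches \(\frac{\alpha}{\alpha+\beta}\):

```python
from math import gamma
import numpy as np

alpha, beta = 2.0, 5.0                     # hypothetical parameters
C = gamma(alpha + beta) / (gamma(alpha) * gamma(beta))  # normalizing constant

# Riemann-sum check that C * x^(a-1) * (1-x)^(b-1) integrates to 1 on [0, 1].
dx = 1e-5
x = np.arange(dx / 2, 1.0, dx)             # midpoints of a fine grid
pdf = C * x ** (alpha - 1) * (1 - x) ** (beta - 1)
print("integral of PDF:", (pdf * dx).sum())
print("mean           :", (x * pdf * dx).sum(), "vs", alpha / (alpha + beta))
```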
Gaussian Distribution
Notation | \(\mathcal{N}(\mu,\sigma^2)\) | Here \(\mu,\sigma^2\) are parameters, where \(-\infty < \mu < +\infty\) and \(\sigma^2 > 0\) |
PDF | \(f(x)=\frac{1}{\sigma \sqrt{2\pi }} \exp \left(-\frac{(x-\mu )^2}{2 \sigma ^2}\right)\) | \(-\infty < x < \infty\) |
Mean | \(\mu\) | |
Variance | \(\sigma^2\) |
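A minimal sketch, assuming NumPy, that evaluates the density formula directly and checks the mean and variance by sampling (the parameters \(\mu=1\), \(\sigma=2\) are arbitrary):

```python
import numpy as np

mu, sigma = 1.0, 2.0                       # hypothetical parameters
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=100_000)

# Density at a point, straight from the formula above.
def pdf(t):
    return np.exp(-(t - mu) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print("f(mu):", pdf(mu), "(the peak of the bell curve)")
print("mean    :", x.mean(), "vs", mu)
print("variance:", x.var(),  "vs", sigma**2)
```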
Exponential Distribution
Notation | \(Exp(\lambda)\) | Here \(\lambda\) is a parameter, where \(\lambda > 0\) |
PDF | \(\lambda e^{-\lambda t}\) | \(t > 0\) |
Mean | \(\frac{1}{\lambda}\) | |
Variance | \(\frac{1}{\lambda^2}\) |
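A sketch assuming NumPy; note that NumPy's sampler is parametrized by the scale \(1/\lambda\) rather than the rate \(\lambda\) (the rate \(\lambda=0.5\) and the tail-probability check at \(t=2\) are arbitrary choices):

```python
import numpy as np

lam = 0.5                                  # hypothetical rate parameter
rng = np.random.default_rng(0)
t = rng.exponential(scale=1 / lam, size=100_000)  # NumPy uses scale = 1/lambda

print("mean    :", t.mean(), "vs", 1 / lam)
print("variance:", t.var(),  "vs", 1 / lam**2)
# Tail probability P(T > s) = exp(-lambda * s), checked at s = 2:
print("P(T > 2):", (t > 2).mean(), "vs", np.exp(-lam * 2))
```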
Law of Large Numbers (LLN)
Say we have \(n\) observations: let \(X,X_1,X_2,\dots,X_n\) be i.i.d. random variables with \(\mathbb{E}[X]=\mu\).
Then: \[\overline{X}_n:=\frac{1}{n}\sum _{i=1}^ n X_ i \xrightarrow [n\to \infty ]{\mathbb{P},\text{ a.s.}} \mu\]
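To see the LLN numerically, here is a minimal sketch assuming NumPy, with a hypothetical choice \(X \sim Exp(1)\) so that \(\mu = 1\); the running sample mean should settle near 1 as \(n\) grows:

```python
import numpy as np

# Hypothetical example: X ~ Exp(1), so E[X] = mu = 1.
rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=1_000_000)

# Running sample mean X̄_n = (X_1 + ... + X_n) / n for n = 1, ..., 1e6.
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
for n in (10, 1_000, 100_000, 1_000_000):
    print(f"n = {n:>9}: sample mean = {running_mean[n - 1]:.4f}")
```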
Central Limit Theorem (CLT)
Say we have \(n\) observations: let \(X,X_1,X_2,\dots,X_n\) be i.i.d. random variables with \(\mathbb{E}[X]=\mu\) and \(Var(X)=\sigma^2 < \infty\). Then:
\[\sqrt{n} \frac{\overline{X}_n-\mu }{\sigma } \xrightarrow [n\to \infty ]{(d)} \mathcal{N}(0,1) \]
equivalently: \[\sqrt{n} (\overline{X}_n-\mu ) \xrightarrow [n\to \infty ]{(d)} \mathcal{N}(0,\sigma^2) \]
The quantity \(\sigma^2\) is called the asymptotic variance of \(\overline{X}_n\).
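A minimal sketch of the CLT in action, assuming NumPy, with a hypothetical choice \(X_i \sim Ber(p)\), \(p=0.3\), \(n=500\): the standardized sample means behave approximately like draws from \(\mathcal{N}(0,1)\):

```python
import numpy as np

# Hypothetical example: X_i ~ Ber(p), so mu = p and sigma^2 = p(1 - p).
p, n, reps = 0.3, 500, 20_000
mu, sigma = p, np.sqrt(p * (1 - p))

rng = np.random.default_rng(0)
xbar = (rng.random((reps, n)) < p).mean(axis=1)  # 20,000 sample means of size n
z = np.sqrt(n) * (xbar - mu) / sigma             # standardized as in the CLT

# The standardized means should look like N(0, 1): mean ~0, variance ~1,
# and about 95% of the mass inside [-1.96, 1.96].
print("mean:", z.mean(), " variance:", z.var())
print("P(|Z| <= 1.96):", (np.abs(z) <= 1.96).mean())
```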
In progress ...