Probability Mass Function (PMF):
\[P(X = x) = p(x)\]
Cumulative Distribution Function (CDF):
\[F(x) = P(X \leq x) = \sum_{t \leq x} p(t)\]
Expected Value (Mean):
\[E(X) = \mu = \sum_{\text{all } x} x \cdot p(x)\]
Variance:
\[\text{Var}(X) = \sigma^2 = E[(X - \mu)^2] = \sum_{\text{all } x} (x - \mu)^2 \cdot p(x)\]
Alternative formula:
\[\sigma^2 = E(X^2) - [E(X)]^2 = \sum_{\text{all } x} x^2 \cdot p(x) - \mu^2\]
Probability Mass Function:
\[P(X = x) = \binom{n}{x} p^x (1-p)^{n-x} = \frac{n!}{x!(n-x)!} p^x (1-p)^{n-x}\]
Conditions for Binomial Distribution:
- A fixed number n of trials
- Each trial has exactly two outcomes (success/failure)
- The probability of success p is the same on every trial
- The trials are independent
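A quick numerical check of the binomial PMF against the general discrete mean/variance formulas, using hypothetical parameters n = 10, p = 0.3:

```python
from math import comb

# Binomial PMF for hypothetical parameters n = 10, p = 0.3
n, p = 10, 0.3
pmf = {x: comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)}

# The PMF sums to 1, and the discrete formulas
#   E(X) = sum x*p(x),  Var(X) = sum x^2*p(x) - mu^2
# reproduce the closed forms np and np(1-p).
total = sum(pmf.values())
mean = sum(x * q for x, q in pmf.items())
var = sum(x**2 * q for x, q in pmf.items()) - mean**2

print(total)  # ~1.0
print(mean)   # ~3.0 = np
print(var)    # ~2.1 = np(1-p)
```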
Mean (Expected Value):
\[\mu = E(X) = np\]
Variance:
\[\sigma^2 = np(1-p) = npq\]
Standard Deviation:
\[\sigma = \sqrt{np(1-p)} = \sqrt{npq}\]
Probability Mass Function:
\[P(X = x) = \frac{\lambda^x e^{-\lambda}}{x!}\]
Conditions for Poisson Distribution:
- Events occur independently of one another
- Events occur at a constant average rate \(\lambda\) per interval
- Two events cannot occur at exactly the same instant
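A numerical check of the Poisson PMF (hypothetical rate λ = 4, truncated at a point where the tail is negligible):

```python
from math import exp, factorial

# Poisson PMF with hypothetical rate lam = 4, truncated at x = 99
lam = 4.0
pmf = [lam**x * exp(-lam) / factorial(x) for x in range(100)]

total = sum(pmf)                                          # ~1 (truncation error negligible)
mean = sum(x * q for x, q in enumerate(pmf))              # ~lam
var = sum(x**2 * q for x, q in enumerate(pmf)) - mean**2  # ~lam

print(total, mean, var)
```

Both the mean and the variance come out equal to λ, as the formulas below state.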
Mean (Expected Value):
\[\mu = E(X) = \lambda\]
Variance:
\[\sigma^2 = \lambda\]
Standard Deviation:
\[\sigma = \sqrt{\lambda}\]
Poisson Approximation to Binomial:
When n is large and p is small (typically n ≥ 20 and p ≤ 0.05, or np ≤ 5), use the Poisson PMF with \(\lambda = np\):
\[P(X = x) \approx \frac{(np)^x e^{-np}}{x!}\]
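A sketch of how close this approximation is, with hypothetical n = 100, p = 0.02 (so λ = np = 2):

```python
from math import comb, exp, factorial

# Compare Binomial(n, p) against Poisson(lam = np) pointwise
n, p = 100, 0.02
lam = n * p  # 2.0
max_err = max(
    abs(comb(n, x) * p**x * (1 - p)**(n - x) - lam**x * exp(-lam) / factorial(x))
    for x in range(n + 1)
)
print(max_err)  # small: the two PMFs agree to a few parts in a thousand
```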
Probability Mass Function (Number of Trials Until First Success):
\[P(X = x) = (1-p)^{x-1} p = q^{x-1}p\]
Mean (Expected Value):
\[\mu = E(X) = \frac{1}{p}\]
Variance:
\[\sigma^2 = \frac{1-p}{p^2} = \frac{q}{p^2}\]
Standard Deviation:
\[\sigma = \frac{\sqrt{1-p}}{p} = \frac{\sqrt{q}}{p}\]
Probability Mass Function:
\[P(X = x) = \frac{\binom{K}{x} \binom{N-K}{n-x}}{\binom{N}{n}}\]
Mean (Expected Value):
\[\mu = E(X) = n \cdot \frac{K}{N}\]
Variance:
\[\sigma^2 = n \cdot \frac{K}{N} \cdot \frac{N-K}{N} \cdot \frac{N-n}{N-1}\]
Probability Density Function (PDF):
\[P(a \leq X \leq b) = \int_a^b f(x) \, dx\]
Cumulative Distribution Function (CDF):
\[F(x) = P(X \leq x) = \int_{-\infty}^x f(t) \, dt\]
Expected Value (Mean):
\[E(X) = \mu = \int_{-\infty}^{\infty} x \cdot f(x) \, dx\]
Variance:
\[\text{Var}(X) = \sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 \cdot f(x) \, dx\]
Alternative formula:
\[\sigma^2 = E(X^2) - [E(X)]^2 = \int_{-\infty}^{\infty} x^2 \cdot f(x) \, dx - \mu^2\]
Probability Density Function:
\[f(x) = \begin{cases} \frac{1}{b-a} & \text{for } a \leq x \leq b \\ 0 & \text{otherwise} \end{cases}\]
Cumulative Distribution Function:
\[F(x) = \begin{cases} 0 & \text{for } x < a \\ \frac{x-a}{b-a} & \text{for } a \leq x \leq b \\ 1 & \text{for } x > b \end{cases}\]
Mean (Expected Value):
\[\mu = E(X) = \frac{a + b}{2}\]
Variance:
\[\sigma^2 = \frac{(b-a)^2}{12}\]
Standard Deviation:
\[\sigma = \frac{b-a}{\sqrt{12}} = \frac{b-a}{2\sqrt{3}}\]
Probability Density Function:
\[f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\]
Properties of Normal Distribution:
- Bell-shaped and symmetric about the mean \(\mu\)
- Mean, median, and mode coincide at \(\mu\)
- Extends from \(-\infty\) to \(\infty\), with total area under the curve equal to 1
Empirical Rule (68-95-99.7 Rule):
- About 68% of values lie within \(\mu \pm \sigma\)
- About 95% of values lie within \(\mu \pm 2\sigma\)
- About 99.7% of values lie within \(\mu \pm 3\sigma\)
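The empirical-rule percentages can be recovered from the normal CDF, which the standard library expresses through the error function, \(\Phi(z) = \tfrac{1}{2}(1 + \operatorname{erf}(z/\sqrt{2}))\):

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# P(mu - k*sigma <= X <= mu + k*sigma) = Phi(k) - Phi(-k)
for k in (1, 2, 3):
    print(k, phi(k) - phi(-k))
# k=1 -> ~0.6827, k=2 -> ~0.9545, k=3 -> ~0.9973
```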
Standard Normal Variable:
\[Z = \frac{X - \mu}{\sigma}\]
Probability Density Function:
\[f(z) = \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}}\]
Cumulative Distribution Function:
\[F(z) = \Phi(z) = P(Z \leq z) = \int_{-\infty}^z \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} \, dt\]
Symmetry Property:
\[\Phi(-z) = 1 - \Phi(z)\]
\[P(Z > z) = 1 - \Phi(z)\]
Common Z-values:
- \(z = 1.282\): \(\Phi(z) \approx 0.90\)
- \(z = 1.645\): \(\Phi(z) \approx 0.95\)
- \(z = 1.960\): \(\Phi(z) \approx 0.975\)
- \(z = 2.326\): \(\Phi(z) \approx 0.99\)
- \(z = 2.576\): \(\Phi(z) \approx 0.995\)
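The symmetry property and the common table values can both be verified numerically with the same erf-based CDF:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Symmetry: Phi(-z) = 1 - Phi(z)
assert abs(phi(-1.5) - (1 - phi(1.5))) < 1e-12

# Common table values
print(round(phi(1.645), 3))  # ~0.95
print(round(phi(1.960), 3))  # ~0.975
print(round(phi(2.576), 3))  # ~0.995
```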
Probability Calculation for Normal Distribution:
\[P(a \leq X \leq b) = \Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right)\]
Probability Density Function:
\[f(x) = \lambda e^{-\lambda x} \quad \text{for } x \geq 0\]
Cumulative Distribution Function:
\[F(x) = P(X \leq x) = 1 - e^{-\lambda x} \quad \text{for } x \geq 0\]
Survival Function (Reliability Function):
\[P(X > x) = e^{-\lambda x}\]
Mean (Expected Value):
\[\mu = E(X) = \frac{1}{\lambda}\]
Variance:
\[\sigma^2 = \frac{1}{\lambda^2}\]
Standard Deviation:
\[\sigma = \frac{1}{\lambda}\]
Memoryless Property:
\[P(X > s + t \mid X > s) = P(X > t)\]
Median:
\[\text{Median} = \frac{\ln(2)}{\lambda} \approx \frac{0.693}{\lambda}\]
Definition:
If \(\ln(X)\) is normally distributed with mean \(\mu\) and variance \(\sigma^2\), then X follows a log-normal distribution.
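As a numerical check of this definition and of the moment formulas that follow, one can integrate the log-normal density directly (a midpoint-rule sketch, with hypothetical parameters μ = 0, σ = 0.5):

```python
from math import exp, log, pi, sqrt

# Hypothetical parameters
mu, sigma = 0.0, 0.5

def pdf(x):
    # Log-normal density for x > 0
    return exp(-(log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * sqrt(2 * pi))

# Midpoint rule on [0, 30]; the tail beyond 30 is negligible for these parameters
steps, hi = 200_000, 30.0
h = hi / steps
xs = [(i + 0.5) * h for i in range(steps)]
mean_num = sum(x * pdf(x) * h for x in xs)
cdf_at_median = sum(pdf(x) * h for x in xs if x <= exp(mu))

print(mean_num)       # close to exp(mu + sigma^2/2) = exp(0.125) ≈ 1.133
print(cdf_at_median)  # close to 0.5, confirming median = exp(mu)
```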
Probability Density Function:
\[f(x) = \frac{1}{x\sigma\sqrt{2\pi}} e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}} \quad \text{for } x > 0\]
Mean (Expected Value):
\[E(X) = e^{\mu + \frac{\sigma^2}{2}}\]
Variance:
\[\text{Var}(X) = e^{2\mu + \sigma^2}(e^{\sigma^2} - 1)\]
Median:
\[\text{Median} = e^{\mu}\]
Mode:
\[\text{Mode} = e^{\mu - \sigma^2}\]
Conversion to Standard Normal:
\[Z = \frac{\ln(X) - \mu}{\sigma}\]
Probability Density Function:
\[f(x) = \frac{\beta}{\alpha}\left(\frac{x}{\alpha}\right)^{\beta-1} e^{-(x/\alpha)^\beta} \quad \text{for } x \geq 0\]
Cumulative Distribution Function:
\[F(x) = 1 - e^{-(x/\alpha)^\beta} \quad \text{for } x \geq 0\]
Reliability Function:
\[R(x) = P(X > x) = e^{-(x/\alpha)^\beta}\]
Hazard Function:
\[h(x) = \frac{\beta}{\alpha}\left(\frac{x}{\alpha}\right)^{\beta-1}\]
Mean (Expected Value):
\[\mu = E(X) = \alpha \Gamma\left(1 + \frac{1}{\beta}\right)\]
Variance:
\[\sigma^2 = \alpha^2 \left[\Gamma\left(1 + \frac{2}{\beta}\right) - \left[\Gamma\left(1 + \frac{1}{\beta}\right)\right]^2\right]\]
Special Cases:
- \(\beta = 1\): exponential distribution with \(\lambda = 1/\alpha\) (constant hazard rate)
- \(\beta = 2\): Rayleigh distribution
- \(\beta < 1\): decreasing hazard rate; \(\beta > 1\): increasing hazard rate
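A short check of the β = 1 special case (with a hypothetical scale α = 2): the Weibull reliability function reduces to the exponential survival function, and the mean formula α·Γ(1 + 1/β) reduces to α:

```python
from math import exp, gamma

alpha = 2.0  # hypothetical scale parameter

def reliability(x, beta):
    # Weibull reliability R(x) = exp(-(x/alpha)^beta)
    return exp(-(x / alpha) ** beta)

# beta = 1 reduces to the exponential distribution with lambda = 1/alpha
assert abs(reliability(1.5, 1.0) - exp(-1.5 / alpha)) < 1e-12

# Mean: alpha * Gamma(1 + 1/beta); for beta = 1 this is alpha * Gamma(2) = alpha
mean_beta1 = alpha * gamma(1 + 1 / 1.0)
print(mean_beta1)  # 2.0
```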
Conditions:
When n is large and p is not close to 0 or 1 (a common rule: np ≥ 5 and n(1-p) ≥ 5), the binomial distribution can be approximated by a normal distribution.
Approximation Parameters:
\[\mu = np\]
\[\sigma = \sqrt{np(1-p)}\]
Continuity Correction:
For P(X = k), use:
\[P(k - 0.5 \leq X \leq k + 0.5)\]
For P(X ≤ k), use:
\[P(X \leq k + 0.5)\]
For P(X ≥ k), use:
\[P(X \geq k - 0.5)\]
For P(X < k), use:
\[P(X \leq k - 0.5)\]
For P(X > k), use:
\[P(X \geq k + 0.5)\]
Conditions:
When λ is large (typically λ ≥ 10 or λ ≥ 20), the Poisson distribution can be approximated by a normal distribution.
Approximation Parameters:
\[\mu = \lambda\]
\[\sigma = \sqrt{\lambda}\]
Continuity Correction:
Apply similar continuity corrections as binomial approximation.
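The continuity corrections above can be checked against an exact computation; a sketch for the binomial case with hypothetical n = 100, p = 0.5, k = 55:

```python
from math import comb, erf, sqrt

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Exact P(X <= 55) for X ~ Binomial(100, 0.5)
n, p, k = 100, 0.5, 55
exact = sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k + 1))

# Normal approximation with continuity correction: P(X <= k + 0.5)
mu, sigma = n * p, sqrt(n * p * (1 - p))
approx = phi((k + 0.5 - mu) / sigma)

print(exact, approx)  # the two values agree closely
```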
Linear Transformation:
\[E(aX + b) = aE(X) + b\]
Sum of Random Variables:
\[E(X + Y) = E(X) + E(Y)\]
Linear Combination:
\[E(aX + bY) = aE(X) + bE(Y)\]
Product of Independent Variables:
\[E(XY) = E(X) \cdot E(Y) \quad \text{if X and Y are independent}\]
Linear Transformation:
\[\text{Var}(aX + b) = a^2 \text{Var}(X)\]
Sum of Independent Random Variables:
\[\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) \quad \text{if X and Y are independent}\]
Difference of Independent Random Variables:
\[\text{Var}(X - Y) = \text{Var}(X) + \text{Var}(Y) \quad \text{if X and Y are independent}\]
Linear Combination of Independent Variables:
\[\text{Var}(aX + bY) = a^2 \text{Var}(X) + b^2 \text{Var}(Y) \quad \text{if X and Y are independent}\]
General Linear Combination:
\[\text{Var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2 \text{Var}(X_i) \quad \text{if } X_i \text{ are independent}\]
If \(X \sim N(\mu_1, \sigma_1^2)\) and \(Y \sim N(\mu_2, \sigma_2^2)\) are independent:
\[X + Y \sim N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)\]
Linear Combination of Normal Variables:
\[aX + bY \sim N(a\mu_1 + b\mu_2, a^2\sigma_1^2 + b^2\sigma_2^2)\]
Statement:
For independent and identically distributed random variables X1, X2, ..., Xn with mean μ and finite variance σ², the distribution of the sample mean approaches a normal distribution as n increases.
Sample Mean:
\[\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i\]
Distribution of Sample Mean:
\[E(\bar{X}) = \mu\]
\[\text{Var}(\bar{X}) = \frac{\sigma^2}{n}\]
\[\text{Standard Error} = \sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}\]
Standardized Form:
\[Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1) \quad \text{as } n \to \infty\]
Sample Sum:
\[S = \sum_{i=1}^n X_i\]
\[E(S) = n\mu\]
\[\text{Var}(S) = n\sigma^2\]
\[\sigma_S = \sqrt{n}\sigma\]
Standardized Sum:
\[Z = \frac{S - n\mu}{\sqrt{n}\sigma} = \frac{\sum X_i - n\mu}{\sqrt{n}\sigma} \sim N(0, 1) \quad \text{as } n \to \infty\]
Rule of Thumb:
A sample size of n ≥ 30 is usually large enough for the normal approximation to be adequate; smaller n suffices when the population itself is close to normal.
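A small simulation illustrating the standard-error formula (fixed seed for reproducibility; a Uniform(0, 1) population with μ = 0.5 and σ² = 1/12, hypothetical n = 30 per sample):

```python
import random
from statistics import mean, stdev

random.seed(12345)  # fixed seed so the run is reproducible
n, reps = 30, 2000

# Draw many samples of size n and record each sample mean
xbars = [mean(random.random() for _ in range(n)) for _ in range(reps)]

# E(xbar) should be near mu = 0.5, and the spread of the sample means
# should be near sigma/sqrt(n) = sqrt(1/12/30) ≈ 0.0527
print(mean(xbars))
print(stdev(xbars))
```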
Sample Proportion:
\[\hat{p} = \frac{X}{n}\]
Mean of Sample Proportion:
\[E(\hat{p}) = p\]
Variance of Sample Proportion:
\[\text{Var}(\hat{p}) = \frac{p(1-p)}{n}\]
Standard Error of Sample Proportion:
\[\sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}}\]
Approximate Normal Distribution (when np ≥ 5 and n(1-p) ≥ 5):
\[Z = \frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}} \sim N(0, 1)\]
For independent samples from two populations:
\[\bar{X}_1 - \bar{X}_2\]
Mean of Difference:
\[E(\bar{X}_1 - \bar{X}_2) = \mu_1 - \mu_2\]
Variance of Difference (independent samples):
\[\text{Var}(\bar{X}_1 - \bar{X}_2) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}\]
Standard Error:
\[\sigma_{\bar{X}_1 - \bar{X}_2} = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}\]
Standardized Form (for normal populations or large samples):
\[Z = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \sim N(0, 1)\]
For independent samples:
\[\hat{p}_1 - \hat{p}_2\]
Mean of Difference:
\[E(\hat{p}_1 - \hat{p}_2) = p_1 - p_2\]
Variance of Difference:
\[\text{Var}(\hat{p}_1 - \hat{p}_2) = \frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}\]
Standard Error:
\[\sigma_{\hat{p}_1 - \hat{p}_2} = \sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}\]
Standardized Form (when sample sizes satisfy conditions):
\[Z = \frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}} \sim N(0, 1)\]
Moment Generating Function (MGF):
\[M_X(t) = E(e^{tX})\]
For Discrete Distribution:
\[M_X(t) = \sum_{\text{all } x} e^{tx} p(x)\]
For Continuous Distribution:
\[M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x) \, dx\]
n-th Moment:
\[E(X^n) = M_X^{(n)}(0) = \left.\frac{d^n M_X(t)}{dt^n}\right|_{t=0}\]
First Moment (Mean):
\[E(X) = M_X'(0)\]
Second Moment:
\[E(X^2) = M_X''(0)\]
Variance from MGF:
\[\text{Var}(X) = M_X''(0) - [M_X'(0)]^2\]
MGF of Linear Transformation:
\[M_{aX+b}(t) = e^{bt} M_X(at)\]
MGF of Sum of Independent Variables:
\[M_{X+Y}(t) = M_X(t) \cdot M_Y(t) \quad \text{if X and Y are independent}\]
Binomial Distribution:
\[M_X(t) = [(1-p) + pe^t]^n\]
Poisson Distribution:
\[M_X(t) = e^{\lambda(e^t - 1)}\]
Normal Distribution:
\[M_X(t) = e^{\mu t + \frac{\sigma^2 t^2}{2}}\]
Exponential Distribution:
\[M_X(t) = \frac{\lambda}{\lambda - t} \quad \text{for } t < \lambda\]
Uniform Distribution U(a,b):
\[M_X(t) = \frac{e^{tb} - e^{ta}}{t(b-a)} \quad \text{for } t \neq 0\]
p-th Percentile (Quantile):
\[P(X \leq x_p) = p\]
Median:
\[P(X \leq \text{median}) = 0.5\]
Quartiles:
\[P(X \leq Q_1) = 0.25 \qquad P(X \leq Q_2) = 0.5 \qquad P(X \leq Q_3) = 0.75\]
Interquartile Range (IQR):
\[\text{IQR} = Q_3 - Q_1\]
Coefficient of Skewness:
\[\gamma_1 = E\left[\left(\frac{X-\mu}{\sigma}\right)^3\right] = \frac{E[(X-\mu)^3]}{\sigma^3}\]
Coefficient of Kurtosis:
\[\gamma_2 = E\left[\left(\frac{X-\mu}{\sigma}\right)^4\right] - 3 = \frac{E[(X-\mu)^4]}{\sigma^4} - 3\]
Coefficient of Variation (CV):
\[\text{CV} = \frac{\sigma}{\mu} \times 100\%\]
Reliability Function:
\[R(t) = P(X > t) = 1 - F(t)\]
Hazard Rate (Failure Rate):
\[h(t) = \frac{f(t)}{R(t)} = \frac{f(t)}{1-F(t)}\]
Cumulative Hazard Function:
\[H(t) = \int_0^t h(u) \, du = -\ln[R(t)]\]
Relationship Between Reliability and Hazard:
\[R(t) = e^{-H(t)} = e^{-\int_0^t h(u) \, du}\]
Mean Time to Failure (MTTF):
\[\text{MTTF} = E(X) = \int_0^{\infty} R(t) \, dt\]
Exponential Distribution (Constant Failure Rate):
\[h(t) = \lambda \quad \text{(constant)}\]
\[R(t) = e^{-\lambda t}\]
\[\text{MTTF} = \frac{1}{\lambda}\]
Weibull Distribution (Variable Failure Rate):
\[h(t) = \frac{\beta}{\alpha}\left(\frac{t}{\alpha}\right)^{\beta-1}\]
Conditional Expected Value:
\[E(X \mid Y = y) = \sum_{\text{all } x} x \cdot P(X = x \mid Y = y) \quad \text{(discrete)}\]
\[E(X \mid Y = y) = \int_{-\infty}^{\infty} x \cdot f_{X|Y}(x|y) \, dx \quad \text{(continuous)}\]
Law of Total Expectation:
\[E(X) = E[E(X \mid Y)]\]
Conditional Variance:
\[\text{Var}(X \mid Y = y) = E(X^2 \mid Y = y) - [E(X \mid Y = y)]^2\]
Law of Total Variance:
\[\text{Var}(X) = E[\text{Var}(X \mid Y)] + \text{Var}[E(X \mid Y)]\]
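Both laws can be verified exactly on a small discrete joint distribution; a sketch using a hypothetical joint PMF p(x, y):

```python
# A small hypothetical joint PMF p(x, y); probabilities sum to 1
joint = {
    (0, 0): 0.10, (1, 0): 0.20, (2, 0): 0.10,
    (0, 1): 0.15, (1, 1): 0.15, (2, 1): 0.30,
}

# Marginal of Y and conditional moments of X given Y = y
p_y = {y: sum(q for (x, yy), q in joint.items() if yy == y) for y in (0, 1)}

def e_x_given(y, power=1):
    return sum(x**power * q for (x, yy), q in joint.items() if yy == y) / p_y[y]

# Direct E(X) and Var(X) from the marginal of X
ex = sum(x * q for (x, _), q in joint.items())
ex2 = sum(x**2 * q for (x, _), q in joint.items())
var_x = ex2 - ex**2

# Law of total expectation: E(X) = E[E(X|Y)]
ex_tower = sum(p_y[y] * e_x_given(y) for y in p_y)
assert abs(ex - ex_tower) < 1e-12

# Law of total variance: Var(X) = E[Var(X|Y)] + Var[E(X|Y)]
cond_var = {y: e_x_given(y, 2) - e_x_given(y) ** 2 for y in p_y}
e_cond_var = sum(p_y[y] * cond_var[y] for y in p_y)
var_cond_mean = sum(p_y[y] * e_x_given(y) ** 2 for y in p_y) - ex_tower**2
assert abs(var_x - (e_cond_var + var_cond_mean)) < 1e-12

print(ex, var_x)
```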