
Formula Sheet: Probability Distributions

Discrete Probability Distributions

General Discrete Probability

Probability Mass Function (PMF):

\[P(X = x) = p(x)\]
  • X = discrete random variable
  • x = specific value of the random variable
  • p(x) = probability that X equals x
  • Requirement: \(0 \leq p(x) \leq 1\) and \(\sum_{\text{all } x} p(x) = 1\)

Cumulative Distribution Function (CDF):

\[F(x) = P(X \leq x) = \sum_{t \leq x} p(t)\]
  • F(x) = probability that X is less than or equal to x
  • Non-decreasing function
  • \(0 \leq F(x) \leq 1\)

Expected Value (Mean):

\[E(X) = \mu = \sum_{\text{all } x} x \cdot p(x)\]
  • μ = mean or expected value
  • E(X) = expected value of X

Variance:

\[\text{Var}(X) = \sigma^2 = E[(X - \mu)^2] = \sum_{\text{all } x} (x - \mu)^2 \cdot p(x)\]

Alternative formula:

\[\sigma^2 = E(X^2) - [E(X)]^2 = \sum_{\text{all } x} x^2 \cdot p(x) - \mu^2\]
  • σ² = variance
  • σ = standard deviation = \(\sqrt{\sigma^2}\)
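
As a quick sanity check of the definitions above (not part of the original sheet), both variance formulas can be evaluated against a fair six-sided die; the die is purely an illustrative example:

```python
# PMF of a fair six-sided die (illustrative example)
pmf = {x: 1 / 6 for x in range(1, 7)}

# Requirement: probabilities sum to 1
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# E(X) = sum of x * p(x)
mu = sum(x * p for x, p in pmf.items())
# Var(X) = E[(X - mu)^2]
var = sum((x - mu) ** 2 * p for x, p in pmf.items())
# Alternative formula: E(X^2) - mu^2
var_alt = sum(x * x * p for x, p in pmf.items()) - mu ** 2
# mu = 3.5, var = var_alt = 35/12
```

Both variance formulas agree, as the algebraic identity guarantees.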

Binomial Distribution

Probability Mass Function:

\[P(X = x) = \binom{n}{x} p^x (1-p)^{n-x} = \frac{n!}{x!(n-x)!} p^x (1-p)^{n-x}\]
  • n = number of independent trials
  • x = number of successes (x = 0, 1, 2, ..., n)
  • p = probability of success on a single trial
  • (1 - p) = q = probability of failure on a single trial
  • Notation: X ~ B(n, p) or X ~ Binomial(n, p)

Conditions for Binomial Distribution:

  • Fixed number of trials (n)
  • Each trial is independent
  • Only two outcomes per trial (success or failure)
  • Probability of success (p) is constant for each trial

Mean (Expected Value):

\[\mu = E(X) = np\]

Variance:

\[\sigma^2 = np(1-p) = npq\]

Standard Deviation:

\[\sigma = \sqrt{np(1-p)} = \sqrt{npq}\]
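
The binomial formulas above can be cross-checked numerically. This sketch (n = 10 and p = 0.3 are illustrative values, not from the sheet) builds the PMF with `math.comb` and recovers the mean np and variance np(1 − p):

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)"""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

n, p = 10, 0.3  # illustrative values
probs = [binom_pmf(x, n, p) for x in range(n + 1)]

mu = sum(x * q for x, q in enumerate(probs))              # should equal n*p = 3.0
var = sum(x**2 * q for x, q in enumerate(probs)) - mu**2  # should equal n*p*(1-p) = 2.1
```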

Poisson Distribution

Probability Mass Function:

\[P(X = x) = \frac{\lambda^x e^{-\lambda}}{x!}\]
  • λ = average number of occurrences in the interval (λ > 0)
  • x = number of occurrences (x = 0, 1, 2, 3, ...)
  • e = Euler's number ≈ 2.71828
  • Notation: X ~ Poisson(λ)

Conditions for Poisson Distribution:

  • Events occur independently
  • Average rate (λ) is constant
  • Events occur one at a time
  • Used for rare events over time or space

Mean (Expected Value):

\[\mu = E(X) = \lambda\]

Variance:

\[\sigma^2 = \lambda\]

Standard Deviation:

\[\sigma = \sqrt{\lambda}\]

Poisson Approximation to Binomial:
When n is large and p is small (typically n ≥ 20 and p ≤ 0.05, or np ≤ 5), use:

\[\lambda = np\]
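
A short numerical illustration of this approximation (the values n = 100, p = 0.02 are chosen here as an example, giving λ = np = 2): the largest pointwise gap between the exact binomial PMF and its Poisson approximation is small.

```python
from math import comb, exp, factorial

n, p = 100, 0.02        # large n, small p (illustrative)
lam = n * p             # lambda = np = 2

def binom_pmf(x):
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def poisson_pmf(x):
    return lam**x * exp(-lam) / factorial(x)

# Largest pointwise gap between exact and approximate PMFs
max_err = max(abs(binom_pmf(x) - poisson_pmf(x)) for x in range(11))
```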

Geometric Distribution

Probability Mass Function (Number of Trials Until First Success):

\[P(X = x) = (1-p)^{x-1} p = q^{x-1}p\]
  • x = trial number on which first success occurs (x = 1, 2, 3, ...)
  • p = probability of success on each trial
  • q = (1 - p) = probability of failure

Mean (Expected Value):

\[\mu = E(X) = \frac{1}{p}\]

Variance:

\[\sigma^2 = \frac{1-p}{p^2} = \frac{q}{p^2}\]

Standard Deviation:

\[\sigma = \frac{\sqrt{1-p}}{p} = \frac{\sqrt{q}}{p}\]
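
The geometric mean 1/p and variance q/p² can be verified by summing the PMF over a long truncated range (p = 0.25 is an illustrative value; the tail beyond x = 2000 is negligibly small for this p):

```python
p = 0.25            # illustrative success probability
q = 1 - p

# Truncated sum over the PMF q^(x-1) * p
terms = [(x, q ** (x - 1) * p) for x in range(1, 2001)]

total = sum(prob for _, prob in terms)                # ~ 1
mu = sum(x * prob for x, prob in terms)               # ~ 1/p = 4
var = sum(x**2 * prob for x, prob in terms) - mu**2   # ~ q/p^2 = 12
```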

Hypergeometric Distribution

Probability Mass Function:

\[P(X = x) = \frac{\binom{K}{x} \binom{N-K}{n-x}}{\binom{N}{n}}\]
  • N = population size
  • K = number of success states in population
  • n = number of draws (sample size)
  • x = number of observed successes
  • Used for sampling without replacement
  • x must satisfy: max(0, n - (N - K)) ≤ x ≤ min(n, K)

Mean (Expected Value):

\[\mu = E(X) = n \cdot \frac{K}{N}\]

Variance:

\[\sigma^2 = n \cdot \frac{K}{N} \cdot \frac{N-K}{N} \cdot \frac{N-n}{N-1}\]
  • The factor \(\frac{N-n}{N-1}\) is the finite population correction factor
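
A numerical check of the hypergeometric mean and variance formulas (the population N = 50, success count K = 12, and sample size n = 10 are illustrative values only):

```python
from math import comb

N, K, n = 50, 12, 10   # illustrative population and sample

def hyper_pmf(x):
    return comb(K, x) * comb(N - K, n - x) / comb(N, n)

lo, hi = max(0, n - (N - K)), min(n, K)    # support of x
probs = {x: hyper_pmf(x) for x in range(lo, hi + 1)}

mu = sum(x * p for x, p in probs.items())
var = sum(x**2 * p for x, p in probs.items()) - mu**2

mu_formula = n * K / N                                           # 2.4
var_formula = n * (K / N) * ((N - K) / N) * ((N - n) / (N - 1))  # includes FPC
```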

Continuous Probability Distributions

General Continuous Probability

Probability Density Function (PDF):

\[P(a \leq X \leq b) = \int_a^b f(x) \, dx\]
  • f(x) = probability density function
  • Requirement: \(f(x) \geq 0\) for all x
  • \(\int_{-\infty}^{\infty} f(x) \, dx = 1\)
  • Note: P(X = a) = 0 for any specific value a

Cumulative Distribution Function (CDF):

\[F(x) = P(X \leq x) = \int_{-\infty}^x f(t) \, dt\]
  • F(x) = cumulative distribution function
  • \(f(x) = \frac{dF(x)}{dx}\)

Expected Value (Mean):

\[E(X) = \mu = \int_{-\infty}^{\infty} x \cdot f(x) \, dx\]

Variance:

\[\text{Var}(X) = \sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 \cdot f(x) \, dx\]

Alternative formula:

\[\sigma^2 = E(X^2) - [E(X)]^2 = \int_{-\infty}^{\infty} x^2 \cdot f(x) \, dx - \mu^2\]

Uniform Distribution (Continuous)

Probability Density Function:

\[f(x) = \begin{cases} \frac{1}{b-a} & \text{for } a \leq x \leq b \\ 0 & \text{otherwise} \end{cases}\]
  • a = minimum value
  • b = maximum value
  • Notation: X ~ U(a, b) or X ~ Uniform(a, b)

Cumulative Distribution Function:

\[F(x) = \begin{cases} 0 & \text{for } x < a \\ \frac{x-a}{b-a} & \text{for } a \leq x \leq b \\ 1 & \text{for } x > b \end{cases}\]

Mean (Expected Value):

\[\mu = E(X) = \frac{a + b}{2}\]

Variance:

\[\sigma^2 = \frac{(b-a)^2}{12}\]

Standard Deviation:

\[\sigma = \frac{b-a}{\sqrt{12}} = \frac{b-a}{2\sqrt{3}}\]

Normal Distribution (Gaussian Distribution)

Probability Density Function:

\[f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\]
  • μ = mean (location parameter)
  • σ = standard deviation (scale parameter), σ > 0
  • σ² = variance
  • Domain: -∞ < x < ∞
  • Notation: X ~ N(μ, σ²)

Properties of Normal Distribution:

  • Symmetric about the mean μ
  • Bell-shaped curve
  • Mean = Median = Mode = μ
  • Inflection points at μ ± σ
  • Area under curve = 1

Empirical Rule (68-95-99.7 Rule):

  • P(μ - σ ≤ X ≤ μ + σ) ≈ 0.6827 (68.27%)
  • P(μ - 2σ ≤ X ≤ μ + 2σ) ≈ 0.9545 (95.45%)
  • P(μ - 3σ ≤ X ≤ μ + 3σ) ≈ 0.9973 (99.73%)

Standard Normal Distribution

Standard Normal Variable:

\[Z = \frac{X - \mu}{\sigma}\]
  • Z = standard normal variable
  • X = normal random variable with mean μ and standard deviation σ
  • Z ~ N(0, 1) (mean = 0, variance = 1)

Probability Density Function:

\[f(z) = \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}}\]

Cumulative Distribution Function:

\[F(z) = \Phi(z) = P(Z \leq z) = \int_{-\infty}^z \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} \, dt\]
  • Φ(z) = standard normal cumulative distribution function
  • Values obtained from standard normal table (Z-table)

Symmetry Property:

\[\Phi(-z) = 1 - \Phi(z)\] \[P(Z > z) = 1 - \Phi(z)\]

Common Z-values:

  • Φ(1.645) = 0.95 → 90% confidence (two-tail) or 95% (one-tail)
  • Φ(1.96) = 0.975 → 95% confidence (two-tail) or 97.5% (one-tail)
  • Φ(2.33) = 0.99 → 98% confidence (two-tail) or 99% (one-tail)
  • Φ(2.576) = 0.995 → 99% confidence (two-tail) or 99.5% (one-tail)
  • Φ(3.09) = 0.999 → 99.8% confidence (two-tail)

Probability Calculation for Normal Distribution:

\[P(a \leq X \leq b) = \Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right)\]
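
In code, Φ(z) can be written with the standard-library error function, Φ(z) = (1 + erf(z/√2))/2, avoiding a Z-table lookup. The sketch below (parameters μ = 100, σ = 15 and the interval [85, 130] are illustrative) applies the formula above and also reproduces the empirical rule:

```python
from math import erf, sqrt

# Standard normal CDF via the error function
def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 100.0, 15.0        # illustrative normal parameters
a, b = 85.0, 130.0
prob = phi((b - mu) / sigma) - phi((a - mu) / sigma)   # P(85 <= X <= 130)

# The same phi reproduces the 68-95-99.7 rule
rule = [phi(k) - phi(-k) for k in (1, 2, 3)]
```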

Exponential Distribution

Probability Density Function:

\[f(x) = \lambda e^{-\lambda x} \quad \text{for } x \geq 0\]
  • λ = rate parameter (λ > 0)
  • x = time until event occurs (x ≥ 0)
  • Used to model time between events in a Poisson process
  • Notation: X ~ Exp(λ)

Cumulative Distribution Function:

\[F(x) = P(X \leq x) = 1 - e^{-\lambda x} \quad \text{for } x \geq 0\]

Survival Function (Reliability Function):

\[P(X > x) = e^{-\lambda x}\]

Mean (Expected Value):

\[\mu = E(X) = \frac{1}{\lambda}\]

Variance:

\[\sigma^2 = \frac{1}{\lambda^2}\]

Standard Deviation:

\[\sigma = \frac{1}{\lambda}\]

Memoryless Property:

\[P(X > s + t \mid X > s) = P(X > t)\]
  • Future probability independent of past history

Median:

\[\text{Median} = \frac{\ln(2)}{\lambda} \approx \frac{0.693}{\lambda}\]
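
The exponential formulas above fit in a few lines of code. This sketch (λ = 0.5 and the times s = 2, t = 3 are illustrative) checks that the CDF at the median is exactly 1/2 and demonstrates the memoryless property:

```python
from math import exp, log

lam = 0.5                               # illustrative rate

cdf = lambda x: 1 - exp(-lam * x)       # F(x)
surv = lambda x: exp(-lam * x)          # P(X > x)
median = log(2) / lam                   # ln(2)/lambda

half = cdf(median)                      # CDF at the median is 1/2

# Memoryless property: P(X > s + t | X > s) equals P(X > t)
s, t = 2.0, 3.0
cond = surv(s + t) / surv(s)
```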

Log-Normal Distribution

Definition:
If \(\ln(X)\) is normally distributed with mean μ and variance σ², then X follows a log-normal distribution.

Probability Density Function:

\[f(x) = \frac{1}{x\sigma\sqrt{2\pi}} e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}} \quad \text{for } x > 0\]
  • μ = mean of ln(X)
  • σ = standard deviation of ln(X)
  • x > 0 (always positive)

Mean (Expected Value):

\[E(X) = e^{\mu + \frac{\sigma^2}{2}}\]

Variance:

\[\text{Var}(X) = e^{2\mu + \sigma^2}(e^{\sigma^2} - 1)\]

Median:

\[\text{Median} = e^{\mu}\]

Mode:

\[\text{Mode} = e^{\mu - \sigma^2}\]

Conversion to Standard Normal:

\[Z = \frac{\ln(X) - \mu}{\sigma}\]
  • Z follows standard normal distribution
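
A Monte Carlo sanity check of the log-normal mean and median formulas (not from the sheet; the parameters μ = 0.5, σ = 0.4 of ln(X) are illustrative, and the run is seeded for reproducibility):

```python
import random
from math import exp

random.seed(1)                 # seeded for reproducibility
mu, sigma = 0.5, 0.4           # parameters of ln(X), illustrative

# X = e^Z with Z ~ N(mu, sigma^2)
xs = [exp(random.gauss(mu, sigma)) for _ in range(200_000)]
sample_mean = sum(xs) / len(xs)
sample_median = sorted(xs)[len(xs) // 2]

mean_formula = exp(mu + sigma**2 / 2)    # E(X) = e^(mu + sigma^2/2)
median_formula = exp(mu)                 # Median = e^mu
```

Note that the mean exceeds the median, as expected for a right-skewed distribution.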

Weibull Distribution

Probability Density Function:

\[f(x) = \frac{\beta}{\alpha}\left(\frac{x}{\alpha}\right)^{\beta-1} e^{-(x/\alpha)^\beta} \quad \text{for } x \geq 0\]
  • α = scale parameter (α > 0)
  • β = shape parameter (β > 0)
  • Used extensively in reliability and failure analysis

Cumulative Distribution Function:

\[F(x) = 1 - e^{-(x/\alpha)^\beta} \quad \text{for } x \geq 0\]

Reliability Function:

\[R(x) = P(X > x) = e^{-(x/\alpha)^\beta}\]

Hazard Function:

\[h(x) = \frac{\beta}{\alpha}\left(\frac{x}{\alpha}\right)^{\beta-1}\]

Mean (Expected Value):

\[\mu = E(X) = \alpha \Gamma\left(1 + \frac{1}{\beta}\right)\]
  • Γ = gamma function

Variance:

\[\sigma^2 = \alpha^2 \left[\Gamma\left(1 + \frac{2}{\beta}\right) - \left[\Gamma\left(1 + \frac{1}{\beta}\right)\right]^2\right]\]

Special Cases:

  • β = 1: Exponential distribution with λ = 1/α
  • β = 2: Rayleigh distribution
  • β < 1: Decreasing failure rate (infant mortality)
  • β = 1: Constant failure rate (random failures)
  • β > 1: Increasing failure rate (wear-out failures)
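
The Weibull mean formula αΓ(1 + 1/β) can be checked against the special cases above using `math.gamma` (α = 3.0 is an arbitrary illustrative scale):

```python
from math import gamma

alpha = 3.0   # illustrative scale parameter

def weibull_mean(beta):
    return alpha * gamma(1 + 1 / beta)

# beta = 1 reduces to the exponential with lambda = 1/alpha, so mean = alpha
m_exponential = weibull_mean(1.0)
# beta = 2 is the Rayleigh case; Gamma(1.5) = sqrt(pi)/2
m_rayleigh = weibull_mean(2.0)
```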

Special Distribution Relationships

Normal Approximation to Binomial

Conditions:
When n is large and p is not close to 0 or 1, binomial distribution can be approximated by normal distribution.

  • Rule of thumb: np ≥ 5 and n(1-p) ≥ 5
  • Alternative rule: np > 10 and n(1-p) > 10

Approximation Parameters:

\[\mu = np\] \[\sigma = \sqrt{np(1-p)}\]

Continuity Correction:
For P(X = k), use:

\[P(k - 0.5 \leq X \leq k + 0.5)\]

For P(X ≤ k), use:

\[P(X \leq k + 0.5)\]

For P(X ≥ k), use:

\[P(X \geq k - 0.5)\]

For P(X < k), use:

\[P(X \leq k - 0.5)\]

For P(X > k), use:

\[P(X \geq k + 0.5)\]
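
The effect of the continuity correction can be seen numerically. This sketch (Binomial with n = 40, p = 0.5 and the cutoff k = 22 are illustrative values) compares the exact P(X ≤ 22) with the normal approximation, with and without the correction:

```python
from math import comb, erf, sqrt

n, p = 40, 0.5                            # illustrative binomial
mu, sigma = n * p, sqrt(n * p * (1 - p))  # np and sqrt(np(1-p))

def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

k = 22
exact = sum(comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(k + 1))
approx_cc = phi((k + 0.5 - mu) / sigma)   # with continuity correction
approx_raw = phi((k - mu) / sigma)        # without correction
```

The corrected value lands much closer to the exact binomial probability.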

Normal Approximation to Poisson

Conditions:
When λ is large (typically λ ≥ 10 or λ ≥ 20), Poisson can be approximated by normal.

Approximation Parameters:

\[\mu = \lambda\] \[\sigma = \sqrt{\lambda}\]

Continuity Correction:
Apply similar continuity corrections as binomial approximation.

Linear Combinations of Random Variables

Expected Value Properties

Linear Transformation:

\[E(aX + b) = aE(X) + b\]
  • a, b = constants

Sum of Random Variables:

\[E(X + Y) = E(X) + E(Y)\]
  • Always true, regardless of independence

Linear Combination:

\[E(aX + bY) = aE(X) + bE(Y)\]

Product of Independent Variables:

\[E(XY) = E(X) \cdot E(Y) \quad \text{if X and Y are independent}\]

Variance Properties

Linear Transformation:

\[\text{Var}(aX + b) = a^2 \text{Var}(X)\]
  • Adding a constant does not change variance
  • Multiplying by constant scales variance by the square

Sum of Independent Random Variables:

\[\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) \quad \text{if X and Y are independent}\]

Difference of Independent Random Variables:

\[\text{Var}(X - Y) = \text{Var}(X) + \text{Var}(Y) \quad \text{if X and Y are independent}\]

Linear Combination of Independent Variables:

\[\text{Var}(aX + bY) = a^2 \text{Var}(X) + b^2 \text{Var}(Y) \quad \text{if X and Y are independent}\]

General Linear Combination:

\[\text{Var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2 \text{Var}(X_i) \quad \text{if } X_i \text{ are independent}\]

Sum of Normal Random Variables

If X ~ N(μ1, σ1²) and Y ~ N(μ2, σ2²) are independent:

\[X + Y \sim N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)\]

Linear Combination of Normal Variables:

\[aX + bY \sim N(a\mu_1 + b\mu_2, a^2\sigma_1^2 + b^2\sigma_2^2)\]

Central Limit Theorem

Central Limit Theorem (CLT)

Statement:
For a sequence of independent and identically distributed random variables X1, X2, ..., Xn with mean μ and variance σ², the sample mean approaches a normal distribution as n increases.

Sample Mean:

\[\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i\]

Distribution of Sample Mean:

\[E(\bar{X}) = \mu\] \[\text{Var}(\bar{X}) = \frac{\sigma^2}{n}\] \[\text{Standard Error} = \sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}\]

Standardized Form:

\[Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1) \quad \text{as } n \to \infty\]

Sample Sum:

\[S = \sum_{i=1}^n X_i\] \[E(S) = n\mu\] \[\text{Var}(S) = n\sigma^2\] \[\sigma_S = \sqrt{n}\sigma\]

Standardized Sum:

\[Z = \frac{S - n\mu}{\sqrt{n}\sigma} = \frac{\sum X_i - n\mu}{\sqrt{n}\sigma} \sim N(0, 1) \quad \text{as } n \to \infty\]

Rule of Thumb:

  • n ≥ 30 is generally sufficient for CLT to apply
  • For highly skewed distributions, larger n may be needed
  • For nearly normal distributions, smaller n is acceptable
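
A seeded Monte Carlo illustration of the CLT (not from the sheet): means of n = 30 draws from a strongly skewed Exp(1) population, for which μ = σ = 1, cluster around μ with spread σ/√n:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(7)                         # reproducible illustration
n, reps = 30, 20_000                   # n = 30 per the rule of thumb

# Population: Exp(1), strongly skewed; mu = sigma = 1
xbars = [mean(random.expovariate(1.0) for _ in range(n)) for _ in range(reps)]

mean_of_means = mean(xbars)            # ~ mu = 1
se_observed = stdev(xbars)             # ~ sigma / sqrt(n)
se_theory = 1.0 / sqrt(n)
```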

Sampling Distributions

Distribution of Sample Proportion

Sample Proportion:

\[\hat{p} = \frac{X}{n}\]
  • X = number of successes in sample
  • n = sample size
  • p = population proportion

Mean of Sample Proportion:

\[E(\hat{p}) = p\]

Variance of Sample Proportion:

\[\text{Var}(\hat{p}) = \frac{p(1-p)}{n}\]

Standard Error of Sample Proportion:

\[\sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}}\]

Approximate Normal Distribution (when np ≥ 5 and n(1-p) ≥ 5):

\[Z = \frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}} \sim N(0, 1)\]

Difference Between Two Sample Means

For independent samples from two populations:

\[\bar{X}_1 - \bar{X}_2\]

Mean of Difference:

\[E(\bar{X}_1 - \bar{X}_2) = \mu_1 - \mu_2\]

Variance of Difference (independent samples):

\[\text{Var}(\bar{X}_1 - \bar{X}_2) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}\]

Standard Error:

\[\sigma_{\bar{X}_1 - \bar{X}_2} = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}\]

Standardized Form (for normal populations or large samples):

\[Z = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \sim N(0, 1)\]

Difference Between Two Sample Proportions

For independent samples:

\[\hat{p}_1 - \hat{p}_2\]

Mean of Difference:

\[E(\hat{p}_1 - \hat{p}_2) = p_1 - p_2\]

Variance of Difference:

\[\text{Var}(\hat{p}_1 - \hat{p}_2) = \frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}\]

Standard Error:

\[\sigma_{\hat{p}_1 - \hat{p}_2} = \sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}\]

Standardized Form (when sample sizes satisfy conditions):

\[Z = \frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}} \sim N(0, 1)\]

Moment Generating Functions

Definition and Properties

Moment Generating Function (MGF):

\[M_X(t) = E(e^{tX})\]

For Discrete Distribution:

\[M_X(t) = \sum_{\text{all } x} e^{tx} p(x)\]

For Continuous Distribution:

\[M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x) \, dx\]

n-th Moment:

\[E(X^n) = M_X^{(n)}(0) = \left.\frac{d^n M_X(t)}{dt^n}\right|_{t=0}\]

First Moment (Mean):

\[E(X) = M_X'(0)\]

Second Moment:

\[E(X^2) = M_X''(0)\]

Variance from MGF:

\[\text{Var}(X) = M_X''(0) - [M_X'(0)]^2\]

MGF of Linear Transformation:

\[M_{aX+b}(t) = e^{bt} M_X(at)\]

MGF of Sum of Independent Variables:

\[M_{X+Y}(t) = M_X(t) \cdot M_Y(t) \quad \text{if X and Y are independent}\]

Common MGFs

Binomial Distribution:

\[M_X(t) = [(1-p) + pe^t]^n\]

Poisson Distribution:

\[M_X(t) = e^{\lambda(e^t - 1)}\]

Normal Distribution:

\[M_X(t) = e^{\mu t + \frac{\sigma^2 t^2}{2}}\]

Exponential Distribution:

\[M_X(t) = \frac{\lambda}{\lambda - t} \quad \text{for } t < \lambda\]

Uniform Distribution U(a,b):

\[M_X(t) = \frac{e^{tb} - e^{ta}}{t(b-a)} \quad \text{for } t \neq 0\]
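
Moments really can be read off an MGF by differentiating at t = 0. This sketch approximates the derivatives of the exponential MGF λ/(λ − t) with central finite differences (λ = 2.0 is illustrative), recovering E(X) = 1/λ and Var(X) = 1/λ²:

```python
lam = 2.0                          # illustrative exponential rate
M = lambda t: lam / (lam - t)      # MGF, valid for t < lam

h = 1e-4                           # finite-difference step
m1 = (M(h) - M(-h)) / (2 * h)             # ~ M'(0)  = E(X)   = 1/lam = 0.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2     # ~ M''(0) = E(X^2) = 2/lam^2 = 0.5
var = m2 - m1**2                          # ~ 1/lam^2 = 0.25
```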

Additional Distribution Characteristics

Percentiles and Quantiles

p-th Percentile (Quantile):

\[P(X \leq x_p) = p\]
  • xp = value such that a fraction p of the distribution lies below it
  • 0 < p < 1

Median:

\[P(X \leq \text{median}) = 0.5\]
  • 50th percentile

Quartiles:

  • Q1 = 25th percentile (first quartile)
  • Q2 = 50th percentile (median)
  • Q3 = 75th percentile (third quartile)

Interquartile Range (IQR):

\[\text{IQR} = Q_3 - Q_1\]

Skewness and Kurtosis

Coefficient of Skewness:

\[\gamma_1 = E\left[\left(\frac{X-\mu}{\sigma}\right)^3\right] = \frac{E[(X-\mu)^3]}{\sigma^3}\]
  • γ1 = 0: symmetric distribution
  • γ1 > 0: right-skewed (positive skew)
  • γ1 < 0: left-skewed (negative skew)

Coefficient of Kurtosis:

\[\gamma_2 = E\left[\left(\frac{X-\mu}{\sigma}\right)^4\right] - 3 = \frac{E[(X-\mu)^4]}{\sigma^4} - 3\]
  • γ2 = 0: mesokurtic (normal-like peakedness)
  • γ2 > 0: leptokurtic (heavy-tailed, peaked)
  • γ2 < 0: platykurtic (light-tailed, flat)
  • Subtracting 3 makes the excess kurtosis of the normal distribution equal to zero

Coefficient of Variation

Coefficient of Variation (CV):

\[\text{CV} = \frac{\sigma}{\mu} \times 100\%\]
  • Dimensionless measure of relative variability
  • Expressed as percentage
  • Useful for comparing variability across different units or scales
  • Only meaningful when μ > 0

Reliability and Failure Analysis

Reliability Functions

Reliability Function:

\[R(t) = P(X > t) = 1 - F(t)\]
  • R(t) = probability of survival beyond time t
  • F(t) = cumulative distribution function
  • Also called survival function

Hazard Rate (Failure Rate):

\[h(t) = \frac{f(t)}{R(t)} = \frac{f(t)}{1-F(t)}\]
  • h(t) = instantaneous failure rate at time t
  • f(t) = probability density function

Cumulative Hazard Function:

\[H(t) = \int_0^t h(u) \, du = -\ln[R(t)]\]

Relationship Between Reliability and Hazard:

\[R(t) = e^{-H(t)} = e^{-\int_0^t h(u) \, du}\]

Mean Time to Failure (MTTF):

\[\text{MTTF} = E(X) = \int_0^{\infty} R(t) \, dt\]

Common Failure Distributions

Exponential Distribution (Constant Failure Rate):

\[h(t) = \lambda \quad \text{(constant)}\] \[R(t) = e^{-\lambda t}\] \[\text{MTTF} = \frac{1}{\lambda}\]
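
The MTTF integral above can be approximated directly. This sketch (λ = 0.5 illustrative) applies a simple trapezoidal rule to R(t) = e^(−λt), truncating where R(t) is negligible, and recovers MTTF = 1/λ = 2:

```python
from math import exp

lam = 0.5                       # illustrative failure rate
R = lambda t: exp(-lam * t)     # reliability function

# Trapezoidal approximation of MTTF = integral of R(t) dt over [0, 60];
# R(60) = e^-30 is negligible, so the truncation error is tiny
dt, steps = 0.001, 60_000
mttf = sum((R(i * dt) + R((i + 1) * dt)) / 2 * dt for i in range(steps))
```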

Weibull Distribution (Variable Failure Rate):

\[h(t) = \frac{\beta}{\alpha}\left(\frac{t}{\alpha}\right)^{\beta-1}\]
  • β < 1: decreasing failure rate
  • β = 1: constant failure rate (exponential)
  • β > 1: increasing failure rate

Conditional Probability and Distributions

Conditional Expectation

Conditional Expected Value:

\[E(X \mid Y = y) = \sum_{\text{all } x} x \cdot P(X = x \mid Y = y) \quad \text{(discrete)}\] \[E(X \mid Y = y) = \int_{-\infty}^{\infty} x \cdot f_{X|Y}(x|y) \, dx \quad \text{(continuous)}\]

Law of Total Expectation:

\[E(X) = E[E(X \mid Y)]\]

Conditional Variance:

\[\text{Var}(X \mid Y = y) = E(X^2 \mid Y = y) - [E(X \mid Y = y)]^2\]

Law of Total Variance:

\[\text{Var}(X) = E[\text{Var}(X \mid Y)] + \text{Var}[E(X \mid Y)]\]
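
A seeded Monte Carlo check of the law of total variance (not from the sheet; the two-component mixture below is an illustrative model): Y ~ Bernoulli(0.5) selects a component, then X | Y=0 ~ N(0, 1) and X | Y=1 ~ N(3, 4), so Var(X) should equal E[Var(X|Y)] + Var[E(X|Y)] = 2.5 + 2.25 = 4.75.

```python
import random
from statistics import pvariance

random.seed(42)   # reproducible illustration

def draw_x():
    # Two-stage model: Y ~ Bernoulli(0.5) selects the component,
    # then X | Y=0 ~ N(0, 1) and X | Y=1 ~ N(3, 4)
    if random.random() < 0.5:
        return random.gauss(0.0, 1.0)
    return random.gauss(3.0, 2.0)

xs = [draw_x() for _ in range(100_000)]
total_var = pvariance(xs)                 # Var(X), by simulation

e_var = 0.5 * 1.0 + 0.5 * 4.0             # E[Var(X | Y)] = 2.5
var_e = 0.5 * 1.5**2 + 0.5 * 1.5**2       # Var[E(X | Y)] = 2.25
```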
The document Formula Sheet: Probability Distributions is a part of the PE Exam Course Engineering Fundamentals Revision for PE.