PROBABILITY
1. Classical (A priori) Definition of Probability :
If an experiment results in a total of (m + n) outcomes which are equally
likely and mutually exclusive with one another and if ‘m’ outcomes are
favorable to an event ‘A’ while ‘n’ are unfavorable, then the probability of
occurrence of the event 'A' is P(A) = m/(m + n) = n(A)/n(S).
We say that odds in favour of ‘A’ are m : n, while odds against ‘A’ are n : m.
P(A′) = n/(m + n) = 1 – P(A)
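As a quick check of the definition, here is a minimal sketch in Python using exact fractions; the die example (A = "an even number shows") is illustrative, not from the original sheet:

```python
from fractions import Fraction

def classical_probability(favorable, unfavorable):
    """P(A) = m / (m + n) for m favorable and n unfavorable
    equally likely, mutually exclusive outcomes."""
    m, n = favorable, unfavorable
    return Fraction(m, m + n)

# Rolling a fair die, let A = "an even number shows".
# m = 3 favorable outcomes {2, 4, 6}, n = 3 unfavorable {1, 3, 5}.
p_A = classical_probability(3, 3)
print(p_A)        # 1/2
print(1 - p_A)    # P(A') = n/(m + n) = 1/2
# Odds in favour of A are m : n = 3 : 3, i.e. 1 : 1.
```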
2. Addition theorem of probability : P(A ∪ B) = P(A) + P(B) – P(A ∩ B)
De Morgan’s Laws :
(a) (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ (b) (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ
Distributive Laws :
(a) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) (b) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
(i) P(A or B or C) = P(A) + P(B) + P(C) – P(A ∩ B) – P(B ∩ C) – P(C ∩ A) + P(A ∩ B ∩ C)
(ii) P(at least two of A, B, C occur) = P(B ∩ C) + P(C ∩ A) + P(A ∩ B) – 2P(A ∩ B ∩ C)
(iii) P(exactly two of A, B, C occur) = P(B ∩ C) + P(C ∩ A) + P(A ∩ B) – 3P(A ∩ B ∩ C)
(iv) P(exactly one of A, B, C occur) = P(A) + P(B) + P(C) – 2P(B ∩ C) – 2P(C ∩ A) – 2P(A ∩ B) + 3P(A ∩ B ∩ C)
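These identities can be verified by brute-force enumeration over a small sample space. The two-dice events below are chosen for illustration only; any three events over a finite equally likely space would do:

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair dice, all 36 outcomes equally likely.
S = list(product(range(1, 7), repeat=2))
P = lambda E: Fraction(len(E), len(S))

A = {s for s in S if s[0] % 2 == 0}   # first die shows an even number
B = {s for s in S if sum(s) >= 9}     # total is at least 9
C = {s for s in S if s[1] == 6}       # second die shows 6

# (i) P(A or B or C) by inclusion-exclusion
lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(B & C) - P(C & A) + P(A & B & C))
print(lhs == rhs)   # True

# (ii) at least two of A, B, C occur
at_least_two = {s for s in S if sum(s in E for E in (A, B, C)) >= 2}
print(P(at_least_two)
      == P(B & C) + P(C & A) + P(A & B) - 2 * P(A & B & C))   # True
```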
3. Conditional Probability : P(A/B) = P(A ∩ B) / P(B).
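A small worked example of the conditional-probability formula, again on two fair dice (the specific events are illustrative assumptions, not from the original sheet):

```python
from itertools import product
from fractions import Fraction

# P(A/B) = P(A ∩ B) / P(B), illustrated on two fair dice.
S = list(product(range(1, 7), repeat=2))
P = lambda E: Fraction(len(E), len(S))

B = {s for s in S if sum(s) == 8}     # total is 8: five outcomes
A = {s for s in S if s[0] == s[1]}    # doubles

p_A_given_B = P(A & B) / P(B)
print(p_A_given_B)   # 1/5, since only (4, 4) is a double among the five
```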
4. Binomial Probability Theorem
If an experiment is such that the probability of success or failure does not
change with trials, then the probability of getting exactly r success in n
trials of an experiment is nCr · p^r · q^(n – r), where 'p' is the probability of a success
and q is the probability of a failure. Note that p + q = 1.
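The binomial formula translates directly into code. A sketch with exact fractions; the fair-coin example is an assumption for illustration:

```python
from math import comb
from fractions import Fraction

def binomial_prob(n, r, p):
    """Probability of exactly r successes in n independent trials:
    nCr * p**r * q**(n - r), with q = 1 - p."""
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

# Exactly 2 heads in 4 tosses of a fair coin:
# 4C2 * (1/2)^2 * (1/2)^2 = 6/16 = 3/8.
print(binomial_prob(4, 2, Fraction(1, 2)))   # 3/8
```

Note that the probabilities over all r from 0 to n sum to 1, as (p + q)^n = 1 requires.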
5. Expectation :
If a value M_i is associated with a probability of p_i, then the expectation is given by Σ p_i M_i.
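The expectation sum is one line of code. The fair-die example below is an illustrative assumption:

```python
from fractions import Fraction

def expectation(values, probs):
    """E = sum of p_i * M_i; the probabilities must sum to 1."""
    assert sum(probs) == 1
    return sum(p * m for p, m in zip(probs, values))

# Fair die: faces 1..6, each with probability 1/6; E = 21/6 = 7/2.
faces = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6
print(expectation(faces, probs))   # 7/2
```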
6. Total Probability Theorem : P(A) = Σ P(B_i) . P(A/B_i), the sum running over i = 1 to n.
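A sketch of the total-probability sum; the two-urn setup and its numbers are hypothetical, chosen only so the arithmetic is easy to follow:

```python
from fractions import Fraction

# Hypothetical two-urn setup: event A = "a red ball is drawn".
# Urn B1 chosen with prob 1/2, holds 3 red of 5 balls -> P(A/B1) = 3/5.
# Urn B2 chosen with prob 1/2, holds 1 red of 5 balls -> P(A/B2) = 1/5.
P_B = [Fraction(1, 2), Fraction(1, 2)]
P_A_given_B = [Fraction(3, 5), Fraction(1, 5)]

# Total probability: P(A) = sum of P(B_i) * P(A/B_i)
P_A = sum(pb * pa for pb, pa in zip(P_B, P_A_given_B))
print(P_A)   # 2/5
```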
7. Bayes’ Theorem :
If an event A can occur with one of the n mutually exclusive and exhaustive events B_1, B_2, ....., B_n and the probabilities P(A/B_1), P(A/B_2) .... P(A/B_n) are known, then

P(B_i / A) = P(B_i) . P(A/B_i) / Σ P(B_k) . P(A/B_k), the sum running over k = 1 to n.

Since B_1, B_2, B_3, ........, B_n are exhaustive,
A = (A ∩ B_1) ∪ (A ∩ B_2) ∪ (A ∩ B_3) ∪ ........ ∪ (A ∩ B_n)
P(A) = P(A ∩ B_1) + P(A ∩ B_2) + ....... + P(A ∩ B_n) = Σ P(A ∩ B_i)
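Bayes' theorem simply divides one term of the total-probability sum by the whole sum. A sketch reusing the same hypothetical two-urn numbers as above:

```python
from fractions import Fraction

def bayes(P_B, P_A_given_B, i):
    """P(B_i / A) = P(B_i) P(A/B_i) / sum of P(B_k) P(A/B_k)."""
    total = sum(pb * pa for pb, pa in zip(P_B, P_A_given_B))  # = P(A)
    return P_B[i] * P_A_given_B[i] / total

# Hypothetical two-urn setup: given that a red ball was drawn (event A),
# the chance it came from urn B1 is (1/2)(3/5) / (2/5) = 3/4.
P_B = [Fraction(1, 2), Fraction(1, 2)]
P_A_given_B = [Fraction(3, 5), Fraction(1, 5)]
print(bayes(P_B, P_A_given_B, 0))   # 3/4
```

The posterior probabilities over all B_i necessarily sum to 1.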
8. Binomial Probability Distribution :
(i) Mean of any probability distribution of a random variable is given by :
µ = Σ p_i x_i / Σ p_i = Σ p_i x_i = np (since Σ p_i = 1)
n = number of trials
p = probability of success in each trial
q = probability of failure
(ii) Variance of a random variable is given by,
σ² = Σ (x_i – µ)² . p_i = Σ p_i x_i² – µ² = npq
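The closed forms µ = np and σ² = npq can be confirmed by summing the distribution directly. A sketch with illustrative parameters n = 10, p = 3/10:

```python
from math import comb
from fractions import Fraction

n, p = 10, Fraction(3, 10)   # illustrative parameters
q = 1 - p

# Binomial distribution: x_i = r, p_i = nCr p^r q^(n - r)
xs = list(range(n + 1))
ps = [comb(n, r) * p**r * q**(n - r) for r in xs]

mu = sum(pi * xi for pi, xi in zip(ps, xs))           # Σ p_i x_i
var = sum(pi * xi**2 for pi, xi in zip(ps, xs)) - mu**2  # Σ p_i x_i² – µ²

print(mu == n * p)        # True: mean = np = 3
print(var == n * p * q)   # True: variance = npq = 21/10
```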