
# NCERT Textbook- Probability Class 10 Notes | EduRev


```
Page 1

PROBABILITY 295
15
The theory of probabilities and the theory of errors now constitute
a formidable body of great mathematical interest and of great
practical importance.
– R.S. Woodward
15.1 Introduction
In Class IX, you have studied about experimental (or empirical) probabilities of events
which were based on the results of actual experiments. We discussed an experiment
of tossing a coin 1000 times in which the frequencies of the outcomes were as follows:
Head : 455 Tail : 545
Based on this experiment, the empirical probability of a head is 455/1000, i.e., 0.455 and
that of getting a tail is 0.545. (Also see Example 1, Chapter 15 of Class IX Mathematics
Textbook.) Note that these probabilities are based on the results of an actual experiment
of tossing a coin 1000 times. For this reason, they are called experimental or empirical
probabilities. In fact, experimental probabilities are based on the results of actual
experiments and adequate recordings of the happening of the events. Moreover,
these probabilities are only ‘estimates’. If we perform the same experiment for another
1000 times, we may get different data giving different probability estimates.
In Class IX, you tossed a coin many times and noted the number of times it turned up
heads (or tails) (refer to Activities 1 and 2 of Chapter 15). You also noted that as the
number of tosses of the coin increased, the experimental probability of getting a head
(or tail) came closer and closer to the number 1/2. Not only you, but many other
2020-21
Page 2

persons from different parts of the world have done this kind of experiment and recorded
the number of heads that turned up.
For example, the eighteenth century French naturalist Comte de Buffon tossed a
coin 4040 times and got 2048 heads. The experimental probability of getting a head,
in this case, was 2048/4040, i.e., 0.507. J.E. Kerrich, from Britain, recorded 5067 heads in
10000 tosses of a coin. The experimental probability of getting a head, in this case,
was 5067/10000 = 0.5067. Statistician Karl Pearson spent some more time, making 24000
tosses of a coin. He got 12012 heads, and thus, the experimental probability of a head
obtained by him was 0.5005.
Now, suppose we ask, ‘What will the experimental probability of a head be if the
experiment is carried on upto, say, one million times? Or 10 million times? And so on?’
You would intuitively feel that as the number of tosses increases, the experimental
probability of a head (or a tail) seems to be settling down around the number 0.5, i.e.,
1/2, which is what we call the theoretical probability of getting a head (or getting a
tail), as you will see in the next section. In this chapter, we provide an introduction to
the theoretical (also called classical) probability of an event, and discuss simple problems
based on this concept.
15.2 Probability — A Theoretical Approach
Let us consider the following situation:
Suppose a coin is tossed at random.
When we speak of a coin, we assume it to be ‘fair’, that is, it is symmetrical so
that there is no reason for it to come down more often on one side than the other.
We call this property of the coin ‘unbiased’. By the phrase ‘random toss’,
we mean that the coin is allowed to fall freely without any bias or interference.
We know, in advance, that the coin can only land in one of two possible ways —
either head up or tail up (we dismiss the possibility of its ‘landing’ on its edge, which
may be possible, for example, if it falls on sand). We can reasonably assume that each
outcome, head or tail, is as likely to occur as the other. We refer to this by saying that
the outcomes head and tail are equally likely.
Page 3

For another example of equally likely outcomes, suppose we throw a die
once. For us, a die will always mean a fair die. What are the possible outcomes?
They are 1, 2, 3, 4, 5, 6. Each number has the same possibility of showing up. So
the equally likely outcomes of throwing a die are 1, 2, 3, 4, 5 and 6.
Are the outcomes of every experiment equally likely? Let us see.
Suppose that a bag contains 4 red balls and 1 blue ball, and you draw a ball
without looking into the bag. What are the outcomes? Are the outcomes — a red ball
and a blue ball equally likely? Since there are 4 red balls and only one blue ball, you
would agree that you are more likely to get a red ball than a blue ball. So, the outcomes
(a red ball or a blue ball) are not equally likely. However, the outcome of drawing a
ball of any colour from the bag is equally likely. So, all experiments do not necessarily
have equally likely outcomes.
However, in this chapter, from now on, we will assume that all the experiments
have equally likely outcomes.
In Class IX, we defined the experimental or empirical probability P(E) of an
event E as
P(E) = (Number of trials in which the event happened) / (Total number of trials)
The empirical interpretation of probability can be applied to every event associated
with an experiment which can be repeated a large number of times. The requirement
of repeating an experiment has some limitations, as it may be very expensive or
unfeasible in many situations. Of course, it worked well in coin tossing or die throwing
experiments. But how about repeating the experiment of launching a satellite in order
to compute the empirical probability of its failure during launching, or the repetition of
the phenomenon of an earthquake to compute the empirical probability of a multi-
storeyed building getting destroyed in an earthquake?
In experiments where we are prepared to make certain assumptions, the repetition
of an experiment can be avoided, as the assumptions help in directly calculating the
exact (theoretical) probability. The assumption of equally likely outcomes (which is
valid in many experiments, as in the two examples above, of a coin and of a die) is one
such assumption that leads us to the following definition of probability of an event.
The theoretical probability (also called classical probability) of an event E,
written as P(E), is defined as
P(E) = (Number of outcomes favourable to E) / (Number of all possible outcomes of the experiment),
Page 4

where we assume that the outcomes of the experiment are equally likely.
We will briefly refer to theoretical probability as probability.
This definition of probability was given by Pierre Simon Laplace in 1795.
Probability theory had its origin in the 16th century, when
the Italian physician and mathematician J. Cardan wrote the
first book on the subject, The Book on Games of Chance.
Since its inception, the study of probability has attracted
the attention of great mathematicians. James Bernoulli
(1654 – 1705), A. de Moivre (1667 – 1754), and
Pierre Simon Laplace are among those who made significant
contributions to this field. Laplace’s Theorie Analytique
des Probabilités, 1812, is considered to be the greatest
contribution by a single person to the theory of probability.
In recent years, probability has been used extensively in
many areas such as biology, economics, genetics, physics,
sociology etc.
Let us find the probability for some of the events associated with experiments
where the equally likely assumption holds.
Example 1 : Find the probability of getting a head when a coin is tossed once. Also
find the probability of getting a tail.
Solution : In the experiment of tossing a coin once, the number of possible outcomes
is two — Head (H) and Tail (T). Let E be the event ‘getting a head’. The number of
outcomes favourable to E, (i.e., of getting a head) is 1. Therefore,
P(E) = P(head) = (Number of outcomes favourable to E) / (Number of all possible outcomes) = 1/2
Similarly, if F is the event ‘getting a tail’, then
P(F) = P(tail) = 1/2 (Why?)
Example 2 : A bag contains a red ball, a blue ball and a yellow ball, all the balls being
of the same size. Kritika takes out a ball from the bag without looking into it. What is
the probability that she takes out the
(i) yellow ball? (ii) red ball? (iii) blue ball?
[Portrait: Pierre Simon Laplace (1749 – 1827)]
Page 5

Solution : Kritika takes out a ball from the bag without looking into it. So, it is equally
likely that she takes out any one of them.
Let Y be the event ‘the ball taken out is yellow’, B be the event ‘the ball taken
out is blue’, and R be the event ‘the ball taken out is red’.
Now, the number of possible outcomes = 3.
(i) The number of outcomes favourable to the event Y = 1.
So, P(Y) = 1/3
Similarly, (ii) P(R) = 1/3 and (iii) P(B) = 1/3.
Remarks :
1. An event having only one outcome of the experiment is called an elementary
event. In Example 1, both the events E and F are elementary events. Similarly, in
Example 2, all the three events, Y, B and R, are elementary events.
2. In Example 1, we note that : P(E) + P(F) = 1
In Example 2, we note that : P(Y) + P(R) + P(B) = 1
Observe that the sum of the probabilities of all the elementary events of
an experiment is 1. This is true in general also.
Example 3 : Suppose we throw a die once. (i) What is the probability of getting a
number greater than 4 ? (ii) What is the probability of getting a number less than or
equal to 4 ?
Solution : (i) Here, let E be the event ‘getting a number greater than 4’. The number
of possible outcomes is six : 1, 2, 3, 4, 5 and 6, and the outcomes favourable to E are 5
and 6. Therefore, the number of outcomes favourable to E is 2. So,
P(E) = P(number greater than 4) = 2/6 = 1/3
(ii) Let F be the event ‘getting a number less than or equal to 4’.
Number of possible outcomes = 6
Outcomes favourable to the event F are 1, 2, 3, 4.
So, the number of outcomes favourable to F is 4.
Therefore, P(F) = 4/6 = 2/3
```
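The convergence described in Section 15.1 — the empirical probability of a head settling toward 1/2 as the number of tosses grows — can be sketched with a short simulation. This is an illustrative sketch, not part of the NCERT text; the trial counts echo Buffon's, Kerrich's and Pearson's experiments, and the function name is my own.

```python
import random

def empirical_head_probability(tosses, seed=0):
    """Toss a simulated fair coin `tosses` times; return the fraction of heads."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(tosses))
    return heads / tosses

# Like Buffon (4040), Kerrich (10000) and Pearson (24000), but pushed further:
for n in (4040, 10_000, 24_000, 1_000_000):
    p = empirical_head_probability(n)
    print(f"{n:>9} tosses: empirical P(head) = {p:.4f}")
```

Each run gives only an 'estimate', exactly as the chapter says: a different seed gives slightly different fractions, but all of them crowd around 0.5 as the number of tosses increases.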
Offer running on EduRev: Apply code STAYHOME200 to get INR 200 off on our premium plan EduRev Infinity!
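Section 15.2 recalls the Class IX empirical definition, P(E) = (number of trials in which the event happened) / (total number of trials). As a minimal sketch of that ratio (the function name is mine, not the textbook's):

```python
def empirical_probability(favourable_trials, total_trials):
    """Empirical P(E) = trials in which E happened / total number of trials."""
    if total_trials <= 0:
        raise ValueError("total_trials must be positive")
    return favourable_trials / total_trials

# The Class IX data quoted in the chapter: 455 heads in 1000 tosses.
print(empirical_probability(455, 1000))    # → 0.455
# Kerrich's record: 5067 heads in 10000 tosses.
print(empirical_probability(5067, 10000))  # → 0.5067
```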

## Mathematics (Maths) Class 10

62 videos|362 docs|103 tests
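The theoretical (classical) definition — favourable outcomes over all equally likely outcomes — and Examples 1 and 3 can be checked with a small helper. This is an illustrative sketch; `Fraction` is used so the answers stay exact, matching the textbook's 1/2, 1/3 and 2/3.

```python
from fractions import Fraction

def theoretical_probability(favourable, sample_space):
    """P(E) = (outcomes favourable to E) / (all equally likely outcomes)."""
    return Fraction(len(favourable), len(sample_space))

die = [1, 2, 3, 4, 5, 6]  # the six equally likely outcomes of a fair die

# Example 1: one toss of a fair coin.
print(theoretical_probability(["H"], ["H", "T"]))                # → 1/2
# Example 3(i): a number greater than 4 (favourable outcomes: 5 and 6).
print(theoretical_probability([n for n in die if n > 4], die))   # → 1/3
# Example 3(ii): a number less than or equal to 4 (favourable: 1, 2, 3, 4).
print(theoretical_probability([n for n in die if n <= 4], die))  # → 2/3
```

Note that the two probabilities in Example 3 add to 1, in line with the remark that the probabilities of all elementary events of an experiment sum to 1.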
