Multivariate Distributions
All of the most interesting problems in statistics involve looking at more than a single
measurement at a time, at relationships among measurements and comparisons between
them. In order to permit us to address such problems, indeed to even formulate them
properly, we will need to enlarge our mathematical structure to include multivariate
distributions, the probability distributions of pairs of random variables, triplets of
random variables, and so forth. We will begin with the simplest such situation, that of
pairs of random variables or bivariate distributions, where we will already encounter
most of the key ideas.
3.1 Discrete Bivariate Distributions.
If X and Y are two random variables defined on the same sample space S; that is, defined in reference to the same experiment, so that it is both meaningful and potentially interesting to consider how they may interact or affect one another, we will define their bivariate probability function by

p(x, y) = P(X = x and Y = y)   (3.1)

In a direct analogy to the case of a single random variable (the univariate case), p(x, y) may be thought of as describing the distribution of a unit mass in the (x, y) plane, with p(x, y) representing the mass assigned to the point (x, y), considered as a spike at (x, y) of height p(x, y). The total for all possible points must be one:

Σ_{all x} Σ_{all y} p(x, y) = 1   (3.2)
[Figure 3.1]
Example 3.A. Consider the experiment of tossing a fair coin three times, and then, independently of the first coin, tossing a second fair coin three times. Let

X = # Heads for the first coin
Y = # Tails for the second coin
Z = # Tails for the first coin.

The two coins are tossed independently, so for any pair of possible values (x, y) of X and Y we have, if {X = x} stands for the event "X = x",

p(x, y) = P(X = x and Y = y)
        = P({X = x} ∩ {Y = y})
        = P({X = x}) · P({Y = y})
        = p_X(x) · p_Y(y)
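The factorization in Example 3.A can be checked by brute force. The sketch below (not from the text; names are illustrative) enumerates the 64 equally likely pairs of three-toss sequences and verifies that p(x, y) = p_X(x) · p_Y(y) for every pair of values:

```python
# Enumerate the two independent coins of Example 3.A and tabulate
# p(x, y) = P(X = x and Y = y), then check the independence factorization.
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=3))   # one coin: 8 equally likely sequences

def p_joint(x, y):
    """p(x, y) for X = # Heads of coin 1, Y = # Tails of coin 2."""
    hits = sum(1 for c1, c2 in product(outcomes, outcomes)
               if c1.count("H") == x and c2.count("T") == y)
    return Fraction(hits, 64)

def p_X(x):
    return Fraction(sum(1 for c in outcomes if c.count("H") == x), 8)

def p_Y(y):
    return Fraction(sum(1 for c in outcomes if c.count("T") == y), 8)

# The factorization p(x, y) = p_X(x) * p_Y(y) holds for every pair,
# and the total mass is 1, as (3.2) requires.
assert all(p_joint(x, y) == p_X(x) * p_Y(y)
           for x in range(4) for y in range(4))
assert sum(p_joint(x, y) for x in range(4) for y in range(4)) == 1
```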
On the other hand, Z and X refer to the same coin, and so
p(x, z) = P(X = x and Z = z)
        = P({X = x} ∩ {Z = z})
        = P({X = x}) = p_X(x) if z = 3 − x
        = 0 otherwise.

This is because we must necessarily have X + Z = 3, which means {X = x} and {Z = 3 − x} describe the same event. If z ≠ 3 − x, then {X = x} and {Z = z} are mutually exclusive and the probability that both occur is zero. These bivariate distributions can be summarized in the form of tables, whose entries are p(x, y) and p(x, z) respectively:
Now, if we have specified a bivariate probability function such as p(x, y), we can always deduce the respective univariate distributions from it, by addition:

p_X(x) = Σ_{all y} p(x, y)   (3.3)

p_Y(y) = Σ_{all x} p(x, y)   (3.4)

The rationale for these formulae is that we can decompose the event {X = x} into a collection of smaller sets of outcomes. For example,

{X = x} = {X = x and Y = 0} ∪ {X = x and Y = 1} ∪ {X = x and Y = 2} ∪ ⋯,

where the values of y on the righthand side run through all possible values of Y. But then the events of the righthand side are mutually exclusive (Y cannot have two values at once), so the probability of the righthand side is the sum of the events' probabilities, or Σ_{all y} p(x, y), while the lefthand side has probability p_X(x). When we refer to these univariate distributions in a multivariate context, we shall call them the marginal probability functions of X and Y. This name comes from the fact that when the addition in (3.3) or (3.4)
              p(x, y)                               p(x, z)
             y                                     z
          0     1     2     3   p_X(x)          0     1     2     3   p_X(x)
 x = 0   1/64  3/64  3/64  1/64   1/8           0     0     0    1/8    1/8
 x = 1   3/64  9/64  9/64  3/64   3/8           0     0    3/8    0     3/8
 x = 2   3/64  9/64  9/64  3/64   3/8           0    3/8    0     0     3/8
 x = 3   1/64  3/64  3/64  1/64   1/8          1/8    0     0     0     1/8
 p_Y(y)   1/8   3/8   3/8   1/8              p_Z(z)  1/8   3/8   3/8   1/8
is performed upon a bivariate distribution p(x, y) written in tabular form, the results are most naturally written in the margins of the table.
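The row-and-column addition of (3.3) and (3.4) can be sketched in code. This example (illustrative, not from the text) stores the joint table of X and Z as a dictionary and recovers both marginals by summation:

```python
# Recover marginal probability functions from a joint table, as in (3.3)-(3.4).
from fractions import Fraction

F = Fraction
# p(x, z) for Example 3.A: all mass lies on the diagonal z = 3 - x.
joint = {(x, z): (F(1, 8) if x in (0, 3) else F(3, 8)) if z == 3 - x else F(0)
         for x in range(4) for z in range(4)}

# Row sums give p_X (3.3); column sums give p_Z (3.4).
p_X = {x: sum(joint[x, z] for z in range(4)) for x in range(4)}
p_Z = {z: sum(joint[x, z] for x in range(4)) for z in range(4)}

# Both marginals are the same Binomial(3, 1/2) distribution.
assert p_X == {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)}
assert p_Z == p_X
```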
Example 3.A (continued). For our coin example, we have the marginal distributions of X, Y, and Z:

p_X(k) = p_Y(k) = p_Z(k) = 1/8, 3/8, 3/8, 1/8 for k = 0, 1, 2, 3.

This example highlights an important fact: you can always find the marginal distributions from the bivariate distribution, but in general you cannot go the other way: you cannot reconstruct the interior of a table (the bivariate distribution) knowing only the marginal totals. In this example, both tables have exactly the same marginal totals (in fact X, Y, and Z all have the same Binomial(3, 1/2) distribution), but the bivariate distributions are quite different. The marginal distributions p_X(x) and p_Y(y) describe our uncertainty about the possible values, respectively, of X considered separately, without regard to whether or not Y is even observed, and of Y considered separately, without regard to whether or not X is even observed. But they cannot tell us about the relationship between the two variables; they alone cannot tell us whether the two variables refer to the same coin or to different coins. However, the example also gives a hint as to just what sort of information is needed to build up a bivariate distribution from component parts. In one case the knowledge that the two coins were independent gave us p(x, y) = p_X(x) · p_Y(y); in the other case the complete dependence of Z on X gave us p(x, z) = p_X(x) or 0, according as z = 3 − x or not. What was needed was information about how the knowledge of one random variable's outcome may affect the other: conditional information. We formalize this as a conditional probability function, defined by

p(y|x) = P(Y = y | X = x)   (3.5)

which we read as "the probability that Y = y given that X = x." Since "Y = y" and "X = x" are events, this is just our earlier notion of conditional probability re-expressed for discrete random variables, and from (1.7) we have that
p(y|x) = P(Y = y | X = x)
       = P(Y = y and X = x) / P(X = x)
       = p(x, y) / p_X(x)   (3.6)

as long as p_X(x) > 0, with p(y|x) undefined for any x with p_X(x) = 0.
If p(y|x) = p_Y(y) for all possible pairs of values (x, y) for which p(y|x) is defined, we say X and Y are independent random variables. From (3.6), we would equivalently have that X and Y are independent random variables if

p(x, y) = p_X(x) · p_Y(y), for all x, y   (3.7)

Thus X and Y are independent only if all pairs of events "X = x" and "Y = y" are independent; if (3.7) should fail to hold for even a single pair (x₀, y₀), X and Y would be dependent. In Example 3.A, X and Y are independent, but X and Z are dependent. For example, for x = 2, p(z|x) is given by

p(z|2) = p(2, z) / p_X(2) = 1 if z = 1
                          = 0 otherwise

so p(z|x) ≠ p_Z(z) for x = 2, z = 1 in particular (and for all other values as well).
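The conditional calculation for Example 3.A can be sketched directly from (3.6). In this illustrative snippet (names are mine, not the text's), dividing the joint table by the marginal p_X shows that, given X = 2, Z is certain to be 1, which differs from the marginal p_Z:

```python
# Conditional probability function p(z | x) = p(x, z) / p_X(x), as in (3.6).
from fractions import Fraction

F = Fraction
joint = {(x, z): (F(1, 8) if x in (0, 3) else F(3, 8)) if z == 3 - x else F(0)
         for x in range(4) for z in range(4)}
p_X = {x: sum(joint[x, z] for z in range(4)) for x in range(4)}
p_Z = {z: sum(joint[x, z] for x in range(4)) for z in range(4)}

def cond(z, x):
    """p(z | x); defined only where p_X(x) > 0."""
    return joint[x, z] / p_X[x]

assert cond(1, 2) == 1 and cond(0, 2) == 0   # given X = 2, Z must be 1
assert cond(1, 2) != p_Z[1]                  # conditional != marginal, so X, Z dependent
```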
By using (3.6) in the form

p(x, y) = p_X(x) p(y|x) for all x, y   (3.8)

it is possible to construct a bivariate distribution from two components: either marginal distribution and the conditional distribution of the other variable given the one whose marginal distribution is specified. Thus while marginal distributions are themselves insufficient to build a bivariate distribution, the conditional probability function captures exactly what additional information is needed.
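As a sketch of (3.8) in action (illustrative code, not from the text), one marginal plus a conditional is enough to rebuild the whole joint table; here the Binomial(3, 1/2) marginal of X and the degenerate conditional that forces Z = 3 − x reproduce the dependent table of Example 3.A:

```python
# Build a bivariate distribution from a marginal and a conditional, as in (3.8).
from fractions import Fraction

F = Fraction
p_X = {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)}   # Binomial(3, 1/2)

def cond_Z(z, x):
    """p(z | x): all conditional mass sits on z = 3 - x."""
    return F(1) if z == 3 - x else F(0)

# p(x, z) = p_X(x) * p(z | x) for every pair of values.
joint = {(x, z): p_X[x] * cond_Z(z, x)
         for x in range(4) for z in range(4)}

assert joint[1, 2] == F(3, 8)       # matches the table of Example 3.A
assert sum(joint.values()) == 1     # a genuine probability distribution
```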
3.2 Continuous Bivariate Distributions.
The distribution of a pair of continuous random variables X and Y defined on the same sample space (that is, in reference to the same experiment) is given formally by an extension of the device used in the univariate case, a density function. If we think of the pair (X, Y) as a random point in the plane, the bivariate probability density function f(x, y) describes a surface in 3-dimensional space, and the probability that (X, Y) falls in a region in the plane is given by the volume over that region and under the surface f(x, y). Since volumes are given as double integrals, the rectangular region with a < x < b and c < y < d has probability

P(a < X < b and c < Y < d) = ∫_c^d ∫_a^b f(x, y) dx dy   (3.9)
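The volume interpretation of (3.9) can be checked numerically. The sketch below assumes a particular density not taken from the text, f(x, y) = e^(−x−y) for x, y > 0, for which the rectangle probability has the closed form (1 − e^(−1))² when a = c = 0 and b = d = 1; a midpoint Riemann sum over the rectangle recovers it:

```python
# Approximate P(a < X < b and c < Y < d) as a double integral of f, as in (3.9).
import math

def f(x, y):
    """Assumed density: independent unit exponentials, f(x, y) = e^(-x-y)."""
    return math.exp(-x - y) if x > 0 and y > 0 else 0.0

def rect_prob(a, b, c, d, n=400):
    """Midpoint double Riemann sum of f over the rectangle (a, b) x (c, d)."""
    hx, hy = (b - a) / n, (d - c) / n
    return sum(f(a + (i + 0.5) * hx, c + (j + 0.5) * hy)
               for i in range(n) for j in range(n)) * hx * hy

approx = rect_prob(0.0, 1.0, 0.0, 1.0)
exact = (1 - math.exp(-1)) ** 2          # closed form for this density
assert abs(approx - exact) < 1e-4
```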
[Figure 3.3]
It will necessarily be true of any bivariate density that

f(x, y) ≥ 0 for all x, y   (3.10)

and

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1   (3.11)

that is, the total volume between the surface f(x, y) and the x-y plane is 1. Also, any function f(x, y) satisfying (3.10) and (3.11) describes a continuous bivariate probability distribution.
It can help the intuition to think of a continuous bivariate distribution as a unit mass resting squarely on the plane, not concentrated as spikes at a few separated points, as in the discrete case. It is as if the mass is made of a homogeneous substance, and the function f(x, y) describes the upper surface of the mass.
If we are given a bivariate probability density f(x, y), then we can, as in the discrete case, calculate the marginal probability densities of X and of Y; they are given by

f_X(x) = ∫_{−∞}^{∞} f(x, y) dy for all x   (3.12)

f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx for all y   (3.13)

Just as in the discrete case, these give the probability densities of X and Y considered separately, as continuous univariate random variables.
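Integrating out one variable, as in (3.12), can also be sketched numerically. Again assuming the illustrative density f(x, y) = e^(−x−y) for x, y > 0 (not a density from the text), the marginal of X should come out as f_X(x) = e^(−x):

```python
# Marginal density by numerical integration over y, as in (3.12).
import math

def f(x, y):
    """Assumed density: f(x, y) = e^(-x-y) for x, y > 0."""
    return math.exp(-x - y) if x > 0 and y > 0 else 0.0

def f_X(x, upper=40.0, n=4000):
    """Midpoint approximation of the integral of f(x, y) dy.

    The integral over (-inf, inf) is truncated at `upper`; the tail
    beyond it is negligible for this density.
    """
    h = upper / n
    return sum(f(x, (j + 0.5) * h) for j in range(n)) * h

# The numerical marginal matches the exact marginal e^(-x).
for x in (0.5, 1.0, 2.0):
    assert abs(f_X(x) - math.exp(-x)) < 1e-4
```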
The relationships (3.12) and (3.13) are rather close analogues to the formulae for the discrete case, (3.3) and (3.4). They may be justified as follows: for any a < b, the events "a < X ≤ b" and "a < X ≤ b and −∞ < Y < ∞" are in fact two ways of describing the same event. The second of these has probability

∫_{−∞}^{∞} ∫_a^b f(x, y) dx dy = ∫_a^b ∫_{−∞}^{∞} f(x, y) dy dx = ∫_a^b [∫_{−∞}^{∞} f(x, y) dy] dx

We must therefore have

P(a < X ≤ b) = ∫_a^b [∫_{−∞}^{∞} f(x, y) dy] dx for all a < b

and thus ∫_{−∞}^{∞} f(x, y) dy fulfills the definition of f_X(x) (given in Section 1.7): it is a function of x that gives the probabilities of intervals as areas, by integration.