
Markov chains with finite and countable state space, CSIR-NET Mathematical Sciences | Mathematics for IIT JAM, GATE, CSIR NET, UGC NET

Markov Chains Introduction 

1. Consider a discrete-time Markov chain {X_i, i = 0, 1, 2, ...} that takes values in a countable (finite or infinite) set S = {x_1, x_2, ...}, where x_j denotes the j-th state; X_i is the state of the stochastic process at time i.

Examples of countable S are: 

  • Z^d, the d-dimensional integer lattice
  • the set of all partitions of {1,2,...,n}
  • the set of all configurations of magnetic spins (-1 or 1) on an n×n lattice grid (Ising model)
  • the set of possible moves of a king on a chessboard
  • the set of possible routes of visitation of n cities (traveling salesman problem)
  • the set of all length-d vectors of 0's and 1's
  • any finite set

It can be viewed as a stochastic process X_i indexed over time i = 0, 1, 2, .... It is called a Markov chain if it satisfies the Markov property: the next state X_{i+1} depends only on the current state X_i (memoryless property), i.e.

P(X_{i+1} = x_j | X_i = x_i, ..., X_0 = x_0) = P(X_{i+1} = x_j | X_i = x_i).

The probability P_ij = P(X_{i+1} = x_j | X_i = x_i) is called the transition probability, and the matrix P = (P_ij) the transition matrix. Every row of a transition matrix sums to one, i.e., Σ_j P_ij = 1 for all i. The initial distribution of X_0 together with P determines the distribution of X_n for every n.
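To make the definitions concrete, here is a minimal Python sketch that samples a path of a chain from its transition matrix and a starting state; the 3-state matrix below is invented for illustration, not taken from the notes.

```python
import random

# An invented 3-state transition matrix; row i lists P(i -> j) and
# each row must sum to one.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def simulate_chain(P, start, n_steps, rng=random):
    """Sample a path X_0, X_1, ..., X_{n_steps} of the chain."""
    path = [start]
    for _ in range(n_steps):
        # The next state depends only on the current one (Markov property).
        nxt = rng.choices(range(len(P)), weights=P[path[-1]])[0]
        path.append(nxt)
    return path

assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # stochastic rows
random.seed(0)
path = simulate_chain(P, start=0, n_steps=10)
```

Because each row of P is a probability distribution over the next state, `random.choices` with the current row as weights implements one transition.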

2. Random walk. You play a coin-tossing game: you win $1 if a head appears and lose $1 if a tail appears. Let X_i be your total gain after i tosses. Assuming the coin is unbiased, the transition probabilities are given by

P(X_{i+1} = k + 1 | X_i = k) = P(X_{i+1} = k - 1 | X_i = k) = 1/2.

We can model the coin tossing as a simple random walk X_{i+1} = X_i + ε_{i+1}, where the ε_i are independent and take the values +1 and -1 with probability 1/2 each. The expected payoff does not change over time, i.e., E X_{i+1} = E X_i.

In this case, what is P(X_n = k | X_0 = 0)? Please compute this probability.
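For the unbiased walk started at X_0 = 0, the answer is P(X_n = k) = C(n, (n+k)/2) / 2^n whenever n + k is even and |k| ≤ n, and 0 otherwise (the walk must take (n+k)/2 up-steps). A short sketch cross-checking this closed form against a direct dynamic-programming recursion:

```python
from math import comb

def walk_pmf(n, k):
    """P(X_n = k | X_0 = 0) for the unbiased +/-1 random walk."""
    if (n + k) % 2 or abs(k) > n:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

def walk_pmf_dp(n, k):
    """Same probability via one-step recursion on the distribution."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for pos, p in dist.items():
            new[pos + 1] = new.get(pos + 1, 0.0) + p / 2
            new[pos - 1] = new.get(pos - 1, 0.0) + p / 2
        dist = new
    return dist.get(k, 0.0)
```

Both routes agree, and the probabilities over all reachable k sum to one.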

3. Chapman-Kolmogorov equation. Let P^n(x, y) be the transition kernel from x to y in n steps, i.e., P^n(x, y) = P(X_{m+n} = y | X_m = x). Then we have

P^{m+n}(x, y) = Σ_z P^m(x, z) P^n(z, y).

That is, the probability of reaching y from x in (m + n) steps is the sum over all intermediate points z of the probabilities of going from x to z and then from z to y.

Let P^n = (P^n(x, y))_{x,y} be the matrix of n-step transition probabilities.

Then, in matrix form, the Chapman-Kolmogorov equation can be written compactly as P^{m+n} = P^m P^n.
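The matrix identity is easy to verify numerically; the 2-state matrix below is illustrative.

```python
def matmul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, n):
    """n-step transition matrix P^n (n >= 1) by repeated multiplication."""
    out = P
    for _ in range(n - 1):
        out = matmul(out, P)
    return out

P = [[0.9, 0.1],
     [0.4, 0.6]]  # illustrative 2-state chain

lhs = matpow(P, 5)                         # P^{m+n} with m = 2, n = 3
rhs = matmul(matpow(P, 2), matpow(P, 3))   # P^m P^n
# lhs and rhs agree entry by entry, as Chapman-Kolmogorov asserts.
```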

4. Example. A rat became insane and moves back and forth between positions 1 and 2. Let X_i be the position of the rat at the i-th move. Suppose that the transition matrix is given by

P = ( 1/2  1/2 )
    (  1    0  )

i.e., from position 1 the rat stays or moves with probability 1/2 each, and from position 2 it always moves back to position 1.

On a finite state space, a state i is called recurrent if the Markov chain starting at i returns to i in a finite number of steps with probability 1; otherwise the state is transient. Recurrence of a state is thus equivalent to a guaranteed (sure) return. Are states 1 and 2 recurrent?

P(the chain starting at j returns to j in a finite number of steps) = 1

for j = 1, 2. So states 1 and 2 are recurrent.

5. Let us find P(X_n = 1 | X_0 = 2). Note that this is the n-step transition probability P^n(2, 1), computed via the Chapman-Kolmogorov equation. For this we need to compute P^n. In general there are theorems for doing this; for now we are happy to do it computationally, and we find that

P^n → ( 2/3  1/3 )
      ( 2/3  1/3 )   as n → ∞.

So we can see that the transition probabilities converge: P^n(i, j) → π_j for each starting state i, where π = (π_1, π_2) = (2/3, 1/3) satisfies Σ_j π_j = 1.
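The computation can be reproduced numerically. Since the original matrix was an image, the matrix below is an assumption: from position 1 the rat stays or moves with probability 1/2 each, and from position 2 it always jumps back to 1, which is consistent with the mean return time of 3 to position 2 quoted later in the notes.

```python
def matmul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Assumed rat-chain matrix (the original was lost in extraction).
P = [[0.5, 0.5],
     [1.0, 0.0]]

Pn = P
for _ in range(49):   # P^50 via repeated Chapman-Kolmogorov steps
    Pn = matmul(Pn, P)
# Both rows of P^50 are numerically (2/3, 1/3): the n-step transition
# probabilities no longer depend on the starting state.
```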

6. A distribution π is a stationary distribution for the Markov chain with transition matrix P if π'P = π', i.e., π_j = Σ_i π_i P_ij for all j.

An interesting property of π_j is that its reciprocal 1/π_j is the expected number of steps the Markov chain takes to return to state j. In our crazy-rat example, π_2 = 1/3, so the rat starting at position 2 returns to position 2 in 3 steps on average.

7. Increased stability of a Markov chain can be achieved if we let X_0 ∼ π, i.e., P(X_0 = j) = π_j. Then X_1 ∼ π, since the distribution of X_1 is π'P = π'. Similarly X_n ∼ π for every n.

Note that π' is the left eigenvector of P corresponding to the eigenvalue 1 (equivalently, π is an eigenvector of P' with eigenvalue 1).

Actually, you do not need X_0 ∼ π to get a stable chain: if X_0 ∼ µ for any probability distribution µ, then µ'P^n → π' as n → ∞.
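The limit µ'P^n → π' suggests a simple way to compute the stationary distribution: power iteration, which converges to the left eigenvector for eigenvalue 1. A pure-Python sketch, applied to a 2-state matrix assumed to match the rat example (the original matrix was an image):

```python
# Assumed rat-style chain for illustration.
P = [[0.5, 0.5],
     [1.0, 0.0]]

def stationary(P, n_iter=200):
    """Approximate pi as the limit of mu' P^n (power iteration)."""
    n = len(P)
    mu = [1.0 / n] * n                    # arbitrary initial distribution
    for _ in range(n_iter):
        # One step: mu' <- mu' P, i.e., mu_j = sum_i mu_i P_ij.
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

pi = stationary(P)   # approx (2/3, 1/3)
```

The returned vector is a fixed point of π' = π'P up to numerical error, confirming the eigenvector characterization.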

8. In general, a Markov chain is completely specified by the distribution of the initial state X_0 ∼ µ and the transition probability matrix P.

9. This is the basis of MCMC. Given a probability distribution π, we want to estimate some functional of π, e.g., E_π g(X). The idea is to construct a Markov chain X_n with stationary distribution π and let the chain run for a long time (the burn-in stage) so that it converges to π. We then collect a dependent sample from this stationary distribution and use it to estimate the integral.

The difference between MCMC and a simple Monte Carlo method is that MCMC samples are dependent while simple Monte Carlo samples are independent.
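A minimal sketch of the MCMC idea, with a made-up discrete target π and g(x) = x (neither is from the notes): a Metropolis chain with a symmetric proposal has π as its stationary distribution, and the dependent draws collected after burn-in estimate E_π g(X).

```python
import random

pi = {0: 0.2, 1: 0.3, 2: 0.5}   # made-up target distribution
g = lambda x: x                  # estimate E_pi[g(X)] = 1.3

def metropolis(pi, n_samples, burn_in=1000, rng=random):
    """Metropolis sampler on a small discrete space (symmetric proposal)."""
    states = list(pi)
    x = states[0]
    samples = []
    for t in range(burn_in + n_samples):
        y = rng.choice(states)                    # propose a state uniformly
        if rng.random() < min(1.0, pi[y] / pi[x]):
            x = y                                 # accept the move
        if t >= burn_in:
            samples.append(x)                     # dependent draws from pi
    return samples

random.seed(1)
draws = metropolis(pi, 200_000)
est = sum(g(x) for x in draws) / len(draws)   # estimate of E_pi[g(X)] = 1.3
```

Note that consecutive draws are correlated (the chain often stays put), which is exactly the dependence that distinguishes MCMC from simple Monte Carlo.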

 

Markov Chains Concepts

We want to know: under what conditions will the chain converge? And what is the stationary distribution it converges to? We will discuss finite state spaces first, then extend the results to infinite countable spaces.

2.1 Markov Chains on Finite S

One way to check for convergence and stationarity is given by the following theorem:
Perron-Frobenius Theorem. If there exists a positive integer n such that P^n > 0 element-wise (i.e., all the entries of P^n are strictly positive), then the chain converges to a unique stationary distribution π.

Although the condition required in the Perron-Frobenius theorem is easy to state, it is sometimes difficult to establish for a given transition matrix P. Therefore, we seek another characterization of the convergence of a Markov chain, with conditions that may be easier to check.

For a finite state space, the transition matrix needs to satisfy only two properties for the Markov chain to converge: irreducibility and aperiodicity.

Irreducibility means that it is possible to get from any state to any other state in a finite number of steps. In other words, the chain can visit the entire space S from any starting point X_0.

Aperiodicity means that the Markov chain does not cycle through the states with a fixed period. The problem with periodicity is that it restricts the chain to returning to certain states only at fixed time intervals. Some definitions:
 

1. Two states are said to communicate if it is possible to go from each one to the other in a finite number of steps. In other words, x_i and x_j communicate if there exist positive integers n and m such that P^n(x_i, x_j) > 0 and P^m(x_j, x_i) > 0.

2. A Markov chain is said to be irreducible if all states communicate with each other under the corresponding transition matrix. For example, on S = {x_1, x_2, x_3, x_4}, a transition matrix whose entries are all positive gives an irreducible chain, while a block-diagonal transition matrix is reducible into two clusters that never communicate: one containing states x_1 and x_2, and the other containing states x_3 and x_4.
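Irreducibility can be checked mechanically by a reachability search on the directed graph of positive transition probabilities. The two 4-state matrices below are hypothetical stand-ins (the matrices of the original example did not survive extraction): one irreducible chain, and one block-diagonal reducible chain with clusters {x_1, x_2} and {x_3, x_4}.

```python
def reachable(P, i, j):
    """True if state j is reachable from i, i.e., P^n(i, j) > 0 for some n."""
    n = len(P)
    seen, frontier = {i}, [i]
    while frontier:
        x = frontier.pop()
        for y in range(n):
            if P[x][y] > 0 and y not in seen:
                seen.add(y)
                frontier.append(y)
    return j in seen

def irreducible(P):
    """All ordered pairs of states communicate."""
    n = len(P)
    return all(reachable(P, i, j) for i in range(n) for j in range(n))

# Hypothetical irreducible chain: every state reaches every other.
P_irred = [[0.0, 0.5, 0.5, 0.0],
           [0.5, 0.0, 0.0, 0.5],
           [0.5, 0.0, 0.0, 0.5],
           [0.0, 0.5, 0.5, 0.0]]
# Hypothetical block-diagonal chain: {x1, x2} and {x3, x4} never communicate.
P_red   = [[0.5, 0.5, 0.0, 0.0],
           [0.5, 0.5, 0.0, 0.0],
           [0.0, 0.0, 0.5, 0.5],
           [0.0, 0.0, 0.5, 0.5]]
```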

3. For a state x_i and a given transition matrix P, define the period of x_i as d(x_i) = gcd{n ≥ 1 : P^n(x_i, x_i) > 0}.

4. If two states communicate, then it can be proved that their periods are the same. Thus, in an irreducible Markov chain, where all states communicate, all states have the same period. This is called the period of the irreducible Markov chain.

5. An irreducible Markov chain is called aperiodic if its period is one.

6. (Theorem) An irreducible and aperiodic Markov chain on a finite state space converges to a unique stationary distribution. If we let the chain run for a long time, then it can be viewed as a sample from its (unique) stationary distribution. These two conditions also imply that all the entries of P^n are positive for some positive integer n, which is exactly the requirement of the Perron-Frobenius theorem.
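Both the period and the Perron-Frobenius positivity condition can be checked by inspecting powers of P; the sketch below does this for two illustrative 2-state matrices (a deterministic 2-cycle and a lazy, aperiodic chain).

```python
from math import gcd

def matmul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, i, max_n=50):
    """gcd of the return times n <= max_n with P^n(i, i) > 0."""
    d, Pn = 0, P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)
        Pn = matmul(Pn, P)
    return d

def all_positive_power(P, max_n=100):
    """Smallest n <= max_n with every entry of P^n positive, else None."""
    Pn = P
    for n in range(1, max_n + 1):
        if all(p > 0 for row in Pn for p in row):
            return n
        Pn = matmul(Pn, P)
    return None

flip = [[0.0, 1.0], [1.0, 0.0]]   # deterministic 2-cycle: period 2
lazy = [[0.5, 0.5], [1.0, 0.0]]   # aperiodic: period 1
```

Here `period(flip, 0)` is 2 and `all_positive_power(flip)` is `None` (its powers alternate and always contain zeros), while the aperiodic chain has period 1 and already satisfies P^2 > 0, matching the theorem.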

 

2.2 Markov Chains on Infinite but countable S

1. In the case of an infinite but countable state space, convergence of the Markov chain requires an additional concept, positive recurrence, to ensure that the chain has a unique stationary probability distribution.

2. The state x_i is recurrent iff P(the chain starting from x_i returns to x_i infinitely often) = 1. A state is said to be transient if it is not recurrent.

3. It can be shown that a state x_i is recurrent iff

Σ_n P^n(x_i, x_i) = ∞.
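This criterion can be illustrated on the simple symmetric random walk on Z (a standard example, not worked out in the notes): P^{2n}(0, 0) = C(2n, n)/4^n, which decays like 1/√(πn), so the series diverges and the origin is recurrent. A quick numerical check:

```python
from math import comb

def p_return(n):
    """P^{2n}(0, 0) = C(2n, n) / 4^n for the unbiased +/-1 walk on Z."""
    return comb(2 * n, n) / 4 ** n

# Partial sums of the return-probability series keep growing (roughly
# like sqrt(N)) rather than converging, so the full sum is infinite.
partial = [sum(p_return(n) for n in range(1, N + 1)) for N in (10, 100, 1000)]
```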

4. Another way to look at recurrence/transience: a state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i. Formally, let the random variable T_i be the first return time to state i:

T_i = min{n ≥ 1 : X_n = i | X_0 = i}.

Then state i is transient if T_i is infinite with positive probability, i.e., P(T_i < ∞) < 1. If a state i is not transient, i.e., P(T_i < ∞) = 1 (the return time is finite with probability 1), then it is recurrent. Although the return time is then finite with probability 1, it need not have a finite average. Letting M_i = E T_i be the expected (average) return time, state i is said to be positive recurrent if M_i is finite; otherwise, state i is null recurrent.
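The mean return time M_i can be probed by simulation: average many independent draws of T_i. The sketch below assumes the 2-state rat-style matrix with π_2 = 1/3, for which M_2 = 1/π_2 = 3 (the matrix is an assumption; the original was lost).

```python
import random

# Assumed rat-style chain; state index 1 in the code is the rat's
# position 2 in the notes, with stationary probability 1/3.
P = [[0.5, 0.5],
     [1.0, 0.0]]

def return_time(P, i, rng=random):
    """One draw of T_i = min{n >= 1 : X_n = i | X_0 = i}."""
    x, n = i, 0
    while True:
        x = rng.choices(range(len(P)), weights=P[x])[0]
        n += 1
        if x == i:
            return n

random.seed(2)
M2 = sum(return_time(P, 1) for _ in range(50_000)) / 50_000
# M2 is close to 3, i.e., finite, so this state is positive recurrent.
```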

5. If two states x_i and x_j communicate and x_i is (positive) recurrent, then x_j is also (positive) recurrent.

6. (Theorem) An irreducible, aperiodic, and positive recurrent Markov chain on a countably infinite state space converges to a unique stationary distribution. Such a chain is said to be ergodic.

This document is part of the Mathematics course Mathematics for IIT JAM, GATE, CSIR NET, UGC NET.

FAQs on Markov chains with finite and countable state space, CSIR-NET Mathematical Sciences - Mathematics for IIT JAM, GATE, CSIR NET, UGC NET

1. What is a Markov chain?
Ans. A Markov chain is a mathematical model that describes a sequence of events where the probability of transitioning from one state to another depends only on the current state. It is characterized by a set of states, a transition probability matrix, and an initial state.
2. What is the difference between a finite and countable state space in Markov chains?
Ans. In a Markov chain, a finite state space refers to a limited number of distinct states, while a countable state space allows for an infinite number of states. The transition probabilities between states are represented by matrices in both cases, but the dimensions of the matrix differ based on the size of the state space.
3. How are Markov chains useful in the field of Mathematical Sciences?
Ans. Markov chains have a wide range of applications in Mathematical Sciences. They are used to model various real-life scenarios such as weather patterns, stock market fluctuations, population dynamics, and customer behavior in business. They provide a mathematical framework to understand and predict the behavior of systems that exhibit random transitions between states.
4. What are the key components of a Markov chain?
Ans. The key components of a Markov chain include a set of states, a transition probability matrix, and an initial state. The set of states represents the possible states that the system can be in. The transition probability matrix defines the probabilities of transitioning from one state to another. The initial state represents the starting point of the Markov chain.
5. How can one analyze the behavior of a Markov chain with a finite or countable state space?
Ans. The behavior of a Markov chain with a finite or countable state space can be analyzed by studying its stationary distribution, ergodicity, and long-term behavior. The stationary distribution represents the long-run probabilities of being in each state. Ergodicity refers to the property that the chain is irreducible, aperiodic, and (on a countable space) positive recurrent, which guarantees convergence to a unique stationary distribution. Various mathematical techniques, such as matrix algebra and eigenvalue analysis, can be used to analyze these properties of a Markov chain.