Lecture 10: Examples of Discrete Time Markov Chain

Parimal Parag

1 Discrete Time Markov Chains Contd.

1.1 Example 4.3(C): The Age of a Renewal Process
Consider a discrete time slotted system. An item is put into use; when the item
fails, it is replaced at the beginning of the next time period. Let $P_i$ be the
probability that the item fails in the $i$th time period. Assume that the
lifetimes of successive items are independent, that the distribution $\{P_i\}$ is
aperiodic, and that $\sum_i i P_i < \infty$. The age of an item is the number of
periods the item has been in use. Let $X_n$ denote the age of the item in use at
time $n$, and define
\[
\lambda(i) \triangleq \frac{P_i}{\sum_{j=i}^{\infty} P_j},
\]
so that $\lambda(i)$ is the probability that an $i$ unit old item fails. Then
$\{X_n\}$ is a Markov chain with transition probabilities
\[
P(X_n = i_n \mid X_{n-1} = i) =
\begin{cases}
\lambda(i) & : i_n = 1, \\
1 - \lambda(i) & : i_n = i + 1,
\end{cases}
\]
for $i \ge 1$. To find the equilibrium distribution of this process we solve
$\pi = \pi P$:
\[
\pi_1 = \sum_i \pi_i \lambda(i), \qquad \pi_{i+1} = \pi_i (1 - \lambda(i)).
\]
We can solve the above set of equations iteratively to obtain
\[
\pi_{i+1} = \pi_1 (1 - \lambda(1))(1 - \lambda(2)) \cdots (1 - \lambda(i))
= \pi_1 \sum_{j=i+1}^{\infty} P_j = \pi_1 P(Y \ge i + 1),
\]
where $Y$ is the lifetime of an item. Using $\sum_j \pi_j = 1$, we get
$\pi_1 = \frac{1}{E[Y]}$ and $\pi_i = \frac{P(Y \ge i)}{E[Y]}$.
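For instance, in the illustrative special case of geometric lifetimes,
$P_i = (1-\theta)^{i-1}\theta$ for $i \ge 1$ and some $0 < \theta < 1$, we have
$\sum_{j=i}^{\infty} P_j = (1-\theta)^{i-1}$, so $\lambda(i) = \theta$ for every
$i$ and $E[Y] = 1/\theta$. The formula above then gives
\[
\pi_i = \frac{P(Y \ge i)}{E[Y]} = (1-\theta)^{i-1}\theta,
\]
i.e., with a constant failure probability the equilibrium age has the same
geometric distribution as the lifetime itself.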
1.2 Example 4.3(D): Time slotted system
Consider a time slotted system in which, in each slot, every member of a
population dies with probability $p$, independently of everything else. In each
time slot, the number of new members that join the population is a Poisson
random variable with mean $\lambda$, independent across slots. Let $X_n$ denote
the number of members of the population at the beginning of time period $n$.
Observe that $\{X_n\}$ is a Markov chain. We are interested in computing the
stationary distribution of this Markov chain. To that end, let $X_0$ be a Poisson
random variable with mean $\alpha$. Each of these individuals survives to the
beginning of slot 1 with probability $1 - p$, so by Poisson thinning the number
of survivors is Poisson with mean $\alpha(1-p)$. Since the number of new members
is an independent Poisson random variable with parameter $\lambda$, $X_1$ is
Poisson with parameter $\alpha(1-p) + \lambda$. If $\alpha = \alpha(1-p) + \lambda$,
i.e., $\alpha = \lambda/p$, the chain would be stationary. Hence, by uniqueness
of the stationary distribution, the stationary distribution is Poisson with mean
$\lambda/p$.
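As a quick numerical check, here is a minimal simulation sketch of this example
in Python (the values of lam and p below are illustrative choices, not from the
lecture): starting from the claimed stationary law Poisson($\lambda/p$), the
population mean should remain close to $\lambda/p$.

import numpy as np

# Simulation sketch of Example 4.3(D): Poisson(lam) arrivals per slot,
# each member dies with probability p per slot, independently.
rng = np.random.default_rng(1)
lam, p = 2.0, 0.5                        # illustrative values
x = rng.poisson(lam / p)                 # start from the claimed stationary law
means = []
for _ in range(100000):
    survivors = rng.binomial(x, 1 - p)   # members still alive at the next slot
    x = survivors + rng.poisson(lam)     # plus newly joined members
    means.append(x)
print(np.mean(means))                    # should be close to lam / p = 4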
1.3 Example 4.3(E): Gibbs sampler
The Gibbs sampler is used to generate values of a random vector
$(X_1, X_2, \ldots, X_n)$ when it is difficult to generate values directly from
the joint mass function $p_{X_1, X_2, \ldots, X_n}(\cdot)$. The idea is to form a
Markov chain whose stationary distribution is $p_{X_1, X_2, \ldots, X_n}(\cdot)$.
The Gibbs sampler works as follows. Start with
$X^0 = (x_1^0, x_2^0, \ldots, x_n^0)$ for which
$p(x_1^0, x_2^0, \ldots, x_n^0) > 0$. Generate $x_k^1$ for $k = 1, 2, \ldots, n$
according to $P_{X_k \mid X_j = x_j^0,\, j \ne k}(\cdot)$. Similarly, generate
$X^k$ given $X^{k-1} = (x_1^{k-1}, \ldots, x_n^{k-1})$. Observe that
$\{X^j, j \ge 0\}$ is a Markov chain. It is easy to see that
$p_{X_1, X_2, \ldots, X_n}$ is the stationary probability distribution of this
Markov chain.
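A minimal sketch of this procedure in Python for a two-component random vector,
with a small illustrative joint mass function (the weight matrix below is an
arbitrary choice, not from the lecture); the empirical frequencies of the
generated pairs should approach the joint pmf.

import numpy as np

# Gibbs sampler sketch for a two-dimensional mass function p_{X1,X2};
# the weights below are an arbitrary illustrative choice.
rng = np.random.default_rng(0)
support = np.arange(3)
weights = np.array([[4.0, 2.0, 1.0],
                    [2.0, 4.0, 2.0],
                    [1.0, 2.0, 4.0]])
p = weights / weights.sum()              # joint pmf p(x1, x2)

def sample_conditional(slice_of_p):
    # Sample from the conditional pmf proportional to the given slice of p.
    return rng.choice(support, p=slice_of_p / slice_of_p.sum())

x1, x2 = 0, 0                            # a starting point with p(x1, x2) > 0
counts = np.zeros_like(p)
for _ in range(200000):
    x1 = sample_conditional(p[:, x2])    # draw X1 given X2 = x2
    x2 = sample_conditional(p[x1, :])    # draw X2 given X1 = x1
    counts[x1, x2] += 1

print(np.round(counts / counts.sum(), 3))   # should be close to p
print(np.round(p, 3))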
Consider an irreducible positive recurrent Markov chain with stationary
probabilities $\{\pi(i) : i \in \mathbb{N}_0\}$. Let $N$ denote the number of
transitions between successive visits to state 0. Visits to state 0 constitute
renewal instants. Then the number of visits to state 0 by time $n$ is, for large
$n$, approximately normally distributed, i.e.,
\[
\sum_{m=1}^{n} 1_{\{X_m = 0\}} \approx
\mathcal{N}\!\left(\frac{n}{E[N]},\ \frac{n\,\mathrm{Var}(N)}{E[N]^3}\right).
\]
Note that $n/E[N] = n\pi_0$ and
$n\,\mathrm{Var}(N)/E[N]^3 = n\,\mathrm{Var}(N)\pi_0^3$. Since
$\mathrm{Var}(N) = E[N^2] - \frac{1}{\pi_0^2}$, we need to find $E[N^2]$. Let
$T_n$ be the number of transitions from time $n$ until the next visit to state 0.
Note that $\frac{1}{n}\sum_{m=1}^{n} T_m$ is the average reward per unit time
when $T_m$ is interpreted as a reward earned at time $m$. If $N$ is the number of
transitions between successive visits to state 0, the reward earned over one
cycle is $N + (N-1) + \cdots + 1 = N(N+1)/2$, so the average reward of the
process per unit time is
\[
\frac{E\!\left[\frac{N(N+1)}{2}\right]}{E[N]}
= \frac{1}{2} + \frac{E[N^2]}{2E[N]}
= \sum_{i \in \mathbb{N}_0} \pi(i)\,\mu_{i,0}.
\]
Here $\mu_{i,0} = E[T_n \mid X_n = i]$, and the last equality holds because the
chain spends a long-run fraction $\pi(i)$ of its time in state $i$. Thus
$\mu_{i,0} = 1 + \sum_{j \ne 0} P_{i,j}\,\mu_{j,0}$.
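Rearranging the last display gives the quantities needed for the normal
approximation explicitly:
\[
E[N^2] = 2E[N]\sum_{i \in \mathbb{N}_0} \pi(i)\,\mu_{i,0} - E[N]
= \frac{2\sum_{i \in \mathbb{N}_0} \pi(i)\,\mu_{i,0} - 1}{\pi_0},
\qquad
\mathrm{Var}(N) = \frac{2\sum_{i \in \mathbb{N}_0} \pi(i)\,\mu_{i,0} - 1}{\pi_0}
- \frac{1}{\pi_0^2},
\]
so once the linear system for the $\mu_{i,0}$ is solved, both parameters of the
approximating normal distribution are determined.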
1.4 Transition Among Classes and Mean Times in Transient States
Proposition 1. Let $R$ be a recurrent class of states. If $i \in R$ and
$j \notin R$, then $P_{ij} = 0$.

Proof. Suppose $P_{ij} > 0$. Then, as $i$ and $j$ do not communicate with each
other, $P_{ji}^{n} = 0$ for all $n$. Hence, starting from state $i$, there is a
positive probability of at least $P_{ij}$ that the process will never return to
state $i$. This contradicts the fact that $i$ is recurrent. Hence, $P_{ij} = 0$.
Let $T$ denote the set of all transient states. Let $j$ be recurrent and
$i \in T$. Let $f_{ij}$ denote the probability of eventually hitting $j$ starting
from $i$, i.e., $f_{ij} = P_i(T_j < \infty)$.
Proposition 2. If $j$ is recurrent, then the set of probabilities
$\{f_{ij} : i \in T\}$ satisfies
\[
f_{ij} = \sum_{k \in T} P_{ik} f_{kj} + \sum_{k \in R} P_{ik},
\]
where $R$ denotes the set of states communicating with $j$.
Proof.
\begin{align*}
f_{ij} &= \sum_{k} P_i(T_j < \infty, X_1 = k) \\
&= P_i(T_j < \infty, X_1 \in T) + P_i(T_j < \infty, X_1 \in R) \\
&= \sum_{k \in T} P_k(T_j < \infty)\, P_{ik} + \sum_{k \in R} P_{ik} \\
&= \sum_{k \in T} f_{kj}\, P_{ik} + \sum_{k \in R} P_{ik}.
\end{align*}

1.5 Gambler's Ruin Problem
Consider a gambler who, at each play of the game, wins one unit with probability
$p = P(\text{win})$ and loses one unit with probability
$q = P(\text{loss}) = 1 - p$. Assume successive plays of the game are
independent. We are interested in the probability that, starting with $i$ units,
the gambler's fortune reaches $N$ before hitting 0. Let $X_n$ be the player's
fortune at time $n$ and let $P_{ij} = P(X_{n+1} = j \mid X_n = i)$. Then
$\{X_n\}$ is a Markov chain with $P_{00} = 1$, $P_{NN} = 1$, and
$P_{i,i+1} = p = 1 - P_{i,i-1}$ for $i = 1, 2, \ldots, N-1$. The Markov chain has
three classes $\{0\}$, $\{N\}$, and $\{1, 2, \ldots, N-1\}$; the first two are
recurrent and the last one is transient. Let $f_i \equiv f_{iN}$. From the
previous proposition we have $f_i = p f_{i+1} + (1-p) f_{i-1}$, with $f_0 = 0$
and $f_N = 1$. Since $p + q = 1$, we can observe that
$f_{i+1} - f_i = \frac{q}{p}(f_i - f_{i-1})$. Since $f_0 = 0$, we can write a
recursion and observe that
\[
f_i =
\begin{cases}
\dfrac{1 - (q/p)^i}{1 - (q/p)^N} & : p \neq \tfrac{1}{2}, \\[2ex]
\dfrac{i}{N} & : p = \tfrac{1}{2}.
\end{cases}
\]
As $N \to \infty$,
\[
f_i =
\begin{cases}
1 - (q/p)^i & : p > \tfrac{1}{2}, \\[1ex]
0 & : p \le \tfrac{1}{2}.
\end{cases}
\]
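For a concrete illustration with values chosen here as an example, take
$p = 0.6$ (so $q/p = 2/3$), $i = 5$, and $N = 10$:
\[
f_5 = \frac{1 - (2/3)^5}{1 - (2/3)^{10}}
\approx \frac{1 - 0.132}{1 - 0.017} \approx 0.88,
\]
while in the limit $N \to \infty$ the probability that the gambler starting with
5 units is never ruined is $1 - (2/3)^5 \approx 0.87$.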
Consider a finite state Markov chain whose set of transient states is
$T = \{1, \ldots, t\}$. Let $Q$ denote the matrix of transition probabilities
among the transient states. Let $m_{ij}$ denote the expected total number of time
periods spent in state $j$ starting from state $i$. Then
\[
m_{ij} = E\!\left[\sum_{n \ge 0} 1_{\{X_n = j\}} \,\Big|\, X_0 = i\right]
= 1_{\{i = j\}} + \sum_{k \in T} P_{ik}\, m_{kj}.
\]
In matrix form, $[M]_{ij} = [I]_{ij} + [QM]_{ij}$, so $M = (I - Q)^{-1}$. The
matrix inverse exists because all states in $T$ are transient, so $Q^n \to 0$ as
$n \to \infty$.
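As a small numerical sketch, the computation of $M$ for the gambler's ruin chain
with the illustrative values $N = 4$ and $p = 0.4$, so that $T = \{1, 2, 3\}$:

import numpy as np

# Mean time in transient states for the gambler's ruin chain with N = 4, p = 0.4.
p, N = 0.4, 4
T = [1, 2, 3]                             # transient states
Q = np.zeros((len(T), len(T)))            # transition probabilities within T
for a, i in enumerate(T):
    for b, j in enumerate(T):
        if j == i + 1:
            Q[a, b] = p
        elif j == i - 1:
            Q[a, b] = 1 - p
M = np.linalg.inv(np.eye(len(T)) - Q)     # M = (I - Q)^{-1}
print(M)   # M[a, b] = expected periods spent in state T[b] starting from T[a]

Row $i$ of $M$ lists the expected number of periods spent in each transient state
before the chain is absorbed in $\{0\}$ or $\{N\}$.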