
Math 5652: Introduction to Stochastic Processes
Homework 2 solutions
(1) (20 points) Let Z1 , Z2 , Z3 , ... be independent, identically distributed Bernoulli(p) random variables: that is, P(Zi = 1) = p and P(Zi = 0) = 1 − p. Set S0 = 0 and
Sn = Z1 + Z2 + . . . + Zn . In each of the following cases, determine whether (Xn )n≥0
is a Markov chain. If it is, find the state space and transition probability matrix, and
determine which states are recurrent and which are transient. If it isn’t, give an example
where
P(Xn+1 = j|Xn = i, Xn−1 = k)
is not independent of k. (In all examples that aren’t Markov chains, you can find such
an example with just a two-state history.)
(a) Xn = Zn
(b) Xn = Sn
(c) Xn = S0 + . . . + Sn
(d) Xn = (Sn , S0 + . . . + Sn )
Solution:
(a) This is a Markov chain with two states 0 and 1,
P(Xn+1 = 1|Xn = i, Xn−1 = xn−1 , . . . , X0 = x0 ) = p
P(Xn+1 = 0|Xn = i, Xn−1 = xn−1 , . . . , X0 = x0 ) = 1 − p.
Because the transition probabilities do not depend on the values of x0 , . . . , xn−1 , this
is a Markov chain; because they don’t depend on n, it’s time-homogeneous. (They
happen to also not depend on i; this is ok.) The transition probability matrix is
\[
p = \begin{array}{c|cc}
    & 0   & 1 \\ \hline
  0 & 1-p & p \\
  1 & 1-p & p
\end{array}
\]
Both states are recurrent, unless p = 0 (then 0 is recurrent and 1 is transient) or
p = 1 (then 0 is transient and 1 is recurrent).
(b) This is a Markov chain with state space N = {0, 1, 2, 3, . . . }. The transition probabilities are
\[
P(X_{n+1} = j \mid X_n = i, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0)
= P(Z_{n+1} = j - i \mid \text{something about } Z_1, \ldots, Z_n)
= \begin{cases} p, & j - i = 1 \\ 1-p, & j - i = 0 \\ 0, & \text{otherwise.} \end{cases}
\]
Because the transition probability doesn’t depend on xn−1 , . . . , x0 , this is a Markov
chain. Because the transition probability doesn’t depend on n, it’s time-homogeneous.
The transition probability matrix is
\[
p = \begin{array}{c|cccccc}
        & 0      & 1   & 2   & 3   & 4      & \cdots \\ \hline
  0     & 1-p    & p   & 0   & 0   & 0      & \cdots \\
  1     & 0      & 1-p & p   & 0   & 0      & \cdots \\
  2     & 0      & 0   & 1-p & p   & 0      & \cdots \\
  3     & 0      & 0   & 0   & 1-p & p      & \cdots \\
  4     & 0      & 0   & 0   & 0   & 1-p    & \cdots \\
  \vdots& \vdots &     &     &     &        & \ddots
\end{array}
\]
All states are transient, because from state n you can get to state n + 1, from which
it is impossible to ever go back to n. (Unless p = 0, in which case you always stay
in the state you started in, so every state is absorbing, hence recurrent.)
(c) This is not a Markov chain. Here’s an example of where the transition probability
depends on history:
\[
P(X_{17} = 10 \mid X_{16} = 9, X_{15} = 8)
= P(S_{17} = 1 \mid \underbrace{S_1 + \ldots + S_{16} = 9,\; S_1 + \ldots + S_{15} = 8}_{\text{so } S_{16} = 1})
= P(Z_{17} = 0 \mid \text{history}) = 1 - p
\]
but
\[
P(X_{17} = 10 \mid X_{16} = 9, X_{15} = 7)
= P(S_{17} = 1 \mid \underbrace{S_1 + \ldots + S_{16} = 9,\; S_1 + \ldots + S_{15} = 7}_{\text{so } S_{16} = 2})
= 0,
\]
because S17 must be at least as big as S16.
(If p = 1 so that 1 − p = 0, this will actually be a Markov chain. It will also be a
Markov chain if p = 0.)
Of course, various other counterexamples are possible here. The main thing to
realize is that to translate the statement Xn+1 = j, Xn = i into a statement about
the random variable Zn+1 , I actually need to know more of the history of the process:
I need to know Xn−1 as well.
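If you'd like to see the counterexample numerically, here is a quick Monte Carlo sketch; the choices p = 0.3, the seed, and the sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3            # any p strictly between 0 and 1 exhibits the effect
n_sims = 10**6

# Simulate Z_1, ..., Z_17; form S_n = Z_1 + ... + Z_n and X_n = S_1 + ... + S_n.
Z = rng.random((n_sims, 17)) < p
S = np.cumsum(Z, axis=1, dtype=np.int32)   # S[:, k-1] holds S_k
X = np.cumsum(S, axis=1, dtype=np.int32)   # X[:, k-1] holds X_k (S_0 = 0 adds nothing)

X15, X16, X17 = X[:, 14], X[:, 15], X[:, 16]

A = (X16 == 9) & (X15 == 8)     # histories with S_16 = 1
B = (X16 == 9) & (X15 == 7)     # histories with S_16 = 2
print((X17[A] == 10).mean())    # close to 1 - p = 0.7
print((X17[B] == 10).mean())    # exactly 0: on B, X_17 >= X_16 + S_16 = 11
```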
(d) This process is a Markov chain. The transition probabilities are
\[
P(X_{n+1} = (c, d) \mid X_n = (a, b), X_{n-1} = x_{n-1}, \ldots, X_0 = x_0)
= P(Z_{n+1} = c - a \text{ and } d = b + c \mid \text{something about } Z_1, \ldots, Z_n)
= \begin{cases} p, & c = a + 1 \text{ and } d = b + c = b + a + 1 \\ 1-p, & c = a \text{ and } d = b + c = b + a \\ 0, & \text{otherwise.} \end{cases}
\]
Equivalently, from state (a, b) I can go to state (a, b + a) with probability 1 − p or
to state (a + 1, b + a + 1) with probability p.
It’s a little tricky to coax this into a transition probability matrix, because
states are indexed by pairs of numbers; but here’s one way of doing it. I’m listing
states with sum of the two numbers equal to 0, then to 1, then to 2, then to 3, and
so on.
\[
p = \begin{array}{c|ccccccccccc}
        & (0,0) & (1,0) & (0,1) & (2,0) & (1,1) & (0,2) & (3,0) & (2,1) & (1,2) & (0,3) & \cdots \\ \hline
 (0,0)  & 1-p   & 0     & 0     & 0     & p     & 0     & 0     & 0     & 0     & 0     & \cdots \\
 (1,0)  & 0     & 0     & 0     & 0     & 1-p   & 0     & 0     & 0     & 0     & 0     & \cdots \\
 (0,1)  & 0     & 0     & 1-p   & 0     & 0     & 0     & 0     & 0     & p     & 0     & \cdots \\
 \vdots & \vdots&       &       &       &       &       &       &       &       &       & \ddots
\end{array}
\]
(In the (1, 0) row, the entry p sits in the column of state (2, 2), which lies beyond the columns displayed.)

All states are transient, since from (a, b) I can go to (a + 1, a + b + 1) and then the
first coordinate will never decrease again.
Note that there are multiple state spaces possible here: I could look at N² = all
pairs of nonnegative integers, or {(a, b) : b ≥ a} (since the second component is
always at least as great as the first), or I could try to classify all the states I can
actually get to from state 0. It’s fine to have extra states, no harm comes of it, so
I’ve just taken the state space to be N².
(2) (10 points) In the small, one-player version of the game “Snakes and ladders” illustrated
below, the turn starts with a player tossing a coin. If it lands heads, the player advances
2 spots. If it lands tails, the player advances 1 spot. If the player finds himself on the
bottom square of a ladder, he climbs to the top square of the ladder, and the turn ends.
If the player finds himself on the top square of a snake, he slides to the bottom square of
the snake, and the turn ends. The game ends when the player reaches the square marked
“Finish”. Compute the transition probability matrix for this game. Find the probability
that the player finishes the game in at most four turns. (This computation is best done
on the computer. Report your answer either exactly, or with 4 decimal places.)
[Board: a 3 × 3 track read boustrophedon, with bottom row 1, 2, 3, middle row 6, 5, 4, and top row 7, 8, 9, and with “Finish” past square 9. Ladders lead from 2 up to 7 and from 3 up to 4; snakes lead from 6 down to 1 and from 8 down to 5.]
Solution: I am assuming the player starts in square 1. (Some people have assumed that
the player starts off the board; this is fine, but your calculation will be slightly different.)
Naively, the state space would have 9 states, but we can reduce it to 5 by noticing that
some squares are impossible to end up at: the player can’t end a turn at the foot of the
a ladder, or at the head of a snake. Thus, the transition matrix is actually
\[
p = \begin{array}{c|ccccc}
    & 1   & 4   & 5   & 7   & 9 \\ \hline
  1 & 0   & 1/2 & 0   & 1/2 & 0 \\
  4 & 1/2 & 0   & 1/2 & 0   & 0 \\
  5 & 1/2 & 0   & 0   & 1/2 & 0 \\
  7 & 0   & 0   & 1/2 & 0   & 1/2 \\
  9 & 0   & 0   & 0   & 0   & 1
\end{array}
\]
State 9 is an absorbing state: once you enter it, you will never leave.
The probability of finishing in at most 4 turns is the probability that by turn 4 the
player has reached state 9. Since once you enter it, you never leave, this is also the
probability of being in state 9 at time 4 (if you’ve arrived earlier, you’ll still be there).
So the probability we want is
\[
p^4(1, 9) = \frac{7}{16} = 0.4375.
\]
(As it turns out, the answer to four decimal places is exact.)
If you decided to start your player off the board (so that on the first turn she can land
either on 1 or on 7), your answer (in terms of the probability matrix above) should be
\[
\frac{1}{2}\, p^3(1, 9) + \frac{1}{2}\, p^3(7, 9) = \frac{7}{16},
\]
so actually the same.
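Here is one possible numpy computation (a sketch; the states are ordered 1, 4, 5, 7, 9 as in the matrix above):

```python
import numpy as np

# Transition matrix on the reduced state space, ordered 1, 4, 5, 7, 9.
p = np.array([
    [0,   1/2, 0,   1/2, 0  ],   # from 1: to 4 (via 3's ladder) or 7 (via 2's ladder)
    [1/2, 0,   1/2, 0,   0  ],   # from 4: to 5, or to 1 via the snake at 6
    [1/2, 0,   0,   1/2, 0  ],   # from 5: to 1 via the snake at 6, or to 7
    [0,   0,   1/2, 0,   1/2],   # from 7: to 5 via the snake at 8, or to 9
    [0,   0,   0,   0,   1  ],   # 9 is absorbing
])

p4 = np.linalg.matrix_power(p, 4)
print(p4[0, 4])                          # p^4(1, 9) = 0.4375

p3 = np.linalg.matrix_power(p, 3)
print(0.5 * p3[0, 4] + 0.5 * p3[3, 4])   # off-board start: also 0.4375
```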
(3) (10 points) Consider the following model of cell splitting and death. At each time n, each
living cell in the population either splits into two cells with probability p, or dies with
probability 1 − p. The splitting of different cells is independent. Let Zn be the number
of living cells at time n. Assume Z0 = 2. Prove that (Zn )n≥0 is a time-homogeneous
Markov chain, determine its state space and transition probability matrix, and find the
stationary distribution. What can you say about the recurrence or transience of the
states?
Solution: Start by noticing that we begin with an even number of cells, and that after
dying or splitting we’ll keep having an even number of cells (since any dead cell turns
into 0 cells and any splitting cell turns into 2, both even numbers). Consequently, my
state space will be the even numbers: {0, 2, 4, 6, . . . }.
Let’s compute the transition probabilities:
\[
P(Z_{n+1} = 2k \mid Z_n = 2l, Z_{n-1} = z_{n-1}, \ldots, Z_0 = z_0)
= P(k \text{ cells split, } l - k \text{ cells died} \mid \text{history})
= \binom{l}{k} p^k (1-p)^{l-k}
\]
if 0 ≤ k ≤ l, and otherwise the probability is 0.
Since the transition probabilities don’t depend on the history, this is a time-homogeneous
Markov chain.
In matrix form,
\[
p = \begin{array}{c|cccccc}
        & 0       & 2          & 4             & 6             & 8             & \cdots \\ \hline
  0     & 1       & 0          & 0             & 0             & 0             & \cdots \\
  2     & (1-p)^2 & 2p(1-p)    & p^2           & 0             & 0             & \cdots \\
  4     & (1-p)^4 & 4p(1-p)^3  & 6p^2(1-p)^2   & 4p^3(1-p)     & p^4           & \cdots \\
  6     & (1-p)^6 & 6p(1-p)^5  & 15p^2(1-p)^4  & 20p^3(1-p)^3  & 15p^4(1-p)^2  & \cdots \\
  \vdots& \vdots  &            &               &               &               & \ddots
\end{array}
\]
Notice that all the states other than 0 are transient, because it is possible to go from
them to 0, and from 0 you can never return. The stationary distribution for a transient
state is always 0, so for the stationary distribution we have
π(x) = 0 if x ≠ 0.
Since the sum of all values of π should be 1, we get π(0) = 1, and we can check that this
is stationary: if you start in state 0, you stay there.
This is an example of a chain where the long-term behaviour might not be described
too well by the stationary distribution, because it’s possible for the cell population to
explode and then never die out. In particular, the long-term proportion of time spent
in state 0 might not be equal to 1. The reason this is happening is that on an infinite
state space, we need to check recurrence of the Markov chain before concluding that π
describes the long-term proportion of time spent in a state.
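To see this concretely, here is a small simulation sketch; the choices p = 0.6, the 50-generation horizon, and the number of runs are arbitrary. For p > 1/2 the population survives forever with positive probability, so the fraction of runs that end up stuck at 0 is strictly less than 1.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.6                  # supercritical: mean offspring 2p > 1
n_runs = 10**4

extinct = 0
for _ in range(n_runs):
    z = 2                            # Z_0 = 2
    for _ in range(50):              # 50 generations is enough to tell
        if z == 0:
            break
        z = 2 * rng.binomial(z, p)   # each cell splits (2 children) or dies (0)
    extinct += (z == 0)

# Prints roughly 4/9, the branching-process extinction probability
# ((1 - p) / p)**2 for Z_0 = 2: strictly less than 1.
print(extinct / n_runs)
```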
(4) (10 points) The NiceRide network works as follows: you can pick up a bike (for a small
fee) at different locations, and drop it off at any of the other locations. Let’s make a
small model of the network, with one stop at the University (U), one stop at the Guthrie
theater (G), and one stop at Fort Snelling (F). A long-term study has determined that
bikes move according to the following Markov chain:
\[
p = \begin{array}{c|ccc}
    & U   & G   & F \\ \hline
  U & 0.5 & 0.2 & 0.3 \\
  G & 0.8 & 0.1 & 0.1 \\
  F & 0.3 & 0.1 & 0.6
\end{array}
\]
On Sunday, the proportions of bikes in each of the three places are the same. What are
the fractions of bikes in each of the locations
(a) On Tuesday?
(b) The following Sunday?
(c) If the company doesn’t move the bikes around, what will be the long-run fractions
of bikes in the three locations?
Solution: We are given the initial distribution of the bikes as (1/3, 1/3, 1/3). Thus, the
answers are
(a)
\[
(1/3, 1/3, 1/3)\, p^2 \approx (0.473, 0.153, 0.373)
\]
(b)
\[
(1/3, 1/3, 1/3)\, p^7 \approx (0.46672, 0.14668, 0.38659)
\]
(c) This is an irreducible, aperiodic finite-state Markov chain, so
\[
\lim_{n \to \infty} (1/3, 1/3, 1/3)\, p^n
= (1/3, 1/3, 1/3) \begin{pmatrix} \pi(U) & \pi(G) & \pi(F) \\ \pi(U) & \pi(G) & \pi(F) \\ \pi(U) & \pi(G) & \pi(F) \end{pmatrix}
= (\pi(U), \pi(G), \pi(F)).
\]
The stationary distribution is
\[
\pi = \left( \frac{35}{75}, \frac{11}{75}, \frac{29}{75} \right) \approx (0.467, 0.147, 0.387).
\]
You could find it on a computer as the eigenvector of the transition matrix, or by
solving the system of equations
\[
\begin{aligned}
\pi(U) &= 0.5\,\pi(U) + 0.8\,\pi(G) + 0.3\,\pi(F) \\
\pi(G) &= 0.2\,\pi(U) + 0.1\,\pi(G) + 0.1\,\pi(F) \\
\pi(U) &+ \pi(G) + \pi(F) = 1.
\end{aligned}
\]
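For reference, here is one way to do all three computations in numpy (a sketch; the eigenvector route applies `np.linalg.eig` to the transpose, since we want a left eigenvector):

```python
import numpy as np

p = np.array([[0.5, 0.2, 0.3],    # U
              [0.8, 0.1, 0.1],    # G
              [0.3, 0.1, 0.6]])   # F
mu0 = np.ones(3) / 3              # equal proportions on Sunday

print(mu0 @ np.linalg.matrix_power(p, 2))   # (a) Tuesday
print(mu0 @ np.linalg.matrix_power(p, 7))   # (b) next Sunday

# (c) stationary distribution: left eigenvector of p for eigenvalue 1
w, v = np.linalg.eig(p.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi)                                   # (35/75, 11/75, 29/75)
```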
(5) (10 points) For a brand, “consumer loyalty” is defined as the probability that a consumer
who bought a product of the given brand will buy it again the next time. (Only the
most recent purchase counts.) Suppose brands A and B have consumer loyalties of 0.7
and 0.8 respectively. What is the limiting market share for the two brands? That is,
what is the long-range proportion of consumers who buy brand A and of those who buy
brand B?
Suppose a third brand C enters the market, and has consumer loyalty 0.9. Suppose also
that whenever a consumer switches brands, they pick one of the other brands at random.
What is the new limiting market share for the three products?
Solution: Let’s model the purchases as a Markov chain with two states, A and B (or
with three states A, B, C). The transition probabilities are
\[
p = \begin{array}{c|cc}
    & A   & B \\ \hline
  A & 0.7 & 0.3 \\
  B & 0.2 & 0.8
\end{array}
\qquad \text{or} \qquad
p = \begin{array}{c|ccc}
    & A    & B    & C \\ \hline
  A & 0.7  & 0.15 & 0.15 \\
  B & 0.1  & 0.8  & 0.1 \\
  C & 0.05 & 0.05 & 0.9
\end{array}
\]
The limiting market share of the product is the long-term proportion of time people buy
it. Since our Markov chain is irreducible, this is equal to the stationary distribution
associated to the state, so the limiting market shares are
\[
(\pi(A), \pi(B)) = \left( \frac{2}{5}, \frac{3}{5} \right) = (0.4, 0.6)
\quad \text{or} \quad
(\pi(A), \pi(B), \pi(C)) = \left( \frac{2}{11}, \frac{3}{11}, \frac{6}{11} \right) \approx (0.182, 0.273, 0.545).
\]
Product C has hurt the first two products quite a lot: their market shares dropped by a
factor of more than 2.
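Here is a small numpy sketch that recovers both stationary distributions by solving πp = π together with the normalization constraint:

```python
import numpy as np

def stationary(p):
    """Solve pi @ p = pi with sum(pi) = 1 by swapping one redundant
    balance equation for the normalization constraint."""
    n = p.shape[0]
    A = np.vstack([(p.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1
    return np.linalg.solve(A, b)

p2 = np.array([[0.7, 0.3],
               [0.2, 0.8]])
p3 = np.array([[0.70, 0.15, 0.15],
               [0.10, 0.80, 0.10],
               [0.05, 0.05, 0.90]])
print(stationary(p2))   # [0.4 0.6]
print(stationary(p3))   # (2/11, 3/11, 6/11) ~ [0.1818 0.2727 0.5455]
```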
(6) (10 points) A fair die is thrown repeatedly. Let Xn be the sum of the first n rolls (with
X0 = 0). Find
\[
\lim_{n \to \infty} P(X_n \text{ is a multiple of } 5).
\]
Can you still find the answer if “multiple of 5” is replaced by “multiple of 13”?
Solution: We will model the problem with a five-state Markov chain. One of the states
will be “multiples of 5”. The other states will be the other remainders that are possible
when you divide by 5. That is, we will have states
0 = {0, 5, 10, 15, 20, . . . }
1 = {1, 6, 11, 16, 21, . . . }
2 = {2, 7, 12, 17, 22, . . . }
3 = {3, 8, 13, 18, 23, . . . }
4 = {4, 9, 14, 19, 24, . . . }
The transition probability matrix is
0
0 1
1 1
1 
1
p = 2
6 3
1
4 2
1
2
1
1
1
1
2
1
2
1
1
1
3
1
1
2
1
1
4
1
1

1

2
1
For example, to go from state 2 to state 3 you need to roll 1 or 6, so that has probability
2/6. To go from 2 to 0 (for example, 2 to 5) you need to roll a 3, which has probability
1/6.
This is an irreducible, aperiodic Markov chain, so the limiting probability is equal to
π(0). By solving systems of equations, or by computer, or by a lucky guess, the stationary
distribution here is uniform:
\[
\pi = \frac{1}{5}(1, 1, 1, 1, 1).
\]
We can check this: we know the stationary distribution is unique, so it’s enough to check
that πp = π. Notice that
\[
\begin{aligned}
(\pi \cdot p)_1 &= \tfrac{1}{5} \cdot (\text{sum of the numbers in the first column}) = \tfrac{1}{5} \\
(\pi \cdot p)_2 &= \tfrac{1}{5} \cdot (\text{sum of the numbers in the second column}) = \tfrac{1}{5} \\
(\pi \cdot p)_3 &= \tfrac{1}{5} \cdot (\text{sum of the numbers in the third column}) = \tfrac{1}{5} \\
& \;\;\vdots
\end{aligned}
\]
so this works because the sum of the numbers in each column of p is equal to 1. (This
means that p is a doubly stochastic matrix.) Consequently,
\[
\lim_{n \to \infty} P(X_n \text{ is a multiple of } 5) = \pi(0) = \frac{1}{5}.
\]
With multiples of 13, the situation is very similar. The Markov chain now has 13 states
(remainders when you divide by 13), and the transition probabilities are 1/6 to each of
the next 6 states (wrapping around as necessary). Notice that for every state n, you
can transition into n from 6 other states: n − 1, n − 2, ..., n − 6 (wrapping around as
necessary). Consequently, the transition probability matrix will again have columns that
add up to 1, and the uniform distribution will be stationary. The Markov chain is still
irreducible and aperiodic: you can’t get from a state to itself in 1 turn any longer, but
you can do it in 3 turns (6+6+1, say) or in 4 turns (4+4+4+1), so the period of every
state is equal to 1. Consequently,
\[
\lim_{n \to \infty} P(X_n \text{ is a multiple of } 13) = \pi(0 \text{ out of the 13 states}) = \frac{1}{13}.
\]
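Both limits are easy to confirm numerically; here is a minimal sketch, taking n = 200 as a stand-in for the limit:

```python
import numpy as np

def mod_chain(m):
    """Transition matrix of (sum of fair die rolls) mod m."""
    p = np.zeros((m, m))
    for i in range(m):
        for roll in range(1, 7):
            p[i, (i + roll) % m] += 1 / 6
    return p

for m in (5, 13):
    pn = np.linalg.matrix_power(mod_chain(m), 200)  # large n
    print(m, pn[0, 0], 1 / m)                       # the two agree
```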
(7) (10 points) A professor has N umbrellas, which she keeps either at home or in her office.
She travels between home and office, and on each of the journeys, it rains with probability
p and is sunny with probability 1 − p, independently of the past state of the weather.
She takes an umbrella with her only if it is raining when she sets off (and, of course, only
if there is one in the place she’s starting from). In the long run, what is the proportion
of trips on which the professor gets wet?
Hint: your state space should keep track not only of the number of umbrellas at one of
the locations, but also of where the professor is starting from; so you’ll have 2(N + 1)
states.
Solution: I know of three different ways of keeping track of the states in this system:
• Xn is the number of umbrellas in the place where the professor is starting from at
time n. You transition from k umbrellas to N − k if it’s sunny (or if k = 0), or to
N − k + 1 if it’s raining (and k ≠ 0). There are only N + 1 states.
• Xn keeps track of both the location where the professor is, and the number of
umbrellas there. You transition from, e.g., k umbrellas at home to N − k umbrellas
at work (if sunny) or to N − k + 1 umbrellas at work (if raining).
• Xn keeps track of both the location where the professor is, and the number of
umbrellas she has at home. You transition from, e.g., “professor is at home with
k umbrellas” to “professor is at work, and number of umbrellas at home is k” (if
sunny) or “professor is at work, and number of umbrellas at home is k − 1” (if
raining).
I’ll deal with the second scenario in detail, although they are all perfectly good ways of
dealing with the problem (and should give the same answer).
Let’s call the states H0 , H1 , . . . , HN and W0 , W1 , . . . , WN . Here H and W stand for
“home” and “work” (the location the professor is setting off from), and the subscript
gives the number of umbrellas in that location when she’s setting off. The transition
diagram for the chain looks like this:
[Transition diagram: H_0 goes to W_N with probability 1; for k ≥ 1, H_k goes to W_{N−k} with probability 1 − p and to W_{N−k+1} with probability p. Symmetrically, W_0 goes to H_N with probability 1, and for k ≥ 1, W_k goes to H_{N−k} with probability 1 − p and to H_{N−k+1} with probability p.]
There isn’t a huge amount of point to writing out the transitions as a matrix, so I won’t.
To find the long-term proportion of time the professor gets wet, notice that this happens
when she’s setting off from H0 or W0 , and it happens to be raining. Since weather is
independent of the number of umbrellas, we need to find
proportion of time wet = (proportion of time in H0 or W0 ) × (p = probability of rain).
Since our Markov chain is irreducible and finite-state, the proportion of time spent in a
state is the stationary probability of that state:
proportion of time wet = (π(H0 ) + π(W0 )) × p.
It remains to find the stationary distribution π. We’ll do it by trying to solve the detailed
balance equations, π(x)p(x, y) = π(y)p(y, x):
\[
\begin{aligned}
\pi(H_0) \cdot 1 &= \pi(W_N) \cdot (1-p) \\
\pi(W_N) \cdot p &= \pi(H_1) \cdot p && \implies \pi(W_N) = \pi(H_1) \\
\pi(H_1) \cdot (1-p) &= \pi(W_{N-1}) \cdot (1-p) && \implies \pi(H_1) = \pi(W_{N-1}) \\
& \;\;\vdots \\
\pi(W_1) \cdot p &= \pi(H_N) \cdot p && \implies \pi(W_1) = \pi(H_N) \\
\pi(H_N) \cdot (1-p) &= \pi(W_0) \cdot 1
\end{aligned}
\]
Staring at these, we see that
π(H1 ) = π(H2 ) = . . . = π(HN ) = π(WN ) = π(WN −1 ) = . . . = π(W1 ).
Let’s call this common value x. In addition,
π(H0 ) = x · (1 − p) = π(W0 ).
Because the invariant distribution must add up to 1, we know
\[
x(2N + 2(1-p)) = 1 \implies x = \frac{1}{2N + 2(1-p)} \implies \pi(H_0) = \pi(W_0) = \frac{1-p}{2N + 2(1-p)},
\]
and finally,
\[
\text{proportion of time wet} = (\pi(H_0) + \pi(W_0)) \times p = \frac{p(1-p)}{N + 1 - p}.
\]
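As a numerical sanity check, here is a numpy sketch that builds the 2(N + 1)-state chain from the description above and compares it against the formula; the values N = 3 and p = 0.4 are arbitrary choices.

```python
import numpy as np

def wet_probability(N, p):
    """Long-run chance of a wet trip for the 2(N+1)-state umbrella chain,
    with states ordered H_0, ..., H_N, W_0, ..., W_N."""
    H = lambda k: k
    W = lambda k: N + 1 + k
    n = 2 * (N + 1)
    P = np.zeros((n, n))
    for here, there in ((H, W), (W, H)):
        P[here(0), there(N)] = 1                # no umbrella: set off regardless
        for k in range(1, N + 1):
            P[here(k), there(N - k)] = 1 - p        # sunny: leave all k behind
            P[here(k), there(N - k + 1)] = p        # rainy: carry one along
    # stationary distribution as the left eigenvector for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    return (pi[H(0)] + pi[W(0)]) * p

N, p = 3, 0.4
print(wet_probability(N, p))        # 0.0666...
print(p * (1 - p) / (N + 1 - p))    # matches the formula above
```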