Math 180C, Spring 2015
Homework 4 Solutions
Ex. 7.1.2. We have
\[
\begin{aligned}
F_2(t) &= \int_0^t F(t-s)\,f(s)\,ds \\
&= \int_0^t \bigl(1 - e^{-\lambda(t-s)}\bigr)\,\lambda e^{-\lambda s}\,ds \\
&= \int_0^t \bigl(\lambda e^{-\lambda s} - \lambda e^{-\lambda t}\bigr)\,ds \\
&= 1 - e^{-\lambda t} - \lambda t e^{-\lambda t}.
\end{aligned}
\]
Therefore,
\[
\begin{aligned}
P[N(t) = 1] = P[W_1 \le t < W_2] &= P[W_1 \le t] - P[W_2 \le t] \\
&= \bigl(1 - e^{-\lambda t}\bigr) - \bigl(1 - e^{-\lambda t} - \lambda t e^{-\lambda t}\bigr) \\
&= \lambda t e^{-\lambda t}.
\end{aligned}
\]
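As a sanity check, here is a small simulation sketch that estimates $P[N(t) = 1]$ by generating the first two arrival times directly; the rate $\lambda = 2$, the time $t = 1.5$, and the trial count are arbitrary illustrative choices, not part of the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, n_trials = 2.0, 1.5, 100_000     # hypothetical rate, time, sample size

hits = 0
for _ in range(n_trials):
    w1 = rng.exponential(1 / lam)        # W1
    if w1 > t:
        continue                         # N(t) = 0 on this path
    w2 = w1 + rng.exponential(1 / lam)   # W2
    if w2 > t:
        hits += 1                        # W1 <= t < W2, i.e. N(t) = 1

print(hits / n_trials)                   # empirical P[N(t) = 1]
print(lam * t * np.exp(-lam * t))        # lambda * t * exp(-lambda * t)
```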
Ex. 7.1.3. (a) True. By taking complements this is the same as: $N(t) \ge k$ if and only if $W_k \le t$.
(b) False. $N(t) \le k$ if and only if $N(t) < k + 1$, which holds if and only if $W_{k+1} > t$. Thus, if $W_k < t < W_{k+1}$ (a distinct possibility), the assertion fails.
(c) False. Take complements in (b).
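The equivalence in (a) can also be checked path by path in a short simulation sketch; the parameters $\lambda$, $t$, $k$ below and the truncation at 20 arrivals are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, k, n_trials = 1.0, 2.0, 3, 50_000    # hypothetical parameters

for _ in range(n_trials):
    # 20 arrivals extend far past t at these parameter values
    arrivals = np.cumsum(rng.exponential(1 / lam, size=20))
    n_t = int(np.searchsorted(arrivals, t, side="right"))   # N(t)
    assert (n_t >= k) == (arrivals[k - 1] <= t)

print("N(t) >= k coincided with W_k <= t on every simulated path")
```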
Pr. 7.1.1. The first equality follows, as in class, because $\delta_t \ge x$ and $\gamma_t > y$ if and only if there are no renewals in the time interval $(t-x, t+y]$. The second equality results from the law of total probability, with $k$ running through the possible values of $N(t-x)$. That is,
\[
P[N(t-x) = N(t+y)] = \sum_{k=0}^{\infty} P[N(t-x) = k = N(t+y)].
\]
The $k$th term in the above sum is
\[
P[N(t-x) = k = N(t+y)] = P[W_k \le t - x,\ W_{k+1} > t + y].
\]
For the third equality, split off the $k = 0$ term by itself, and condition on the value of $W_k$ in each of the remaining terms. Then, for $k \ge 1$,
\[
\begin{aligned}
P[W_k \le t - x,\ W_{k+1} > t + y] &= \int_0^{t-x} P[W_{k+1} - W_k > t + y - s]\,f_k(s)\,ds \\
&= \int_0^{t-x} [1 - F(t+y-s)]\,f_k(s)\,ds.
\end{aligned}
\]
When the inter-occurrence times are exponential, the density $f_k$ is the gamma density, as noted in the text. In this case we can sum the series:
\[
\begin{aligned}
\sum_{k=1}^{\infty} \int_0^{t-x} [1 - F(t+y-s)]\,f_k(s)\,ds
&= \int_0^{t-x} [1 - F(t+y-s)] \left[ \sum_{k=1}^{\infty} \frac{\lambda^k s^{k-1}}{(k-1)!}\,e^{-\lambda s} \right] ds \\
&= \int_0^{t-x} [1 - F(t+y-s)]\,\lambda e^{-\lambda s} \left[ \sum_{j=0}^{\infty} \frac{\lambda^j s^j}{j!} \right] ds \\
&= \int_0^{t-x} [1 - F(t+y-s)]\,\lambda e^{-\lambda s} e^{\lambda s}\,ds \\
&= \int_0^{t-x} \lambda e^{-\lambda(t+y-s)}\,ds \\
&= e^{-\lambda(x+y)} - e^{-\lambda(t+y)}.
\end{aligned}
\]
Adding this to the $k = 0$ term (which equals $e^{-\lambda(t+y)}$, since $N(t+y) = 0$ means $W_1 > t + y$) we obtain
\[
P[\delta_t \ge x,\ \gamma_t > y] = e^{-\lambda(x+y)},
\]
provided $t > x$.
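For the Poisson case, this conclusion can be checked by simulation. The sketch below tracks the last renewal at or before $t$ and the first one after $t$, then compares the empirical frequency of $\{\delta_t \ge x,\ \gamma_t > y\}$ with $e^{-\lambda(x+y)}$; all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, x, y, n_trials = 1.0, 5.0, 0.7, 1.2, 100_000   # illustrative values

hits = 0
for _ in range(n_trials):
    last, nxt = 0.0, rng.exponential(1 / lam)
    while nxt <= t:                      # advance to the first renewal past t
        last, nxt = nxt, nxt + rng.exponential(1 / lam)
    # delta_t = t - last (age), gamma_t = nxt - t (residual life)
    if t - last >= x and nxt - t > y:
        hits += 1

print(hits / n_trials)                   # empirical P[delta_t >= x, gamma_t > y]
print(np.exp(-lam * (x + y)))            # e^{-lambda (x + y)}
```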
Pr. 7.1.3. Because $\gamma_t = W_{N(t)+1} - t$,
\[
E[\gamma_t] = E[W_{N(t)+1}] - t = \mu\,[M(t) + 1] - t,
\]
where the second equality uses the identity $E[W_{N(t)+1}] = E[X_1]\,E[N(t) + 1]$ (cf. Pr. 7.3.2 below).
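The identity holds for general inter-occurrence distributions, so it can be checked numerically in a non-Poisson example. The sketch below assumes Uniform$(0, 2)$ inter-occurrence times (so $\mu = 1$) and an arbitrary horizon $t = 10$, and estimates both $E[\gamma_t]$ and $\mu[M(t) + 1] - t$.

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_trials = 10.0, 100_000              # illustrative horizon and sample size
mu = 1.0                                 # mean of Uniform(0, 2)

gammas = np.empty(n_trials)
counts = np.empty(n_trials)
for i in range(n_trials):
    n = 0
    nxt = rng.uniform(0.0, 2.0)          # W1
    while nxt <= t:                      # advance past t, counting renewals
        n += 1
        nxt += rng.uniform(0.0, 2.0)
    gammas[i] = nxt - t                  # gamma_t = W_{N(t)+1} - t
    counts[i] = n                        # N(t)

print(gammas.mean())                     # estimate of E[gamma_t]
print(mu * (counts.mean() + 1) - t)      # mu * (M(t) + 1) - t
```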
Ex. 7.2.2. The separate components A and B of the system behave like independent Markov chains $X_A(t)$ and $X_B(t)$ with state space $\{0, 1\}$ and infinitesimal matrices
\[
A_A = \begin{pmatrix} 1 - \alpha_A & \alpha_A \\ \beta_A & 1 - \beta_A \end{pmatrix}
\quad\text{and}\quad
A_B = \begin{pmatrix} 1 - \alpha_B & \alpha_B \\ \beta_B & 1 - \beta_B \end{pmatrix}.
\]
The pair $X(t) = (X_A(t), X_B(t))$ is then a Markov chain with state space
\[
\{(0,0),\ (0,1),\ (1,0),\ (1,1)\},
\]
and the system renews itself when $X(t)$ moves from one of the states $(1,0)$, $(0,1)$ to the state $(0,0)$. By the continuous-time analog of example (f) on page 354, the times of return to state $(0,0)$ form a renewal process.
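To make the product construction concrete, here is a minimal sketch assuming $A_A$ and $A_B$ are read as one-step transition matrices of the two components; by independence, the matrix of the pair on the ordered state space $\{(0,0), (0,1), (1,0), (1,1)\}$ is the Kronecker product $A_A \otimes A_B$. The numerical $\alpha$ and $\beta$ values are hypothetical.

```python
import numpy as np

aA, bA, aB, bB = 0.3, 0.4, 0.2, 0.5   # hypothetical failure/repair parameters

AA = np.array([[1 - aA, aA],
               [bA, 1 - bA]])
AB = np.array([[1 - aB, aB],
               [bB, 1 - bB]])

# States ordered (0,0), (0,1), (1,0), (1,1): by independence, entry
# ((i,j), (k,l)) of the joint matrix equals AA[i,k] * AB[j,l].
A_joint = np.kron(AA, AB)
print(A_joint)
print(A_joint.sum(axis=1))            # each row sums to 1
```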
Ex. 7.3.1. Since $W_{N(t)+1}$ is the time of the first renewal after time $t$,
\[
P[W_{N(t)+1} > t + s] = P[N(t, t+s] = 0] = e^{-\lambda s}.
\]
In particular,
\[
E[W_{N(t)+1} - t] = \int_0^{\infty} P[W_{N(t)+1} - t > s]\,ds = \int_0^{\infty} P[W_{N(t)+1} > t + s]\,ds = \int_0^{\infty} e^{-\lambda s}\,ds = \frac{1}{\lambda}.
\]
Therefore,
\[
E[W_{N(t)+1}] = t + \frac{1}{\lambda}.
\]
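A quick simulation sketch of this mean (the rate, time, and sample size below are arbitrary): advance the renewal epochs until the first one past $t$, then average.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t, n_trials = 2.0, 3.0, 100_000     # arbitrary rate, time, sample size

total = 0.0
for _ in range(n_trials):
    w = 0.0
    while w <= t:
        w += rng.exponential(1 / lam)    # step to the next renewal epoch
    total += w                           # w is now W_{N(t)+1}

print(total / n_trials)                  # estimate of E[W_{N(t)+1}]
print(t + 1 / lam)                       # t + 1/lambda
```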
Ex. 7.3.3. Notice that
\[
\{N(t) = n,\ W_{N(t)+1} > t + s\} = \{N(t) = n,\ W_{n+1} > t + s\} = \{N(t) = n,\ N(t, t+s] = 0\}.
\]
Therefore
\[
\begin{aligned}
P[N(t) = n,\ W_{N(t)+1} > t + s] &= P[N(t) = n,\ N(t, t+s] = 0] \\
&= P[N(t) = n] \cdot P[N(t, t+s] = 0] \\
&= P[N(t) = n]\,e^{-\lambda s}.
\end{aligned}
\]
The second equality in this calculation follows because, for a Poisson process, the counts in disjoint intervals are independent random variables. We know from Exercise 7.3.1 that
\[
P[W_{N(t)+1} > t + s] = e^{-\lambda s}.
\]
Combining this with the earlier calculation, we find that
\[
P[N(t) = n,\ W_{N(t)+1} > t + s] = P[N(t) = n] \cdot P[W_{N(t)+1} > t + s]
\]
for all $n = 0, 1, 2, \dots$ and all $s > 0$. This proves that $N(t)$ and $W_{N(t)+1}$ are independent random variables.
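The factorization can be observed empirically. The sketch below estimates both sides of the last display for one arbitrary choice of $\lambda$, $t$, $s$, and $n$.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, t, s, n, n_trials = 1.0, 4.0, 0.8, 3, 200_000   # illustrative choices

both = count_n = 0
for _ in range(n_trials):
    w, k = 0.0, -1
    while w <= t:
        w += rng.exponential(1 / lam)
        k += 1
    # on exit, k = N(t) and w = W_{N(t)+1}
    if k == n:
        count_n += 1
        if w > t + s:
            both += 1

print(both / n_trials)                           # P[N(t) = n, W_{N(t)+1} > t+s]
print(count_n / n_trials * np.exp(-lam * s))     # P[N(t) = n] * e^{-lambda s}
```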
Pr. 7.3.2. We found in Exercise 7.3.1 that the left side of the stated identity is equal to $t + \lambda^{-1}$. Meanwhile, on the right side, we have
\[
E[X_1] = \lambda^{-1} \quad\text{and}\quad E[N(t) + 1] = \lambda t + 1,
\]
because $N(t)$ is a Poisson process of rate $\lambda$. Thus the right side is equal to
\[
\lambda^{-1} \cdot [\lambda t + 1] = t + \lambda^{-1},
\]
which confirms the identity.
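The same arithmetic as a brief numeric check (with an arbitrary rate and time):

```python
# Both sides of the identity E[W_{N(t)+1}] = E[X1] * E[N(t) + 1],
# evaluated for an arbitrary rate and time.
lam, t = 2.0, 3.0
left = t + 1 / lam                   # E[W_{N(t)+1}], from Ex. 7.3.1
right = (1 / lam) * (lam * t + 1)    # E[X1] * E[N(t) + 1]
print(left, right)                   # both print 3.5
```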