Midterm Study Guide

MAT 3379 - Winter 2015
Introduction to Time Series Analysis
Study Guide for Midterm
THIS IS THE FINAL VERSION !!!
1 Topics
1. Evaluate covariance function in simple models - see Q1 in Assignment 1;
2. Check if a process is causal and stationary - see Q4 in Assignment 1;
3. Derive linear representation and/or compute autocovariance for AR(1), AR(2), ARMA(1,q), ARMA(2,q) - Q5 in Assignment 1; Examples 4.5, 4.6, 4.11, 4.12 in Lecture Notes; IMPORTANT: "Derive" means that you have to start with the definition of the model and end up with the final formula, cf. Example 4.12 in Lecture Notes;
4. ARMA model identification from graphs of ACF and PACF - see Q6 in Assignment 1;
5. Derivation of the best linear predictor for AR(1) and AR(2) using the Yule-Walker procedure. Calculation of MSPE - Q1 in Assignment 2, Example 5.1 in Lecture Notes; IMPORTANT: "Derive" means that you have to start with the definition of the model and end up with the final formula, as I did in Section 5.1 in Lecture Notes in a general case;
6. Find the best linear predictor for AR(1) and AR(2) using the Yule-Walker procedure. IMPORTANT: "Find" means that you can use Eqs. (9) and (10) in Lecture Notes, as I did in Example 5.1;
7. Find the best linear predictor for MA(1), MA(2), ARMA(1,1) and ARMA(1,2) models using the Yule-Walker procedure (for small n) - Q3, Q4 in Assignment 2; IMPORTANT: "Find" means that you can use Eqs. (9) and (10);
8. Simple calculations with the Durbin-Levinson algorithm. PACF - Q2, Q3 in Assignment 2. Example 5.3 in Lecture Notes.
9. Derive the formulas for Yule-Walker estimators in AR(1), AR(2) and ARMA(1,1) models. IMPORTANT: "Derive" means that you have to start with the definition of the model and end up with the final formula, as I did in Section 6.2 in Lecture Notes - Q7b) in Assignment 2.
10. Practical question on estimation/prediction: given output, estimate parameters of AR(1) or AR(2) models and do prediction - Q8, Assignment 2. You do not need to derive the formulas, you can use them.
2 Details

2.1 Derive linear representation

2.1.1 AR(1)
[Example 4.5 in Lecture Notes]
AR(1) is defined by

φ(B) X_t = Z_t,    (1)

where φ(z) = 1 − φz. Define

χ(z) = 1/φ(z).

Now, the function 1/(1 − φz) has the following power series expansion:

χ(z) = 1/φ(z) = Σ_{j=0}^∞ φ^j z^j.
This expansion makes sense whenever |φ| < 1. Take equation (1) and multiply both sides by χ(B):

χ(B) φ(B) X_t = χ(B) Z_t,
X_t = χ(B) Z_t,

since χ(z) φ(z) = 1 for all z. That is,

X_t = χ(B) Z_t = Σ_{j=0}^∞ φ^j B^j Z_t = Σ_{j=0}^∞ φ^j Z_{t−j}.

The above formula gives a linear representation for AR(1) with ψ_j = φ^j. We note that the above computation makes sense whenever |φ| < 1.
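If you want to double-check this on a computer, the ψ_j can be read off numerically as the impulse response of the AR(1) recursion. A minimal Python sketch (the value of φ is hypothetical, for illustration only):

import numpy as np

# Feed a unit impulse Z_0 = 1 (and Z_t = 0 otherwise) through the recursion
# X_t = phi * X_{t-1} + Z_t; the recorded outputs are exactly the weights psi_j.
phi, J = 0.6, 10          # hypothetical phi with |phi| < 1
x, psi = 0.0, []
for j in range(J):
    x = phi * x + (1.0 if j == 0 else 0.0)
    psi.append(x)
print(np.allclose(psi, [phi**j for j in range(J)]))  # True: psi_j = phi^j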
2.1.2 ARMA(1,1)
[Example 4.6 in Lecture Notes]
ARMA(1,1) is defined by

φ(B) X_t = θ(B) Z_t,    (2)

where φ(z) = 1 − φz, θ(z) = 1 + θz. Define

χ(z) = 1/φ(z) = Σ_{j=0}^∞ φ^j z^j.
Take equation (2) and multiply both sides by χ(B):

χ(B) φ(B) X_t = χ(B) θ(B) Z_t,
X_t = χ(B) θ(B) Z_t,

since χ(z) φ(z) = 1 for all z. That is,

X_t = χ(B) θ(B) Z_t = Σ_{j=0}^∞ φ^j B^j (1 + θB) Z_t = Σ_{j=0}^∞ φ^j Z_{t−j} + θ Σ_{j=0}^∞ φ^j Z_{t−j−1}.
Until now everything was almost the same as for AR(1). Now, we want X_t to have the form Σ_{j=0}^∞ ψ_j Z_{t−j}. That is,

Σ_{j=0}^∞ ψ_j Z_{t−j} = Σ_{j=0}^∞ φ^j Z_{t−j} + θ Σ_{j=0}^∞ φ^j Z_{t−j−1}.
Re-write it as

ψ_0 Z_t + Σ_{j=1}^∞ ψ_j Z_{t−j} = φ⁰ Z_t + Σ_{j=1}^∞ (φ^j + θ φ^{j−1}) Z_{t−j}.

We can identify coefficients as

ψ_0 = 1,    ψ_j = φ^{j−1}(θ + φ),    j ≥ 1.
The above formula gives a linear representation for ARMA(1,1). The formula is obtained under the condition that |φ| < 1. Furthermore, it is also assumed that θ + φ ≠ 0, otherwise X_t = Z_t.
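The same impulse-response check as for AR(1) confirms these coefficients; a minimal sketch with hypothetical φ and θ:

import numpy as np

# X_t = phi * X_{t-1} + Z_t + theta * Z_{t-1}, driven by a unit impulse at t = 0.
phi, theta, J = 0.6, 0.4, 10   # hypothetical parameters, |phi| < 1
z = [1.0] + [0.0] * J
x, psi = 0.0, []
for j in range(J):
    x = phi * x + z[j] + theta * (z[j - 1] if j >= 1 else 0.0)
    psi.append(x)
expected = [1.0] + [phi**(j - 1) * (theta + phi) for j in range(1, J)]
print(np.allclose(psi, expected))  # True: psi_0 = 1, psi_j = phi^(j-1)(theta + phi)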
2.1.3 ARMA(1,2)
[Assignment 1, Q5c)]
We derive a linear representation for ARMA(1,2). You may follow the steps of Example 4.6 in the Lecture Notes.
ARMA(1,2) is given by

φ(B) X_t = θ(B) Z_t,    (3)

where φ(z) = 1 − φz, θ(z) = 1 + θ_1 z + θ_2 z². Define

χ(z) = 1/φ(z) = Σ_{j=0}^∞ φ^j z^j.
Take equation (3) and multiply both sides by χ(B):

χ(B) φ(B) X_t = χ(B) θ(B) Z_t,
X_t = χ(B) θ(B) Z_t,

since χ(z) φ(z) = 1 for all z. That is,

X_t = χ(B) θ(B) Z_t = Σ_{j=0}^∞ φ^j B^j (1 + θ_1 B + θ_2 B²) Z_t = Σ_{j=0}^∞ φ^j Z_{t−j} + θ_1 Σ_{j=0}^∞ φ^j Z_{t−j−1} + θ_2 Σ_{j=0}^∞ φ^j Z_{t−j−2}.

Now, we want X_t to have the form Σ_{j=0}^∞ ψ_j Z_{t−j}. That is,

Σ_{j=0}^∞ ψ_j Z_{t−j} = Σ_{j=0}^∞ φ^j Z_{t−j} + θ_1 Σ_{j=0}^∞ φ^j Z_{t−j−1} + θ_2 Σ_{j=0}^∞ φ^j Z_{t−j−2}.
Re-write it as

ψ_0 Z_t + ψ_1 Z_{t−1} + Σ_{j=2}^∞ ψ_j Z_{t−j} = φ⁰ Z_t + (φ + θ_1) Z_{t−1} + Σ_{j=2}^∞ (φ^j + θ_1 φ^{j−1} + θ_2 φ^{j−2}) Z_{t−j}.

We can identify coefficients as

ψ_0 = 1,    ψ_1 = φ + θ_1,    ψ_j = φ^{j−2}(φ² + θ_1 φ + θ_2),    j ≥ 2.    (4)

(Footnote: there was a typo in the solution posted on the webpage.)
The above formula gives a linear representation for ARMA(1,2). The formula is obtained under the condition that |φ| < 1.
2.1.4 ARMA(1,q)
The same idea as above.
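Matching coefficients exactly as above, for ARMA(1,q) one gets ψ_j = Σ_{i=0}^{min(j,q)} θ_i φ^{j−i} (with θ_0 = 1), so that ψ_j = φ^{j−q}(φ^q + θ_1 φ^{q−1} + · · · + θ_q) for j ≥ q. A minimal Python sketch (the function name and parameter values are mine, for illustration); it reproduces the ARMA(1,2) coefficients of equation (4):

import numpy as np

def psi_weights_arma1q(phi, thetas, J):
    """psi_j for ARMA(1,q): X_t = phi X_{t-1} + Z_t + theta_1 Z_{t-1} + ... + theta_q Z_{t-q}.
    Matching coefficients in chi(z) * theta(z) gives
    psi_j = sum_{i=0}^{min(j, q)} theta_i * phi^(j - i), with theta_0 = 1."""
    th = [1.0] + list(thetas)
    return [sum(th[i] * phi**(j - i) for i in range(min(j, len(th) - 1) + 1))
            for j in range(J)]

phi, (t1, t2) = 0.6, (0.4, 0.2)   # hypothetical parameters
psi = psi_weights_arma1q(phi, [t1, t2], 8)
expected = [1.0, phi + t1] + [phi**(j - 2) * (phi**2 + t1 * phi + t2) for j in range(2, 8)]
print(np.allclose(psi, expected))  # True: matches equation (4)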
2.2 Derive autocovariance function for ARMA model

2.2.1 AR(1)
Using the linear representation: [Example 4.10 in Lecture Notes] We use the representation X_t = Σ_{j=0}^∞ φ^j Z_{t−j} and the general formula γ_X(h) = σ_Z² Σ_{j=0}^∞ ψ_j ψ_{j+h} to obtain

γ_X(h) = σ_Z² φ^h / (1 − φ²).
Using the recursive method: [Example 4.12 in Lecture Notes] Take the AR(1) equation X_t = φ X_{t−1} + Z_t. Multiply both sides by X_{t−h} and apply the expected value to get

E[X_t X_{t−h}] = φ E[X_{t−1} X_{t−h}] + E[Z_t X_{t−h}].

Since E[X_t] = 0, we have E[X_t X_{t−h}] = γ_X(h) and E[X_{t−1} X_{t−h}] = γ_X(h − 1). Also, for all h ≥ 1 we can see that Z_t is independent of X_{t−h} = Σ_{j=0}^∞ φ^j Z_{t−h−j}. Hence,

E[Z_t X_{t−h}] = E[Z_t] E[X_{t−h}] = 0.

(This is the whole trick; if you multiply by X_{t+h} instead, it will not work.)
Hence, we obtain

γ_X(h) = φ γ_X(h − 1),

or by induction

γ_X(h) = φ^h γ_X(0),    h ≥ 1.

(Footnote: there was a typo in the Lecture Notes.)

We need to start the recursion by computing γ_X(0) = Var(X_t) = σ_X². We have
Var(X_t) = φ² Var(X_{t−1}) + Var(Z_t)

(again, X_{t−1} and Z_t are independent). Since X_t is stationary we get

σ_X² = φ² σ_X² + σ_Z².

Solving for σ_X²:

σ_X² = σ_Z² / (1 − φ²).
Finally,

γ_X(h) = φ^h σ_Z² / (1 − φ²),    h ≥ 0.
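To see the formula in action, you can compare it with the sample autocovariances of a long simulated AR(1) path; a minimal sketch with hypothetical φ and σ_Z:

import numpy as np

# Sample autocovariances of a simulated AR(1) path versus phi^h sigma^2/(1 - phi^2).
rng = np.random.default_rng(0)
phi, sigma, n = 0.6, 1.0, 200_000   # hypothetical parameters
z = rng.normal(0.0, sigma, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + z[t]
for h in range(4):
    sample = np.mean(x[: n - h] * x[h:])          # crude sample autocovariance
    theory = phi**h * sigma**2 / (1 - phi**2)
    print(h, round(sample, 3), round(theory, 3))  # the two columns agree closely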
2.2.2 AR(2)
Using the linear representation: We did not derive the linear representation for this model.
Using the recursive method: [Assignment 1, Q5a)] Take the AR(2) equation X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + Z_t. Multiply both sides by X_{t−h} and apply the expected value to get

E[X_t X_{t−h}] = φ_1 E[X_{t−1} X_{t−h}] + φ_2 E[X_{t−2} X_{t−h}] + E[Z_t X_{t−h}].

Since E[X_t] = 0, we have E[X_t X_{t−h}] = γ_X(h), E[X_{t−1} X_{t−h}] = γ_X(h − 1) and E[X_{t−2} X_{t−h}] = γ_X(h − 2).
Also, for all h ≥ 1, Z_t is independent of X_{t−h}. Hence, for h ≥ 1,

E[Z_t X_{t−h}] = E[Z_t] E[X_{t−h}] = 0.

Hence, we obtain

γ_X(h) = φ_1 γ_X(h − 1) + φ_2 γ_X(h − 2).    (5)
2
We need to start the recursion by computing γX (0) = Var(Xt ) = σX
and
γX (1). To get γX (1) we use again AR(2) equation, multiply by Xt−1 and apply
expectation to get
2
E[Xt Xt−1 ] = φ1 E[Xt−1
] + φ2 E[Xt−2 Xt−1 ] + E[Zt Xt−1 ]
| {z }
=0
so that
γX (1) = φ1 γX (0) + φ2 γX (1),
and
γX (1)
(6)
1 − φ2
= γX (0).
φ1
(7)
Now, we need to get γ_X(0). (Note that this part is done in a slightly different way than in the solution for Assignment 1.) Take the AR(2) equation, multiply by X_t and apply expectation to get

E[X_t²] = φ_1 E[X_{t−1} X_t] + φ_2 E[X_{t−2} X_t] + E[Z_t X_t].
Now,

E[Z_t X_t] = E[Z_t (φ_1 X_{t−1} + φ_2 X_{t−2} + Z_t)] = σ_Z².

Hence,

γ_X(0) = φ_1 γ_X(1) + φ_2 γ_X(2) + σ_Z².    (8)
We already know (equation (5) with h = 2) that

γ_X(2) = φ_1 γ_X(1) + φ_2 γ_X(0).

We plug this expression into (8) to get

γ_X(0) = φ_1 γ_X(1) + φ_2 {φ_1 γ_X(1) + φ_2 γ_X(0)} + σ_Z².    (9)
Solving (7) and (9) we obtain

γ_X(h) = φ_1 γ_X(h − 1) + φ_2 γ_X(h − 2),    h ≥ 2,

γ_X(1) = σ_Z² φ_1 / [(1 + φ_2){(1 − φ_2)² − φ_1²}],

γ_X(0) = σ_Z² (1 − φ_2) / [(1 + φ_2){(1 − φ_2)² − φ_1²}].

Note: to check that the last two equations make sense, take φ_2 = 0, φ_1 = φ. Then AR(2) reduces to AR(1) and the last two formulas should reduce to γ_X(0) and γ_X(1) for AR(1).
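A quick algebraic check of the two closed-form expressions (with hypothetical causal parameters): plugged back in, they must satisfy equations (6) and (8).

import numpy as np

phi1, phi2, s2 = 0.5, 0.3, 1.0   # hypothetical parameters, inside the causality region
denom = (1 + phi2) * ((1 - phi2)**2 - phi1**2)
g0 = s2 * (1 - phi2) / denom
g1 = s2 * phi1 / denom
g2 = phi1 * g1 + phi2 * g0                          # recursion (5) with h = 2
print(np.isclose(g1, phi1 * g0 + phi2 * g1))        # equation (6): True
print(np.isclose(g0, phi1 * g1 + phi2 * g2 + s2))   # equation (8): True
# Setting phi2 = 0, phi1 = phi recovers the AR(1) formulas, as noted above.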
2.2.3 MA(q)
Trivial: the MA(q) process is already a (finite) linear process, so γ_X(h) = σ_Z² Σ_{j=0}^{q−h} θ_j θ_{j+h} for 0 ≤ h ≤ q (with θ_0 = 1), and γ_X(h) = 0 for h > q.
2.2.4 ARMA(1,1) and ARMA(1,q)
Using the linear representation: For ARMA(1,1) (and in general for ARMA(1,q)) use the linear representation together with the general formula for the covariance of a linear process. Specifically, since ψ_0 = 1, ψ_j = φ^{j−1}(φ + θ) for j ≥ 1, we have

γ_X(0) = σ_Z² Σ_{j=0}^∞ ψ_j² = σ_Z² ψ_0² + σ_Z² Σ_{j=1}^∞ ψ_j² = σ_Z² [1 + (θ + φ)² Σ_{j=1}^∞ φ^{2(j−1)}] = σ_Z² [1 + (θ + φ)²/(1 − φ²)].

Similarly,

γ_X(1) = σ_Z² [(θ + φ) + φ (θ + φ)²/(1 − φ²)].

You can also obtain similar formulas for γ_X(h). You can also notice that γ_X(h) = φ^{h−1} γ_X(1) for h ≥ 1. That is,

γ_X(h) = σ_Z² φ^{h−1} [(θ + φ) + φ (θ + φ)²/(1 − φ²)].
Using the recursive method: You take the defining equation

X_t = φ X_{t−1} + Z_t + θ Z_{t−1},

multiply both sides by X_{t−h} and then try to find a recursive equation, similar to AR(1) or AR(2).
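The two closed-form expressions above can be checked against truncated versions of the general sums γ_X(h) = σ_Z² Σ_j ψ_j ψ_{j+h}; a minimal sketch with hypothetical parameters:

import numpy as np

phi, theta, s2, J = 0.6, 0.4, 1.0, 500   # hypothetical parameters; J-term truncation
psi = np.array([1.0] + [phi**(j - 1) * (phi + theta) for j in range(1, J)])
g0_sum = s2 * np.sum(psi * psi)            # truncated sum for gamma_X(0)
g1_sum = s2 * np.sum(psi[:-1] * psi[1:])   # truncated sum for gamma_X(1)
g0 = s2 * (1 + (theta + phi)**2 / (1 - phi**2))
g1 = s2 * ((theta + phi) + phi * (theta + phi)**2 / (1 - phi**2))
print(np.isclose(g0, g0_sum), np.isclose(g1, g1_sum))  # True True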
2.3 The best linear predictor for AR(1) and AR(2) using the Yule-Walker procedure

Yule-Walker equation:

Γ_n a_n = γ(n; k),    (10)

or, equivalently,

a_n = Γ_n^{−1} γ(n; k).    (11)

Formula for MSPE_n(k):

MSPE_n(k) = γ_X(0) − a_n^T γ(n; k).
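Equations (10)-(11) translate directly into a few lines of code. Below is a minimal Python sketch (the function name is mine); it assumes the conventions used here, namely Γ_n = (γ_X(|i − j|)), γ(n; k) = (γ_X(k), . . . , γ_X(k + n − 1))^T, and P_n X_{n+k} = a_1 X_n + · · · + a_n X_1. It is reused in the numerical checks below.

import numpy as np

def best_linear_predictor(gamma, n, k):
    """Solve the Yule-Walker system Gamma_n a_n = gamma(n; k) and return
    (a_n, MSPE_n(k)), where gamma is the autocovariance function h -> gamma_X(h)."""
    Gamma = np.array([[gamma(abs(i - j)) for j in range(n)] for i in range(n)])
    g = np.array([gamma(k + i) for i in range(n)])  # (gamma_X(k), ..., gamma_X(k+n-1))
    a = np.linalg.solve(Gamma, g)                   # a_n = Gamma_n^{-1} gamma(n; k)
    return a, gamma(0) - a @ g                      # MSPE_n(k) = gamma_X(0) - a_n^T gamma(n; k)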
2.3.1 Find P_n X_{n+1} for AR(1)
[Example 5.1 in Lecture Notes]
The AR(1) model is given by X_t = φ X_{t−1} + Z_t, where the Z_t are i.i.d. with mean zero and variance σ_Z², and |φ| < 1. Hence, µ = E[X_t] = 0. Recall that

γ_X(h) = φ^h σ_Z² / (1 − φ²),    h ≥ 0.
Then

γ(n; k) = γ(n; 1) = (γ_X(1), . . . , γ_X(n))^T = σ_Z²/(1 − φ²) · (φ, . . . , φ^n)^T.

The equation (10) becomes

σ_Z²/(1 − φ²) ·
[ 1         φ         φ²        φ³        · · ·   φ^{n−1} ]   [ a_1 ]                      [ φ   ]
[ φ         1         φ         φ²        · · ·   φ^{n−2} ]   [ a_2 ]                      [ φ²  ]
[ ⋮         ⋮         ⋮         ⋮         ⋱       ⋮       ] · [  ⋮  ]  =  σ_Z²/(1 − φ²) ·  [  ⋮  ]    (12)
[ φ^{n−1}   φ^{n−2}   φ^{n−3}   φ^{n−4}   · · ·   1       ]   [ a_n ]                      [ φ^n ]
Now, either you invert the matrix on the left hand side or you "guess" the solution a_n = (φ, 0, . . . , 0)^T. You have to verify that the guessed solution solves (12). Hence, in the AR(1) case the prediction is

P_n X_{n+1} = φ X_n.
2.3.2 Find P_n X_{n+2} for AR(1)
Now, we try to guess P_n X_{n+2}. If we happened to have observations X_1, . . . , X_{n+1}, then the prediction of X_{n+2} would be φ X_{n+1}. However, we have only n observations, so in the latter formula we have to "predict" X_{n+1}. The prediction of X_{n+1} has the form φ X_n. Hence, we may guess that P_n X_{n+2} = φ(φ X_n) = φ² X_n. You have to verify that this is the correct guess; see the numerical check below.
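Using the best_linear_predictor sketch from Section 2.3 together with the AR(1) autocovariance (hypothetical φ and σ_Z²), both guesses can be confirmed numerically:

phi, sigma2 = 0.6, 1.0   # hypothetical parameters
gamma_ar1 = lambda h: sigma2 * phi**abs(h) / (1 - phi**2)

a1, _ = best_linear_predictor(gamma_ar1, n=5, k=1)
a2, _ = best_linear_predictor(gamma_ar1, n=5, k=2)
print(np.round(a1, 6))   # [0.6  0. 0. 0. 0.]  ->  P_n X_{n+1} = phi   * X_n
print(np.round(a2, 6))   # [0.36 0. 0. 0. 0.]  ->  P_n X_{n+2} = phi^2 * X_n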
2.3.3 Find P_n X_{n+1} for AR(2)
[Assignment 2, Q1b)] The AR(2) model is X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + Z_t. Hence, we may guess that the one-step prediction for AR(2) has the form P_n X_{n+1} = φ_1 X_n + φ_2 X_{n−1}, that is, (a_1, a_2, . . . , a_n) = (φ_1, φ_2, 0, . . . , 0). We verify it by checking the validity of the Yule-Walker equation for one-step prediction:

[ γ_X(0)     γ_X(1)     γ_X(2)     γ_X(3)     · · ·   γ_X(n−1) ]   [ a_1 ]     [ γ_X(1) ]
[ γ_X(1)     γ_X(0)     γ_X(1)     γ_X(2)     · · ·   γ_X(n−2) ]   [ a_2 ]     [ γ_X(2) ]
[ ⋮          ⋮          ⋮          ⋮          ⋱       ⋮        ] · [  ⋮  ]  =  [   ⋮    ]
[ γ_X(n−1)   γ_X(n−2)   γ_X(n−3)   γ_X(n−4)   · · ·   γ_X(0)   ]   [ a_n ]     [ γ_X(n) ]
We have to check whether our choice is correct. For the first and the second row on the left hand side we get, respectively,

φ_1 γ_X(0) + φ_2 γ_X(1) = γ_X(1);
φ_1 γ_X(1) + φ_2 γ_X(0) = γ_X(2).

Looking at Assignment 1, Question 4a), we can recognize the first equation to be (6), while the second one is just the recursive formula (5) with h = 2; these are exactly formulas that are valid for the covariances of the AR(2) model. You can check that all remaining rows on the left hand side reduce to the recursive formula for AR(2). That is, we verified that our guess was correct.
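The same numerical verification works for AR(2), with hypothetical causal parameters and γ_X built from the formulas of Section 2.2.2:

phi1, phi2, sigma2 = 0.5, 0.3, 1.0   # hypothetical, inside the causality region
denom = (1 + phi2) * ((1 - phi2)**2 - phi1**2)
g0, g1 = sigma2 * (1 - phi2) / denom, sigma2 * phi1 / denom

def gamma_ar2(h):
    h = abs(h)
    if h == 0: return g0
    if h == 1: return g1
    return phi1 * gamma_ar2(h - 1) + phi2 * gamma_ar2(h - 2)   # recursion (5)

a, _ = best_linear_predictor(gamma_ar2, n=6, k=1)
print(np.round(a, 6))   # [0.5 0.3 0. 0. 0. 0.]  ->  P_n X_{n+1} = phi1*X_n + phi2*X_{n-1}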
2.3.4 MSPE_n(1) for AR(1)
[Lecture Notes, p. 15]

MSPE_n(1) = γ_X(0) − a_n^T γ(n; 1) = γ_X(0) − φ γ_X(1) = σ_Z²/(1 − φ²) − φ² σ_Z²/(1 − φ²) = σ_Z².
MSPEn (2) for AR(1)
[Assignment 2, Q1a)]
MSPEn (2) = γX (0) − aTn γ(n; 2) =
2
2
σZ
σZ
σ 2 φ4
σ2
− φ2 γX (2) =
− Z 2 = (1 − φ4 ) Z 2 .
2
2
1−φ
1−φ
1−φ
1−φ
2.3.6 MSPE_n(1) for AR(2)

MSPE_n(1) = γ_X(0) − a_n^T γ(n; 1) = γ_X(0) − (φ_1, φ_2)(γ_X(1), γ_X(2))^T
= γ_X(0) − φ_1 γ_X(1) − φ_2 γ_X(2) = γ_X(0) − φ_1 γ_X(1) − φ_2 (φ_1 γ_X(1) + φ_2 γ_X(0)),

where in the last line I used the recursive formula for AR(2). You can leave it as it is, or you can plug in the messy expressions for AR(2).
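Reusing best_linear_predictor and the hypothetical gamma_ar1, gamma_ar2 (and their parameters) from the snippets above, all three MSPE formulas can be confirmed numerically:

_, m1 = best_linear_predictor(gamma_ar1, n=5, k=1)
_, m2 = best_linear_predictor(gamma_ar1, n=5, k=2)
_, m3 = best_linear_predictor(gamma_ar2, n=6, k=1)
print(np.isclose(m1, sigma2))                                           # MSPE_n(1) = sigma_Z^2
print(np.isclose(m2, (1 - phi**4) * sigma2 / (1 - phi**2)))             # MSPE_n(2) for AR(1)
print(np.isclose(m3, g0 - phi1 * g1 - phi2 * (phi1 * g1 + phi2 * g0)))  # MSPE_n(1) for AR(2)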
2.4 Durbin-Levinson algorithm for AR(p)

2.4.1 AR(1)
Find φ_11, φ_22. From Theorem 5.2 in Lecture Notes we have:

φ_11 = ρ_X(1) = γ_X(1)/γ_X(0) = φ.
Furthermore,

φ_22 = [γ_X(2) − φ_11 γ_X(1)] / v_1 = [γ_X(2) − φ γ_X(1)] / v_1.    (13)

We note that for the AR(1) model we have γ_X(2) = φ γ_X(1), hence φ_22 = 0.
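The recursion of Theorem 5.2 is easy to implement; here is a minimal Python sketch (the function name is mine) that returns φ_11, φ_22, . . . from an autocovariance function, together with the AR(1) check φ_22 = 0:

import numpy as np

def pacf_durbin_levinson(gamma, max_lag):
    """Durbin-Levinson recursion (Theorem 5.2): returns [phi_11, ..., phi_kk]
    computed from the autocovariance function gamma(h)."""
    phi = np.zeros((max_lag + 1, max_lag + 1))   # phi[k, j] stores phi_{kj}
    v = gamma(0)                                 # v_0 = gamma_X(0)
    out = []
    for k in range(1, max_lag + 1):
        num = gamma(k) - sum(phi[k - 1, j] * gamma(k - j) for j in range(1, k))
        phi[k, k] = num / v
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        v *= 1 - phi[k, k] ** 2                  # v_k = v_{k-1} (1 - phi_kk^2)
        out.append(phi[k, k])
    return out

# AR(1) with hypothetical phi: the PACF should be (phi, 0, 0, ...).
phi0, s2 = 0.6, 1.0
print(np.round(pacf_durbin_levinson(lambda h: s2 * phi0**h / (1 - phi0**2), 3), 6))
# -> [0.6 0. 0.]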
2.4.2 AR(2)
Find φ_11, φ_22, φ_33 [Assignment 2, Q2].
From Theorem 5.2 in Lecture Notes we have:

φ_11 = ρ_X(1) = γ_X(1)/γ_X(0) = φ_1/(1 − φ_2).
Furthermore,

φ_22 = [γ_X(2) − φ_11 γ_X(1)] / v_1.    (14)

The formulas for the covariances of AR(2) are

φ_1 γ_X(h − 1) + φ_2 γ_X(h − 2) = γ_X(h),    h ≥ 2,    (15)

γ_X(1) = γ_X(0) φ_1/(1 − φ_2).
Use the above formulas (the first one with h = 2) and replace γ_X(2) in (14) to get

φ_22 = [φ_1 γ_X(1) + φ_2 γ_X(0) − φ_11 γ_X(1)] / v_1    (16)
     = γ_X(0) [φ_1²/(1 − φ_2) + φ_2 − φ_1²/(1 − φ_2)²] / v_1.    (17)
Now, from Theorem 5.2,

v_1 = v_0 (1 − φ_11²) = γ_X(0) [1 − φ_1²/(1 − φ_2)²].    (18)

If you combine (16)-(18) together you will get

φ_22 = φ_2.    (19)
(19)
Now, for φ33 : recall that this value represents partial autocovariance at lag 3.
It was mentioned in class that for AR(2), PACF vanishes after lag 2. Hence, we
should get φ33 = 0. We will verify it. From Theorem 5.2 we get
φ33 = [γX (3) − φ21 γX (2) − φ22 γX (1)] /v2−1 .
Use (15) with h = 3 to get
φ33 = [(φ1 − φ21 )γX (2) − (φ2 − φ22 )γX (1)] /v2−1 .
Keeping in mind (19), in order to show that φ33 = 0 it is enough to show that
φ21 = φ1 . We use again Theorem 5.2:
φ21 = φ11 − φ22 φ11 = φ11 (1 − φ2 2) =
That is, φ33 = 0.
φ1
(1 − φ2 ) = φ1 .
1 − φ2
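Numerically, with the pacf_durbin_levinson sketch from Section 2.4.1 and the hypothetical gamma_ar2 from Section 2.3.3:

print(np.round(pacf_durbin_levinson(gamma_ar2, 4), 6))
# -> [0.714286 0.3 0. 0.]  ->  phi_11 = phi1/(1 - phi2), phi_22 = phi2, phi_33 = 0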