
STAC67H: Regression Analysis
Fall, 2014
Instructor: Jabed Tomal
Department of Computer and Mathematical Sciences
University of Toronto Scarborough
Toronto, ON
Canada
October 26, 2014
Matrix Approach to Simple Linear Regression
Fitted Values
Let the vector of the fitted values $\hat{Y}_i$ be denoted by $\hat{\mathbf{Y}}$:

$$\underset{n \times 1}{\hat{\mathbf{Y}}} = \begin{bmatrix} \hat{Y}_1 \\ \hat{Y}_2 \\ \vdots \\ \hat{Y}_n \end{bmatrix} = \begin{bmatrix} b_0 + b_1 X_1 \\ b_0 + b_1 X_2 \\ \vdots \\ b_0 + b_1 X_n \end{bmatrix}$$

In matrix notation, we have:

$$\underset{n \times 1}{\hat{\mathbf{Y}}} = \underset{n \times 2}{\mathbf{X}} \; \underset{2 \times 1}{\mathbf{b}}$$
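As a concrete illustration, here is a minimal NumPy sketch of this computation. The data values below are hypothetical, invented purely for the example:

```python
import numpy as np

# Hypothetical toy data (n = 5), invented for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # predictor observations
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # response observations

# Design matrix X: a column of ones (for b0) next to the predictor (for b1).
X = np.column_stack([np.ones_like(x), x])  # shape (n, 2)

# b = (X'X)^{-1} X'Y, solved via the normal equations.
b = np.linalg.solve(X.T @ X, X.T @ Y)      # shape (2,): [b0, b1]

# Fitted values: Y_hat = X b.
Y_hat = X @ b                              # shape (n,)
print(b, Y_hat)
```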
Hat Matrix
We can express the matrix result for $\hat{\mathbf{Y}}$ by using the expression for $\mathbf{b}$:

$$\hat{\mathbf{Y}} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}$$

or, equivalently:

$$\underset{n \times 1}{\hat{\mathbf{Y}}} = \underset{n \times n}{\mathbf{H}} \; \underset{n \times 1}{\mathbf{Y}}$$

so that the fitted values are linear combinations of the response variable observations $Y_i$, where:

$$\underset{n \times n}{\mathbf{H}} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'$$

which involves only the observations on the predictor variable $X$.
The square $n \times n$ matrix $\mathbf{H}$ is called the hat matrix.

Here, $\mathbf{H}$ is a symmetric matrix, i.e.,

$$\mathbf{H}' = \mathbf{H}.$$

The matrix $\mathbf{H}$ is also idempotent, i.e.,

$$\mathbf{H}\mathbf{H} = \mathbf{H}.$$
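Both properties can be checked numerically; a short sketch, reusing the hypothetical predictor values from the earlier example:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # hypothetical predictor values
X = np.column_stack([np.ones_like(x), x])

# Hat matrix: H = X (X'X)^{-1} X' -- built from the predictor alone.
H = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(H, H.T))    # symmetric:  H' = H  -> True
print(np.allclose(H @ H, H))  # idempotent: HH = H  -> True
```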
Residuals
Let the vector of the residuals $e_i = Y_i - \hat{Y}_i$ be denoted by $\mathbf{e}$:

$$\underset{n \times 1}{\mathbf{e}} = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}$$
In matrix notation, we have:

$$\underset{n \times 1}{\mathbf{e}} = \underset{n \times 1}{\mathbf{Y}} - \underset{n \times 1}{\hat{\mathbf{Y}}} = \underset{n \times 1}{\mathbf{Y}} - \underset{n \times 1}{\mathbf{X}\mathbf{b}}$$
Variance-Covariance Matrix of Residuals
The residuals $e_i$ can be expressed as linear combinations of the response observations $Y_i$:

$$\mathbf{e} = \mathbf{Y} - \hat{\mathbf{Y}} = \mathbf{Y} - \mathbf{H}\mathbf{Y} = (\mathbf{I} - \mathbf{H})\mathbf{Y}$$

We have the following result:

$$\underset{n \times 1}{\mathbf{e}} = \underset{n \times n}{(\mathbf{I} - \mathbf{H})} \; \underset{n \times 1}{\mathbf{Y}}$$
The matrix $(\mathbf{I} - \mathbf{H})$ is symmetric and idempotent.
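The same toy data can confirm both the identity $\mathbf{e} = (\mathbf{I} - \mathbf{H})\mathbf{Y} = \mathbf{Y} - \mathbf{X}\mathbf{b}$ and the two properties of $(\mathbf{I} - \mathbf{H})$; a sketch, again with hypothetical values:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)

H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H                          # the matrix I - H

# e = (I - H)Y agrees with the direct residuals Y - Xb.
b = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.allclose(M @ Y, Y - X @ b))       # True

# I - H is symmetric and idempotent, just like H.
print(np.allclose(M, M.T), np.allclose(M @ M, M))  # True True
```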
The variance-covariance matrix of the vector of residuals $\mathbf{e}$ is

$$\underset{n \times n}{\boldsymbol{\sigma}^2\{\mathbf{e}\}} = \sigma^2 (\mathbf{I} - \mathbf{H})$$

and is estimated by

$$\underset{n \times n}{\mathbf{s}^2\{\mathbf{e}\}} = MSE \times (\mathbf{I} - \mathbf{H})$$
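A sketch of the estimate, using $MSE = SSE/(n-2)$ for simple linear regression and the same hypothetical data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)

H = X @ np.linalg.inv(X.T @ X) @ X.T
e = (np.eye(n) - H) @ Y

MSE = (e @ e) / (n - 2)          # SSE / (n - 2): two estimated parameters

# Estimated variance-covariance matrix of the residuals.
s2_e = MSE * (np.eye(n) - H)
print(np.diag(s2_e))             # estimated variances of each residual
```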
Analysis of Variance
In matrix notation, the total sum of squares is

$$SST = \sum_{i=1}^{n} (Y_i - \bar{Y})^2 = \sum_{i=1}^{n} Y_i^2 - \frac{\left(\sum_{i=1}^{n} Y_i\right)^2}{n} = \mathbf{Y}'\mathbf{Y} - \frac{1}{n}\mathbf{Y}'\mathbf{J}\mathbf{Y}$$

where

$$\underset{n \times n}{\mathbf{J}} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}$$

is the $n \times n$ matrix of ones.
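A quick numerical check that the matrix form agrees with the definitional sum, with hypothetical response values:

```python
import numpy as np

Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # hypothetical response values
n = len(Y)
J = np.ones((n, n))                       # n x n matrix of ones

SST_def = np.sum((Y - Y.mean()) ** 2)     # sum of squared deviations
SST_mat = Y @ Y - (Y @ J @ Y) / n         # Y'Y - (1/n) Y'JY
print(np.allclose(SST_def, SST_mat))      # True
```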
The error sum of squares is

$$SSE = \mathbf{e}'\mathbf{e} = (\mathbf{Y} - \mathbf{X}\mathbf{b})'(\mathbf{Y} - \mathbf{X}\mathbf{b})$$

which simplifies to

$$SSE = \mathbf{Y}'\mathbf{Y} - \mathbf{b}'\mathbf{X}'\mathbf{Y}$$
The regression sum of squares is

$$SSR = SST - SSE = \mathbf{b}'\mathbf{X}'\mathbf{Y} - \frac{1}{n}\mathbf{Y}'\mathbf{J}\mathbf{Y}$$
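A sketch, on the same hypothetical data, computing all three sums of squares in matrix form and confirming the decomposition $SST = SSR + SSE$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)
J = np.ones((n, n))

b = np.linalg.solve(X.T @ X, X.T @ Y)

SST = Y @ Y - (Y @ J @ Y) / n       # Y'Y - (1/n) Y'JY
SSE = Y @ Y - b @ X.T @ Y           # Y'Y - b'X'Y
SSR = b @ X.T @ Y - (Y @ J @ Y) / n

print(np.allclose(SST, SSR + SSE))  # True: the ANOVA decomposition holds
```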
Sum of Squares as Quadratic Forms
A quadratic form is defined as

$$\underset{1 \times 1}{\mathbf{Y}'\mathbf{A}\mathbf{Y}} = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} Y_i Y_j \qquad \text{where } a_{ij} = a_{ji}$$
$\mathbf{A}$ is a symmetric $n \times n$ matrix and is called the matrix of the quadratic form.
Result 1:

$$\hat{\mathbf{Y}}' = (\mathbf{X}\mathbf{b})' = \mathbf{b}'\mathbf{X}'$$

Result 2:

$$\mathbf{b}'\mathbf{X}' = (\mathbf{H}\mathbf{Y})'$$

Result 3:

$$\mathbf{b}'\mathbf{X}' = \mathbf{Y}'\mathbf{H}$$
The sums of squares in terms of quadratic forms are as follows.

Total sum of squares:

$$SST = \mathbf{Y}' \left[ \mathbf{I} - \frac{1}{n}\mathbf{J} \right] \mathbf{Y} = \mathbf{Y}'\mathbf{A}_1\mathbf{Y}$$

Error sum of squares:

$$SSE = \mathbf{Y}' \left[ \mathbf{I} - \mathbf{H} \right] \mathbf{Y} = \mathbf{Y}'\mathbf{A}_2\mathbf{Y}$$

Regression sum of squares:

$$SSR = \mathbf{Y}' \left[ \mathbf{H} - \frac{1}{n}\mathbf{J} \right] \mathbf{Y} = \mathbf{Y}'\mathbf{A}_3\mathbf{Y}$$

The matrices $\mathbf{A}_1$, $\mathbf{A}_2$ and $\mathbf{A}_3$ are symmetric.
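These three quadratic forms can be checked numerically; a sketch with the same hypothetical data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)

I = np.eye(n)
J = np.ones((n, n))
H = X @ np.linalg.inv(X.T @ X) @ X.T

A1, A2, A3 = I - J / n, I - H, H - J / n   # SST, SSE, SSR matrices

# Each matrix of a quadratic form is symmetric ...
for A in (A1, A2, A3):
    assert np.allclose(A, A.T)

# ... and Y'A1Y = Y'A2Y + Y'A3Y mirrors SST = SSE + SSR.
print(np.allclose(Y @ A1 @ Y, Y @ A2 @ Y + Y @ A3 @ Y))  # True
```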
Inferences in Regression Coefficients
The variance-covariance matrix of $\mathbf{b}$:

$$\underset{2 \times 2}{\boldsymbol{\sigma}^2\{\mathbf{b}\}} = \begin{bmatrix} \sigma^2\{b_0\} & \sigma\{b_0, b_1\} \\ \sigma\{b_0, b_1\} & \sigma^2\{b_1\} \end{bmatrix}$$

In short:

$$\underset{2 \times 2}{\boldsymbol{\sigma}^2\{\mathbf{b}\}} = \sigma^2 (\mathbf{X}'\mathbf{X})^{-1}$$
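The sample version replaces $\sigma^2$ with $MSE$; a minimal sketch on the hypothetical data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)

b = np.linalg.solve(X.T @ X, X.T @ Y)
MSE = np.sum((Y - X @ b) ** 2) / (n - 2)

# s^2{b} = MSE (X'X)^{-1}: diagonal holds s^2{b0}, s^2{b1};
# off-diagonal holds the estimated covariance s{b0, b1}.
s2_b = MSE * np.linalg.inv(X.T @ X)
print(s2_b)
```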
The variance-covariance matrix of $\mathbf{b}$:

$$\underset{2 \times 2}{\boldsymbol{\sigma}^2\{\mathbf{b}\}} = \sigma^2 \begin{bmatrix} \dfrac{1}{n} + \dfrac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2} & \dfrac{-\bar{X}}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \\ \dfrac{-\bar{X}}{\sum_{i=1}^{n}(X_i - \bar{X})^2} & \dfrac{1}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \end{bmatrix}$$

The sample variance-covariance matrix of $\mathbf{b}$:

$$\underset{2 \times 2}{\mathbf{s}^2\{\mathbf{b}\}} = MSE \begin{bmatrix} \dfrac{1}{n} + \dfrac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2} & \dfrac{-\bar{X}}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \\ \dfrac{-\bar{X}}{\sum_{i=1}^{n}(X_i - \bar{X})^2} & \dfrac{1}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \end{bmatrix}$$
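The scalar entries above are just the elements of $MSE\,(\mathbf{X}'\mathbf{X})^{-1}$ written out; a sketch verifying that the two forms agree:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)

b = np.linalg.solve(X.T @ X, X.T @ Y)
MSE = np.sum((Y - X @ b) ** 2) / (n - 2)

Sxx = np.sum((x - x.mean()) ** 2)          # sum of (Xi - Xbar)^2
explicit = MSE * np.array([
    [1 / n + x.mean() ** 2 / Sxx, -x.mean() / Sxx],
    [-x.mean() / Sxx,             1 / Sxx],
])

print(np.allclose(explicit, MSE * np.linalg.inv(X.T @ X)))  # True
```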
Inferences in Mean Response
To estimate the mean response at $X_h$, let us define the vector:

$$\underset{2 \times 1}{\mathbf{X}_h} = \begin{bmatrix} 1 \\ X_h \end{bmatrix}$$

The fitted value in matrix notation is

$$\hat{Y}_h = \mathbf{X}_h' \mathbf{b}$$
The variance of $\hat{Y}_h$ in matrix notation is

$$\sigma^2\{\hat{Y}_h\} = \sigma^2 \, \mathbf{X}_h' (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}_h$$

The estimated variance of $\hat{Y}_h$ in matrix notation is

$$s^2\{\hat{Y}_h\} = MSE \; \mathbf{X}_h' (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}_h$$

The variance of $\hat{Y}_h$ can be expressed as a function of $\boldsymbol{\sigma}^2\{\mathbf{b}\}$:

$$\sigma^2\{\hat{Y}_h\} = \mathbf{X}_h' \, \boldsymbol{\sigma}^2\{\mathbf{b}\} \, \mathbf{X}_h$$
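A sketch estimating the mean response and its estimated variance at a hypothetical new level $X_h = 3.5$ (chosen only for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)

b = np.linalg.solve(X.T @ X, X.T @ Y)
MSE = np.sum((Y - X @ b) ** 2) / (n - 2)

Xh = np.array([1.0, 3.5])                  # X_h = [1, X_h]', hypothetical X_h

Yh_hat = Xh @ b                            # Y_hat_h = X_h' b
s2_Yh = MSE * (Xh @ np.linalg.inv(X.T @ X) @ Xh)  # s^2{Y_hat_h}
print(Yh_hat, s2_Yh)
```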
Prediction of New Observation
The estimated variance of the prediction error for a new observation at $X_h$, denoted $s^2\{pred\}$, in matrix notation is

$$s^2\{pred\} = MSE \left[ 1 + \mathbf{X}_h' (\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}_h \right]$$
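Continuing the same sketch: the prediction variance adds $MSE$ once more for the new observation's own error term, so it always exceeds $s^2\{\hat{Y}_h\}$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
X = np.column_stack([np.ones_like(x), x])
n = len(Y)

b = np.linalg.solve(X.T @ X, X.T @ Y)
MSE = np.sum((Y - X @ b) ** 2) / (n - 2)

Xh = np.array([1.0, 3.5])                  # hypothetical X_h = 3.5

# s^2{pred} = MSE [1 + X_h'(X'X)^{-1} X_h]
s2_pred = MSE * (1 + Xh @ np.linalg.inv(X.T @ X) @ Xh)
print(s2_pred)
```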