
MA1101R
Linear Algebra I
AY 2010/2011 Sem 1
NATIONAL UNIVERSITY OF SINGAPORE
MATHEMATICS SOCIETY
PAST YEAR PAPER SOLUTIONS
with credits to Luo Xuan and Zhang Manning
MA1101R Linear Algebra I
AY 2010/2011 Sem 1
Question 1
(a) Proof:
First we show that span(S1) ⊆ span(S2).
Since u = (u − 2v) + 2(v − 2w) + 4w, we have u ∈ span(S2); similarly, v ∈ span(S2) and w ∈ span(S2). This implies that span(S1) ⊆ span(S2).
On the other hand, each of u − 2v, v − 2w and w clearly belongs to span(S1), hence span(S2) ⊆ span(S1).
Therefore, we conclude that span(S1) = span(S2).
(b) (i) T2 is linearly independent: suppose a(u + 2v) + b(v + 2w) + c(u + w) = 0. Then (a + c)u + (2a + b)v + (2b + c)w = 0. Since u, v and w are linearly independent, we get a + c = 0, 2a + b = 0 and 2b + c = 0, which forces a = b = c = 0. Hence T2 is linearly independent.
(ii) Consider the matrix

    [ 1  2  0 ]
    [ 0  1  2 ]
    [ 1  0  1 ]

whose rows are the coordinate vectors of the vectors of T2 relative to {u, v, w}. Its determinant is 5 ≠ 0, so the matrix is invertible and u, v, w can each be written as linear combinations of the vectors of T2; hence span(T1) ⊆ span(T2).
It is easy to see that span(T2) ⊆ span(T1).
Therefore, span(T1) = span(T2).
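As a quick check of this step (a minimal Python/sympy sketch; the rows of M below are the coordinate vectors of u + 2v, v + 2w and u + w relative to {u, v, w}):

from sympy import Matrix

# coefficients of u+2v, v+2w, u+w with respect to (u, v, w)
M = Matrix([[1, 2, 0],
            [0, 1, 2],
            [1, 0, 1]])

print(M.det())   # 5, nonzero, so M is invertible
print(M.inv())   # its rows express u, v, w back in terms of the vectors of T2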
(c) Suppose a1u1 + a2u2 + ... + akuk + bv = 0.
If b ≠ 0, then v = −(1/b)(a1u1 + ... + akuk) is a linear combination of u1, ..., uk, which contradicts the fact that v ∉ span(X).
Hence b = 0, so a1u1 + a2u2 + ... + akuk = 0, and since X is linearly independent, a1 = ... = ak = 0.
Therefore, the set X ∪ {v} is linearly independent.
Question 2
(a)


    B = [ -3  1   3   4 ]
        [  1  2  -1  -2 ]
        [ -3  8   3   2 ]

reduces to the matrix

    C = [ 1  2  -1  -2 ]
        [ 0  7   0  -2 ]
        [ 0  0   0   0 ]
(i). A basis for the row space of B is {(1, 2, −1, −2), (0, 7, 0, −2)}.
(ii). A basis for the column space of B is {(−3, 1, −3)^t, (1, 2, 8)^t}.
(iii). A basis for the null space of B is {(1, 0, 1, 0)^t, (10/7, 2/7, 0, 1)^t}.
(iv). The left null space of B consists of all x with xB = 0; transposing gives B^t x^t = 0, so we need the null space of B^t. Solving B^t y = 0 gives y = t(−2, −3, 1)^t, so a basis for the null space of B^t is {(−2, −3, 1)^t}.
Hence, a basis for the left null space of B is {(−2, −3, 1)}.
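These bases can be double-checked with sympy (a minimal sketch; sympy may list equivalent but differently scaled basis vectors):

from sympy import Matrix

B = Matrix([[-3, 1,  3,  4],
            [ 1, 2, -1, -2],
            [-3, 8,  3,  2]])

print(B.rref())         # reduced row-echelon form of B and its pivot columns (0 and 1)
print(B.columnspace())  # the first two columns of B, as in (ii)
print(B.nullspace())    # two vectors spanning the null space, as in (iii)
print(B.T.nullspace())  # [(-2, -3, 1)^t]: the left null space of B, as in (iv)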
(b) If the left null space of A is trivial, then the system A^t x = 0 has only the trivial solution, so rank(A^t) equals the number of columns of A^t, i.e. the number of rows of A. Since rank(A) = rank(A^t), the rank of A equals the number of rows of A, so the column space of A is the whole space R^m, where m is the number of rows of A. Hence the linear system Ax = b is consistent for every vector b.
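To illustrate the argument (a Python/sympy sketch with a hypothetical full-row-rank matrix, not one taken from the paper):

from sympy import Matrix

A = Matrix([[1, 0, 2],
            [0, 1, 3]])          # hypothetical example with full row rank

print(A.T.nullspace())           # []: the left null space of A is trivial
print(A.rank())                  # 2, the number of rows of A
print(len(A.columnspace()))      # 2: the columns of A span R^2, so Ax = b is always consistent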
Question 3
(a)
(i).
    F2 = [ a+1   1  ]
         [  1   a+1 ]
det(F2) = a^2 + 2a;
    F3 = [ a+1   1    1  ]
         [  1   a+1   1  ]
         [  1    1   a+1 ]
det(F3) = a^3 + 3a^2.
(ii). det(Fn) = a^n + n·a^(n-1).
We prove this by induction on n.
When n = 2, det(F2) = a^2 + 2a, so the formula holds.
Suppose det(Fk) = a^k + k·a^(k-1) for some integer k ≥ 2. Expanding det(Fk+1) along its last column (the diagonal entry a + 1 contributes (a + 1)·det(Fk), and each of the remaining k signed cofactors equals −a^(k-1)), we get
det(Fk+1) = (a + 1)·det(Fk) − k·a^(k-1) = (a + 1)(a^k + k·a^(k-1)) − k·a^(k-1) = a^(k+1) + (k + 1)·a^k,
so the formula holds for k + 1.
Hence, det(Fn) = a^n + n·a^(n-1) for every integer n ≥ 2.
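The formula can be verified symbolically for small n (a Python/sympy sketch; Fn is built as a·I + J, where J is the all-ones matrix, so its diagonal entries are a + 1 and all other entries are 1):

from sympy import symbols, eye, ones, factor

a = symbols('a')
for n in range(2, 7):
    Fn = a*eye(n) + ones(n, n)      # a+1 on the diagonal, 1 elsewhere
    print(n, factor(Fn.det()))      # a**(n-1)*(a + n), i.e. a**n + n*a**(n-1)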
(b) The statement is false: consider the matrix

    A = [ 0  -1 ]
        [ 1   0 ]

which satisfies the conditions given but is invertible, since det(A) = 1.
Question 4
(a) T((1, 0, 0)^t) = (−2, −3/2, −3/2)^t.
(i). The standard matrix for T is

    A = [ -2    1  3 ]
        [ -3/2  1  2 ]
        [ -3/2  1  2 ]
(ii). A basis for the null space of A is {(2, 1, 1)^t}; hence the kernel of T is span{(2, 1, 1)^t}.
(iii). T(T(x)) = 0 if and only if T(x) ∈ Ker(T) = span{(2, 1, 1)^t}. Solving Ax = (2, 1, 1)^t gives the particular solution x = (0, −1, 1)^t, so Ker(T ∘ T) contains (0, −1, 1)^t as well as all of Ker(T). Since A² has rank 1, its null space is 2-dimensional, and hence the kernel of T ∘ T is span{(2, 1, 1)^t, (0, −1, 1)^t}.
(iv). Ker(T) is the null space of A and R(T) is the column space of A. Since (2, 1, 1)^t = −2(−2, −3/2, −3/2)^t − 2(1, 1, 1)^t, the vector (2, 1, 1)^t lies in the column space of A, so Ker(T) ⊆ R(T). Hence Ker(T) ∩ R(T) = Ker(T) = span{(2, 1, 1)^t}, and a basis for it is {(2, 1, 1)^t}.
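These computations can be confirmed in sympy (a minimal sketch using the standard matrix A from (i)):

from sympy import Matrix, Rational

A = Matrix([[-2,              1, 3],
            [Rational(-3, 2), 1, 2],
            [Rational(-3, 2), 1, 2]])

print(A.nullspace())        # [(2, 1, 1)^t]: Ker(T)
print((A*A).nullspace())    # two vectors spanning the same plane as {(2,1,1)^t, (0,-1,1)^t}
print(A.columnspace())      # R(T), spanned by the first two columns of A

v = Matrix([2, 1, 1])
print(A.row_join(v).rank() == A.rank())   # True: (2,1,1)^t lies in R(T), so Ker(T) is contained in R(T)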
(b) For any vector x in R^3,
S(x) = 2(u · x)u − x.
Since u is a unit vector, u · u = 1, and hence S(u) = 2(u · u)u − u = u. Then
S(S(x)) = S(2(u · x)u − x)
= 2(u · x)S(u) − S(x)          (S is linear)
= 2(u · x)u − (2(u · x)u − x)
= x.
Therefore, S ∘ S is the identity transformation.
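The identity S ∘ S = id can also be checked numerically for a concrete unit vector (a sympy sketch; u = (1/3, 2/3, 2/3)^t below is an arbitrary choice, any unit vector works):

from sympy import Matrix, Rational, eye

u = Matrix([Rational(1, 3), Rational(2, 3), Rational(2, 3)])
S = 2*u*u.T - eye(3)        # standard matrix of S(x) = 2(u . x)u - x

print(u.T*u)                # [[1]]: u is a unit vector
print(S*S - eye(3))         # the zero matrix, so S composed with S is the identity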
Question 5
(i). If w ∈ V, then let

    v1 = (1, 0, 0, 0, 1)^t,  v2 = (0, 0, 1, 0, 0)^t,  v3 = (1, 1, 1, 1, 1)^t.
Let a·v1 + b·v2 + c·v3 = w. Comparing entries gives a + c = 2, c = 1 and b + c = 2, so a = b = c = 1; but the last entry then requires a + c = 1, a contradiction.
Hence, w ∉ V.
(ii). Apply the Gram-Schmidt process: let u1 = v1; since v2 · u1 = 0, u2 = v2 − 0 = v2; and u3 = v3 − ((v3 · u1)/(u1 · u1))u1 − ((v3 · u2)/(u2 · u2))u2 = v3 − u1 − u2.
Therefore, an orthogonal basis T for V is

    u1 = (1, 0, 0, 0, 1)^t,  u2 = (0, 0, 1, 0, 0)^t,  u3 = (0, 1, 0, 1, 0)^t.
(iii). Since v1 = u1, v2 = u2 and v3 = u1 + u2 + u3, the transition matrix from S to T is

    P = [ 1  0  1 ]
        [ 0  1  1 ]
        [ 0  0  1 ]
(iv). The transition matrix from T to S is

    P^(-1) = [ 1  0  -1 ]
             [ 0  1  -1 ]
             [ 0  0   1 ]
(v). p = (3/2)u1 + 2u2 + u3 = (3/2, 1, 2, 1, 3/2)^t, so
[p]_T = (3/2, 2, 1)^t.
(vi). [p]_S = P^(-1)[p]_T = (1/2, 1, 1)^t.
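Parts (ii)-(vi) can be reproduced with sympy (a minimal sketch; the convention used is [w]_T = P[w]_S for the transition matrix P from S to T):

from sympy import Matrix, GramSchmidt, Rational

v1 = Matrix([1, 0, 0, 0, 1])
v2 = Matrix([0, 0, 1, 0, 0])
v3 = Matrix([1, 1, 1, 1, 1])

u1, u2, u3 = GramSchmidt([v1, v2, v3])      # (ii): the orthogonal basis T
print(u1.T, u2.T, u3.T)                     # (1,0,0,0,1), (0,0,1,0,0), (0,1,0,1,0)

P = Matrix([[1, 0, 1],
            [0, 1, 1],
            [0, 0, 1]])                     # (iii): transition matrix from S to T
print(P.inv())                              # (iv): transition matrix from T to S

p_T = Matrix([Rational(3, 2), 2, 1])        # [p]_T from (v)
print(u1.row_join(u2).row_join(u3) * p_T)   # p = (3/2, 1, 2, 1, 3/2)^t
print(P.inv() * p_T)                        # (vi): [p]_S = (1/2, 1, 1)^t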
Question 6
(a) Solving det(λI − C) = 0 gives (λ − 1)(λ − 2)(λ + 2) = 0. For λ = 1, E1 = span{(3, 1, 2)^t}; similarly, E2 = span{(0, 3, 1)^t} and E−2 = span{(0, 1, −1)^t}. Hence, taking

    P = [ 3  0   0 ]
        [ 1  3   1 ]
        [ 2  1  -1 ]

and

    D = [ 1  0   0 ]
        [ 0  2   0 ]
        [ 0  0  -2 ]

we have P^(-1)CP = D.
(b)
(i). Let u1, u2, u3 be eigenvectors corresponding to the eigenvalues λ1, λ2, λ3. Since A is 3 × 3 and has three distinct eigenvalues, each eigenspace is one-dimensional, so Eλ1 = span{u1} and Eλ2 = span{u2}. Eigenvectors corresponding to distinct eigenvalues are linearly independent, so u1 and u2 are linearly independent, and hence Eλ1 ∩ Eλ2 = span{u1} ∩ span{u2} = {0}.
(ii). Yes. Since u1, u2, u3 are linearly independent, they form a basis of R^3, so
Eλ1 + Eλ2 + Eλ3 = span{u1, u2, u3} = R^3.
(c) (⇒) Suppose M is symmetric and positive definite. Since every symmetric matrix is orthogonally diagonalizable, there is an orthogonal matrix P (so P^T = P^(-1)) with P^(-1)MP = D diagonal, i.e. M = PDP^T. For any nonzero vector x, x^T M x > 0, hence x^T PDP^T x > 0, i.e. (P^T x)^T D (P^T x) > 0. As x runs over all nonzero vectors, y = P^T x also runs over all nonzero vectors, since P^T is invertible. Hence y^T D y > 0 for every nonzero y, i.e. D is positive definite. The diagonal entries of D are the eigenvalues λ1, ..., λn of M; taking y = e_k, the vector whose kth entry is 1 and whose other entries are 0, gives λk = e_k^T D e_k > 0.
(⇐) Suppose all the eigenvalues of M are strictly positive. Then D is positive definite, so y^T D y > 0 for every nonzero y. For any nonzero x, the vector y = P^T x is nonzero, and x^T M x = x^T PDP^T x = y^T D y > 0. Hence M is positive definite.
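A small numerical illustration of this equivalence (a sympy sketch with an arbitrary symmetric matrix, not one taken from the paper):

from sympy import Matrix

M = Matrix([[2, 1, 0],
            [1, 2, 1],
            [0, 1, 2]])          # an arbitrary symmetric matrix

print(M.eigenvals())             # {2: 1, 2 - sqrt(2): 1, 2 + sqrt(2): 1}: all eigenvalues positive
print(M.is_positive_definite)    # True, consistent with (c)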
(d) Proof:
(⇒) Suppose Q is invertible. Then Qv ≠ 0 for every nonzero vector v, so v^T Q^T Q v = (Qv)^T (Qv) = ||Qv||^2 > 0. Hence Q^T Q is positive definite.
(⇐) Suppose Q^T Q is positive definite but Q is not invertible. Then there is a nonzero vector x such that Qx = 0, and then x^T Q^T Q x = (Qx)^T (Qx) = 0, which contradicts the positive definiteness of Q^T Q. Hence Q is invertible.
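Likewise for (d), a sketch with hypothetical matrices (one invertible, one singular):

from sympy import Matrix

Q = Matrix([[1, 2],
            [0, 3]])                    # invertible: det(Q) = 3
print((Q.T*Q).is_positive_definite)     # True

Qs = Matrix([[1, 2],
             [2, 4]])                   # singular: Qs*(2, -1)^t = 0
print((Qs.T*Qs).is_positive_definite)   # False, consistent with (d)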