Dimension and the Rank-Nullity Theorem
1. Let L be a line through the origin in R^2.
(a) What is the dimension of the image of the projection proj_L? What is the dimension of the kernel
of proj_L?
Solution. By definition, the rank of proj_L is dim(im proj_L), and the nullity of proj_L is dim(ker proj_L).
We have seen before that im proj_L is the line L and ker proj_L is the line through the origin perpendicular to L. Lines are 1-dimensional, so the rank and nullity of proj_L are both 1.
(b) What is the dimension of the image of the reflection ref_L? What is the dimension of the kernel
of ref_L?
Solution. The image of ref_L is R^2 (any vector in R^2 can be the “output” of reflecting over L),
which is 2-dimensional, so the rank of ref_L is 2. The kernel of ref_L is {~0} (the only vector in R^2
that gets reflected to ~0 is ~0), which is 0-dimensional, so the nullity of ref_L is 0.
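As an optional numerical check (not part of the original solution), we can build the projection and reflection matrices for a sample choice of L, say the line spanned by (1, 1), and compute their ranks and nullities with sympy:

```python
from sympy import Matrix, eye

# Sample choice: let L be the line y = x, spanned by u = (1, 1).
u = Matrix([1, 1])

# Projection onto L: P = u u^T / (u . u).  Reflection over L: R = 2P - I.
P = (u * u.T) / (u.T * u)[0]
R = 2 * P - eye(2)

print(P.rank(), len(P.nullspace()))  # rank 1, nullity 1 for proj_L
print(R.rank(), len(R.nullspace()))  # rank 2, nullity 0 for ref_L
```

Any other line through the origin gives the same ranks and nullities.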

2. Let

    A = [  1  1  1  0 -1  3 ]
        [  2  1  4 -1  2  7 ]
        [ -1  2 -7  0  4 -6 ]
        [  3  0  9 -1 -1 12 ]

and let ~v1, ..., ~v6 be the columns of A. You know how to find rref(A), so I'll just tell you what it is:

    rref(A) = [ 1  0  3  0 -2  4 ]
              [ 0  1 -2  0  1 -1 ]
              [ 0  0  0  1 -5  0 ]
              [ 0  0  0  0  0  0 ]

(a) Find a basis of ker A.
Solution. Remember that ker A is just the set of solutions of A~x = ~0. To solve this system, we
row-reduce the augmented matrix [ A | ~0 ]. No matter what row operations we do, the last column
will always consist entirely of 0s. So, when we row-reduce the augmented matrix, we will just end
up with [ rref(A) | ~0 ], or

    [ 1  0  3  0 -2  4 | 0 ]
    [ 0  1 -2  0  1 -1 | 0 ]
    [ 0  0  0  1 -5  0 | 0 ]
    [ 0  0  0  0  0  0 | 0 ]

If we call our variables x1 , . . . , x6 , then x1 , x2 , x4 are leading variables while x3 , x5 , x6 are free
variables. Letting x3 = s, x5 = t, and x6 = u, we get

    [x1]   [ -3s + 2t - 4u ]     [ -3 ]     [  2 ]     [ -4 ]
    [x2]   [  2s -  t +  u ]     [  2 ]     [ -1 ]     [  1 ]
    [x3] = [        s      ] = s [  1 ] + t [  0 ] + u [  0 ]
    [x4]   [       5t      ]     [  0 ]     [  5 ]     [  0 ]
    [x5]   [        t      ]     [  0 ]     [  1 ]     [  0 ]
    [x6]   [        u      ]     [  0 ]     [  0 ]     [  1 ]
Thus, the vectors

    ~w1 = [ -3 ]    ~w2 = [  2 ]    ~w3 = [ -4 ]
          [  2 ]          [ -1 ]          [  1 ]
          [  1 ]          [  0 ]          [  0 ]
          [  0 ]          [  5 ]          [  0 ]
          [  0 ]          [  1 ]          [  0 ]
          [  0 ]          [  0 ]          [  1 ]
span ker A. To see whether these vectors are linearly independent, let’s see whether there are any
nontrivial relations between them. Any linear relation s~w1 + t~w2 + u~w3 = ~0 can be rewritten as

    [ -3s + 2t - 4u ]   [ 0 ]
    [  2s -  t +  u ]   [ 0 ]
    [        s      ] = [ 0 ]
    [       5t      ]   [ 0 ]
    [        t      ]   [ 0 ]
    [        u      ]   [ 0 ]
From this equation, we see that s, t, and u must all be 0. Thus, the only linear relation among
~w1, ~w2, and ~w3 is the trivial linear relation, so ~w1, ~w2, and ~w3 are linearly independent. Since
~w1, ~w2, and ~w3 span ker A and are linearly independent, they form a basis of ker A. That is,

    ( [ -3 ]   [  2 ]   [ -4 ] )
    ( [  2 ]   [ -1 ]   [  1 ] )
    ( [  1 ] , [  0 ] , [  0 ] )  is a basis of ker A.
    ( [  0 ]   [  5 ]   [  0 ] )
    ( [  0 ]   [  1 ]   [  0 ] )
    ( [  0 ]   [  0 ]   [  1 ] )
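As an optional check of part (a) (not in the original solution), sympy's nullspace() computes a kernel basis with one vector per free variable, and we can confirm that each hand-computed vector really is sent to ~0 by A:

```python
from sympy import Matrix

# The matrix A from problem 2.
A = Matrix([
    [ 1, 1,  1,  0, -1,  3],
    [ 2, 1,  4, -1,  2,  7],
    [-1, 2, -7,  0,  4, -6],
    [ 3, 0,  9, -1, -1, 12],
])

# One basis vector of ker A per free variable (x3, x5, x6 here).
kernel_basis = A.nullspace()
print(len(kernel_basis))  # 3

# The three vectors found by hand.
w1 = Matrix([-3,  2, 1, 0, 0, 0])
w2 = Matrix([ 2, -1, 0, 5, 1, 0])
w3 = Matrix([-4,  1, 0, 0, 0, 1])
for w in (w1, w2, w3):
    print((A * w).T)  # each product is the zero vector
```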
(b) What is dim(ker A)? In general, how does the nullity of a matrix A relate to the number of free
and leading variables in the system A~x = ~0?
Solution. In (a), we found a basis of ker A consisting of 3 vectors, so dim(ker A) = 3. In
general, we get one basis vector of ker A for each free variable in the system A~x = ~0. So,
dim(ker A) is the number of free variables in the system A~x = ~0 .
 
(c) For each basis vector ~c = (c1, c2, c3, c4, c5, c6) that you found in part (a), find
c1~v1 + c2~v2 + c3~v3 + c4~v4 + c5~v5 + c6~v6. (Why does your answer make sense?)
Solution. Let's first look at ~c = ~w1, the first basis vector we found. Then, c1~v1 + c2~v2 + c3~v3 +
c4~v4 + c5~v5 + c6~v6 = -3~v1 + 2~v2 + ~v3, which turns out to just be ~0.
If we use ~c = ~w2, we find that 2~v1 - ~v2 + 5~v4 + ~v5 = ~0.
If we use ~c = ~w3, we find that -4~v1 + ~v2 + ~v6 = ~0.
It's not just a coincidence that these are all ~0! Remember that c1~v1 + c2~v2 + c3~v3 + c4~v4 + c5~v5 + c6~v6
can be written as the matrix-vector product

    [ ~v1 ~v2 ... ~v6 ] ~c,

which is just A~c. Since ~c ∈ ker A, A~c must be ~0.
(d) Find a basis of im A. How does dim(im A) relate to dim(ker A)?
Solution. The vectors ~v1 , . . . , ~v6 (the columns of A) span im A. However, we’ve already found
that ~v1 , . . . , ~v6 are linearly dependent, so we need to throw some of them out to get a basis of
im A.
In (c), we found three linear relations among ~v1 , . . . , ~v6 :
−3~v1 + 2~v2 + ~v3 = ~0
2~v1 − ~v2 + 5~v4 + ~v5 = ~0
−4~v1 + ~v2 + ~v6 = ~0
If we solve each of these for the “last” vector in the equation, we get
~v3 = 3~v1 − 2~v2
~v5 = −2~v1 + ~v2 − 5~v4
~v6 = 4~v1 − ~v2
From these equations, we see that ~v3 , ~v5 , and ~v6 are all in span(~v1 , ~v2 , ~v4 ). So, ~v1 , ~v2 , ~v4 span
im A.
Are ~v1, ~v2, ~v4 linearly independent? Well, if they were linearly dependent, we should have come
up with a nontrivial linear relation c1~v1 + c2~v2 + c4~v4 = ~0 in part (a), and that would mean that
(c1, c2, 0, c4, 0, 0) was in ker A. But we found ker A in (a), and we see that ker A contains no such
nonzero vector. So, ~v1, ~v2, ~v4 are linearly independent, and (~v1, ~v2, ~v4) is a basis of im A.
Notice that our basis of im A consists of the first, second, and fourth columns of A, and we ended
up with these columns because the first, second, and fourth columns of rref(A) were the columns
with leading 1s. In particular, dim(im A) is exactly the number of leading 1s in rref(A), or the number
of leading variables in the system A~x = ~0.
Since dim(ker A) is the number of free variables in the system A~x = ~0, we conclude that
dim(im A) + dim(ker A) = number of columns of A
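The same conclusions can be checked mechanically (an optional aside, not in the original solution): sympy's columnspace() returns exactly the pivot columns of A, and rank plus nullity adds up to the number of columns:

```python
from sympy import Matrix

# The matrix A from problem 2.
A = Matrix([
    [ 1, 1,  1,  0, -1,  3],
    [ 2, 1,  4, -1,  2,  7],
    [-1, 2, -7,  0,  4, -6],
    [ 3, 0,  9, -1, -1, 12],
])

# columnspace() returns the pivot columns of A: here v1, v2, v4.
image_basis = A.columnspace()
print(len(image_basis))  # dim(im A) = 3

# Rank-nullity: rank + nullity = number of columns.
print(A.rank() + len(A.nullspace()), A.cols)  # both are 6
```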



3. Let

    A = [ 2  4  1  1 ]
        [ 1  2 -1  5 ]
        [ 1  2  3 -7 ]

Given that

    rref(A) = [ 1  2  0  2 ]
              [ 0  0  1 -3 ]
              [ 0  0  0  0 ]

find a basis of im A. What is rank A? What is the nullity of A?
Solution. We'll use what we found in #2(d). The columns of rref(A) that have leading 1s are the
first and third, so the first and third columns of A form a basis of im A. That is,

    ( [ 2 ]   [  1 ] )
    ( [ 1 ] , [ -1 ] )  is a basis of im A.
    ( [ 1 ]   [  3 ] )

Since this basis has 2 vectors, rank A = dim(im A) = 2. By the rank-nullity theorem, the nullity of A
is the number of columns minus the rank: 4 - 2 = 2.
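A quick sympy check of this answer (not part of the original solution): rref() reports the pivot columns, which pick out the first and third columns of A:

```python
from sympy import Matrix

# The matrix A from problem 3.
A = Matrix([
    [2, 4,  1,  1],
    [1, 2, -1,  5],
    [1, 2,  3, -7],
])

R, pivots = A.rref()
print(pivots)             # (0, 2): the first and third columns are pivot columns
print(A.rank())           # 2
print(A.cols - A.rank())  # nullity: 4 - 2 = 2
```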
4. True or false.
(a) If A is a 4 × 3 matrix and the equation A~x = ~0 has no nonzero solutions, then rank A = 3.
Solution. The fact that A~x = ~0 has no nonzero solutions is exactly saying that ker A = {~0}, so
dim(ker A) = 0. By the rank-nullity theorem, dim(ker A) + rank A = 3, so rank A = 3. So, the
statement is true .
(b) There is a linear transformation T : R4 → R3 with kernel {~0}.
Solution. Intuitively, you should see that this is impossible: the domain of T (R4 ) is bigger than
the codomain (R3 ), so several vectors in the domain must get sent to the same output in the
codomain (which means that the kernel must have more than just ~0 in it).
We can justify this more formally using the rank-nullity theorem: since im T is a subspace of R3 ,
dim(im T ) ≤ 3. By the rank-nullity theorem, dim(ker T ) + dim(im T ) = 4, so dim(ker T ) ≥ 1, and
the statement is false .
(c) If A is a 4 × 3 matrix and the equation A~x = ~0 has no nonzero solutions, then A~x = ~e1 must be
consistent.
Solution. This question is asking us whether ~e1 must be in im A. We saw in (a) that such a
matrix A must have rank 3. This means that im A is a 3-dimensional subspace of R4 , which is
not enough information to tell us whether ~e1 is in im A. So, the statement is false .


A concrete counterexample is the matrix

    A = [ 0  0  0 ]
        [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  1 ]
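We can confirm that this counterexample behaves as claimed (an optional check, not in the original): its kernel is trivial, yet A~x = ~e1 is inconsistent:

```python
from sympy import Matrix, linsolve, symbols

# The counterexample from 4(c): trivial kernel, but e1 is not in the image.
A = Matrix([
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])
e1 = Matrix([1, 0, 0, 0])

print(len(A.nullspace()))  # 0: the only solution of A x = 0 is x = 0
x1, x2, x3 = symbols('x1 x2 x3')
solutions = linsolve((A, e1), x1, x2, x3)
print(solutions)  # the empty set: A x = e1 has no solution
```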
(d) There is a 3 × 6 matrix whose kernel is two-dimensional.
Solution. The image of a 3 × 6 matrix A is a subspace of R3 , so dim(im A) ≤ 3. By the
rank-nullity theorem, dim(im A) + dim(ker A) = 6. So, we can conclude that dim(ker A) ≥ 3.
Therefore, the statement is false .
(e) If A is a 3×2 matrix whose image is a line passing through the origin, then there exist two linearly
independent vectors in ker(A).
Solution. Since the image of A is a line, dim(im A) = 1. By the rank-nullity theorem,
dim(ker A) = 2 − dim(im A) = 1, so there cannot be two linearly independent vectors in ker A.
So, the statement is false .
 
(f) If A is a 3 × 5 matrix whose kernel is 2-dimensional, then the system A~x = (1, 3, 5) has infinitely
many solutions.
Solution. If A is a 3 × 5 matrix whose kernel is 2-dimensional, then dim(im A) = 3 by the
rank-nullity theorem. But since im A is a subspace of R^3 and the only 3-dimensional subspace of
R^3 is R^3 itself, this means that im A = R^3. So, (1, 3, 5) is in im A, and the system A~x = (1, 3, 5)
has at least one solution ~x1. Then, the solutions of A~x = (1, 3, 5) are ~x1 + ker A. In particular,
since ker A is 2-dimensional, it is infinite, so A~x = (1, 3, 5) must have infinitely many solutions.
Therefore, the statement is true.
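To make this concrete, here is a sample 3 × 5 matrix of my own choosing (not from the problem) whose kernel is 2-dimensional; solving A~x = (1, 3, 5) with sympy produces a 2-parameter family of solutions:

```python
from sympy import Matrix, linsolve, symbols

# Illustrative choice: the first three columns form the identity and the last
# two are zero, so the kernel is 2-dimensional (spanned by e4 and e5).
A = Matrix([
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
])
b = Matrix([1, 3, 5])

print(len(A.nullspace()))  # 2

xs = symbols('x1:6')  # x1, ..., x5
solutions = list(linsolve((A, b), *xs))
print(solutions)  # one solution tuple with two free parameters (x4 and x5)
```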
(g) There exists a 5 × 4 matrix whose image is R5 .
Solution. By the rank-nullity theorem, if A is a 5 × 4 matrix, then dim(ker A) + dim(im A) = 4.
In particular, dim(im A) ≤ 4, so im A cannot be R5 . Therefore, the statement is false .