
Fall 2014: MATH-GA: 2111.001
Linear Algebra (one term)
Assignment 3 (due Oct. 23, 2014)
1. [3pt] Let
\[
B = \begin{pmatrix} \alpha & 2 \\ 2 & 3+\alpha \end{pmatrix}.
\]
For which values of α ∈ R is there a symmetric matrix A such that
\[
AB + BA = \begin{pmatrix} 4 & 7 \\ 7 & 12 \end{pmatrix}?
\]
2. [3pt] Which of the following statements are true for any matrices A, B, C ∈ R^{n×n}
(n ≥ 1)? Either prove the statement or provide a counterexample.
(a) tr(AB^2 A) = tr(BA^2 B).
(b) tr(ABC) = tr(CBA).
(c) tr(A^2) = tr(A)^2 − n det(A). Hint: Try diagonal matrices first.
3. [3pt] Let A, B ∈ R^{n×n}, and denote by B^T the transpose of B. What can you say about A
under each of the following conditions? Prove your answers. Hint: Choose simple matrices
for B that satisfy the stated conditions.
(a) tr(AB) = 0 for all B.
(b) tr(AB) = 0 for all B such that B^T = B.
(c) tr(AB) = 0 for all B such that B^T = −B.
4. [1+2pt] A matrix A ∈ R^{N×N} is called monotone if, for every u = (u_1, . . . , u_N)^T, Au ≥ 0
(i.e., all components of Au are non-negative) implies that u ≥ 0 (i.e., all components of
u are non-negative).
(a) Show that any monotone matrix A ∈ R^{N×N} is invertible.
(b) Show that a matrix A is monotone if and only if the entries of A^{−1} are all non-negative.
5. [Extra credit, 3pt] Let P ∈ R^{n×n} be a matrix describing a projection, i.e., P^2 = P, and
let the dimension of the image under P (i.e., the rank of P) be k < n.
(a) Let A = I − 2P. Calculate AP and A^{17}.
(b) Express the determinant of A in terms of n and k.
6. [2+2+2pt] We have seen that the second derivative (and its negative) T = −d^2/dx^2
is a linear operator as a mapping between spaces of (sufficiently smooth) functions, say
T : H → L, where
H, L ⊂ {u : u is a function from [0, 1] to R}.
Then, for given f ∈ L, we attempt to solve the linear problem
\[
T(u) = -u'' = f \ \text{in } (0, 1), \qquad u(0) = 0, \; u(1) = 0, \tag{1}
\]
for a function u. In one space dimension[1], this so-called boundary value problem can
be solved analytically by integrating f twice. Unfortunately, in higher dimensions, the
analogous problem often cannot be solved analytically and one must rely on numerical
approximations for u. These approximations replace the infinite-dimensional[2] problem (1)
by a finite-dimensional problem, which allows one to employ tools from (finite-dimensional)
linear algebra.
(a) Let us first restrict ourselves to the finite-dimensional polynomial spaces (n ≥ 1)
\[
\hat{H} := \{p \in P_{n+2} : p(0) = p(1) = 0\} \subset H, \qquad \hat{L} := P_n \subset L.
\]
Show that with this choice of finite-dimensional spaces, T : Ĥ → L̂ is an isomorphism,
and argue why this implies that for every f ∈ L̂, (1) has a unique solution û ∈ Ĥ.
Hint: One possibility to show that T : Ĥ → L̂ is an isomorphism is to study the
null space N_T, use the dimension formula for linear mappings, and problem 6d from
homework assignment #1.
(b) The accuracy of polynomial approximations mainly depends on how well f in (1) can
be approximated with a polynomial. The smoother the function f, the better it can be
approximated with polynomials. To illustrate this, let us consider the (not so smooth)
function
\[
f(x) =
\begin{cases}
0 & \text{for } x \in [0, 0.5],\\
1 & \text{for } x \in (0.5, 1].
\end{cases} \tag{2}
\]
We want to find the polynomial p ∈ P_n that interpolates f at uniformly spaced points
α_1, . . . , α_n ∈ (0, 1), i.e., p(α_i) = f(α_i) for i = 1, . . . , n. Give expressions for p
using the Lagrange basis for the points (α_1, . . . , α_n) and in the monomial basis. Hint:
Use the Vandermonde matrix for the change into the monomial basis (the standard
definitions of the Lagrange basis polynomials and of the Vandermonde matrix are
recalled after part (d) below). Just give the expressions, no need to compute anything here.
(c) Plot the polynomial approximation of f in MATLAB for different n; a minimal MATLAB
sketch is given after part (d) below. Discuss your observations.
(d) [Extra credit, 2pt] Use MATLAB to compute the polynomial u ∈ Ĥ that solves
T(u) = p, and visualize your result.
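For reference, recall the standard definitions relevant to part (b), under the usual convention that the interpolant through n distinct nodes has degree at most n − 1: the Lagrange basis polynomials for the nodes α_1, . . . , α_n and the corresponding Vandermonde matrix are
\[
\ell_i(x) = \prod_{\substack{j=1\\ j\neq i}}^{n} \frac{x - \alpha_j}{\alpha_i - \alpha_j}, \quad i = 1, \dots, n,
\qquad
V = \begin{pmatrix}
1 & \alpha_1 & \alpha_1^2 & \cdots & \alpha_1^{\,n-1}\\
\vdots & \vdots & \vdots & & \vdots\\
1 & \alpha_n & \alpha_n^2 & \cdots & \alpha_n^{\,n-1}
\end{pmatrix}.
\]
The Lagrange polynomials satisfy ℓ_i(α_j) = δ_ij, and solving a linear system with V maps the interpolation values f(α_1), . . . , f(α_n) to coefficients in the monomial basis.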
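A minimal MATLAB sketch for parts (c) and (d). The concrete choices below are assumptions, not prescribed by the assignment: the nodes are taken as α_i = i/(n+1), the interpolant of degree n − 1 is computed with polyfit/polyval, and for the extra credit the polynomial û is obtained by integrating −p twice with polyint and adding a linear term so that û(0) = û(1) = 0.
\begin{verbatim}
% Sketch only; node choice and degree convention are assumptions (see above).
f  = @(x) double(x > 0.5);                % the function from (2)
xx = linspace(0, 1, 1000);                % fine grid for plotting
figure; plot(xx, f(xx), 'k'); hold on;
for n = [5 10 20]
    alpha = (1:n)'/(n + 1);               % uniformly spaced points in (0,1)
    c = polyfit(alpha, f(alpha), n - 1);  % monomial coefficients of the interpolant p
    plot(xx, polyval(c, xx));             % part (c); polyfit may warn about conditioning
end
legend('f', 'n = 5', 'n = 10', 'n = 20');

% Part (d), extra credit: a polynomial u with -u'' = p and u(0) = u(1) = 0,
% for the last interpolant p computed above.
cu = polyint(polyint(-c));                % integrate -p twice (integration constants = 0)
b  = -polyval(cu, 0);                     % constant term enforcing u(0) = 0 (here b = 0)
a  = -(polyval(cu, 1) + b);               % linear term enforcing u(1) = 0
cu(end)     = cu(end) + b;
cu(end - 1) = cu(end - 1) + a;
figure; plot(xx, polyval(cu, xx)); xlabel('x'); title('polynomial solution u');
\end{verbatim}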
[1] The generalization of (1) to two- and three-dimensional domains Ω instead of the one-dimensional interval Ω = [0, 1] is the Laplace equation,
\[
-\Delta u = f \ \text{on } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
\]
which is one of the most important partial differential equations in mathematical physics.
[2] Note that H and L are infinite-dimensional linear spaces.
7. [1+1+2+2+1+2+2pt] Continuation of the previous problem: As an alternative to a polynomial
approximation, (1) can be approximated using a finite number of grid points in [0, 1] and
finite-difference approximations for the second derivative: we choose the uniformly spaced
points {x_i = ih : i = 0, 1, . . . , N, N + 1} ⊂ [0, 1], with h = 1/(N + 1), and approximate
u(x_i) ≈ u_i and f(x_i) ≈ f_i, for i = 0, . . . , N + 1.
(a) Using Taylor expansions of u(x_i − h) and u(x_i + h) about x_i (assume that u is smooth
enough), show that
\[
-u''(x_i) = \frac{-u(x_i - h) + 2u(x_i) - u(x_i + h)}{h^2} + \text{h.o.t.}, \tag{3}
\]
where h.o.t. stands for a remainder term that is of higher order in h, i.e., becomes
small as h becomes small.
(b) Motivated by (3), we approximate the second derivative at the point x_i as follows[3]:
\[
-u''(x_i) \approx \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2}. \tag{4}
\]
Show that using (4) for each grid point x_i and using the boundary conditions u_0 =
u_{N+1} = 0, one finds the following finite-dimensional approximation of (1):
\[
\frac{1}{h^2}
\begin{pmatrix}
2 & -1 & 0 & \cdots & 0\\
-1 & 2 & -1 & & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & & -1 & 2 & -1\\
0 & \cdots & 0 & -1 & 2
\end{pmatrix}
\begin{pmatrix}
u_1\\ u_2\\ \vdots\\ u_{N-1}\\ u_N
\end{pmatrix}
=
\begin{pmatrix}
f_1\\ f_2\\ \vdots\\ f_{N-1}\\ f_N
\end{pmatrix}. \tag{5}
\]
Next, we study properties of the matrix in (5), which we denote by K ∈ R^{N×N}[4].
(c) Show that K is monotone (and thus invertible, as shown above).
(d) Find the eigenvectors of K and compute the corresponding eigenvalues. Hint: Try
vectors with components v_i = sin(2kπx_i), with appropriate k. Hint: You will need
a trigonometric identity for sums of the form sin(a + b) + sin(a − b). If you do not
know that identity by heart, remind yourself on Wikipedia[5], for instance.
(e) Using the eigenvalues, give an expression for the determinant of K and argue (once
again) that K is invertible. What happens to the determinant as N gets larger (i.e.,
as the finite-dimensional approximation of (1) becomes more accurate)?
(f) Compute, for an N ≥ 20 of your choice, the eigenvalues and eigenvectors of K
numerically[6]; a minimal MATLAB sketch is given after part (g) below. Plot some of
these numerical eigenvectors[7] and compare with the analytically computed eigenvectors.
[3] This is known as the finite-difference approximation of the (negative) second derivative.
[4] Ideally, the discrete finite-dimensional approximation should have as many properties as possible in common with the infinite-dimensional operator to be a good approximation. The properties we show for K below are, appropriately adjusted, also properties of the infinite-dimensional operator T.
[5] http://en.wikipedia.org/wiki/List_of_trigonometric_identities
[6] MATLAB provides the function eig, and a fast way to build the matrix K is to use the command spdiags. Use MATLAB's help command to learn more about these functions.
[7] Plot the eigenvectors as functions, i.e., plot for each grid point x_i the corresponding component of the eigenvector; this should look like Fourier modes.
(g) Finally, for different N, compute and visualize the solution vectors u of (5) for the
right-hand side vector given by (f(x_1), . . . , f(x_N)), where the function f(x) is defined
in (2); a minimal MATLAB sketch for this part follows below.
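A minimal MATLAB sketch for part (f), using the functions eig and spdiags mentioned in footnote [6]. The choice N = 32 and the selection of plotted modes are assumptions; any N ≥ 20 and any choice of modes work.
\begin{verbatim}
% Sketch only; N and the plotted modes are arbitrary choices.
N = 32;  h = 1/(N + 1);  x = (1:N)'*h;              % interior grid points x_i
e = ones(N, 1);
K = full(spdiags([-e 2*e -e], -1:1, N, N)) / h^2;   % the matrix from (5)
[V, D]        = eig(K);
[lambda, idx] = sort(diag(D));                      % eigenvalues in ascending order
V             = V(:, idx);                          % reorder eigenvectors accordingly
figure; plot(x, V(:, [1 2 5]), 'o-');               % a few eigenvectors, plotted as
xlabel('x_i');                                      % functions of x_i (footnote [7])
legend('mode 1', 'mode 2', 'mode 5');
\end{verbatim}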
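A minimal MATLAB sketch for part (g). The values of N below are arbitrary; the right-hand side is built from the function f in (2), and the plotted vectors include the boundary values u_0 = u_{N+1} = 0.
\begin{verbatim}
% Sketch only; the values of N are arbitrary choices.
f = @(x) double(x > 0.5);                           % the function from (2)
figure; hold on;
for N = [10 40 160]
    h = 1/(N + 1);  x = (1:N)'*h;                   % interior grid points
    e = ones(N, 1);
    K = spdiags([-e 2*e -e], -1:1, N, N) / h^2;     % sparse matrix from (5)
    u = K \ f(x);                                   % solve the linear system (5)
    plot([0; x; 1], [0; u; 0], '.-');               % append boundary values u_0 = u_{N+1} = 0
end
legend('N = 10', 'N = 40', 'N = 160'); xlabel('x');
\end{verbatim}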