
APPROXIMATING SOLUTIONS OF BOUNDARY VALUE PROBLEMS
by
Hamid Semiyari
A dissertation submitted to the faculty of
the University of North Carolina at Charlotte
in partial fulfillment of the requirements
for the degree of Doctor of Philosophy in
Applied Mathematics
Charlotte
2015
Approved by:
Dr. Douglas S. Shafer
Dr. Joel D. Avrin
Dr. James Sochacki
Dr. Vincent Ogunro
© 2015
Hamid Semiyari
ALL RIGHTS RESERVED
ABSTRACT
HAMID SEMIYARI. Approximating Solutions of Boundary Value Problems.
(Under the direction of DR. DOUGLAS S. SHAFER)
We present a new algorithm for approximating solutions of two-point boundary value problems and prove theorems that give conditions under which the solution must exist and the algorithm
generate approximations that converge to it. We show how to make the algorithm computationally efficient and demonstrate how the full method works both when guaranteed to do so and more
broadly. We demonstrate that the method compares well against other methods commonly used
in the literature. We also prove a theorem on existence of solutions of certain multi-dimensional
Volterra integral equations and use it to show that the Parker-Sochacki method of introducing auxiliary variables, used to make the new algorithm computationally efficient, can be effectively applied
to these Volterra integral equations in order to approximate their solutions by means of a Picard iteration scheme. Finally, we extend the existence theorem for solutions of two-point boundary value
problems and prove that the new algorithm can be modified to approximate solutions in this case.
ACKNOWLEDGMENTS
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1: BACKGROUND
1.1 Initial Value Problems
1.2 Boundary Value Problems
1.3 Equivalences, Norms, and Contraction Mappings
CHAPTER 2: APPROXIMATING SOLUTIONS OF BOUNDARY VALUE PROBLEMS
2.1 An Efficient Algorithm for Approximating Solutions of Two-Point Boundary Value Problems
2.2 Examples
2.3 Mixed Type Boundary Conditions
2.4 Mixed Type Boundary Conditions: A Second Approach
CHAPTER 3: COMPARISON OF METHODS
3.1 Linear Cases
3.1.1 Runge-Kutta with Shooting Method
3.1.2 Finite Difference Method
3.1.3 Adomian
3.1.4 Power Series Method
3.1.5 Our New Method
3.1.6 A Comparison of Methods
3.2 Non-Linear Case
3.2.1 Runge-Kutta with Shooting Method
3.2.2 Finite Difference Method
3.2.3 Adomian
3.2.4 Power Series Method
3.2.5 Our New Method
3.2.6 Methods Comparison
CHAPTER 4: VOLTERRA INTEGRAL EQUATION
4.1 Uniqueness Theorem
4.2 Examples
CHAPTER 5: A MORE GENERAL ALGORITHM FOR APPROXIMATING SOLUTIONS OF BOUNDARY VALUE PROBLEMS
5.1 The Simplest Case
5.2 The General Case
5.3 Examples
LIST OF FIGURES
FIGURE 2.1 Error plot |y_exact(t) − y[5](t)| for Example 2.2.1.
FIGURE 2.2 Error plot |y_exact(t) − y[5](t)| for Example 2.2.5.
FIGURE 2.3 Error plot |y_exact(t) − y[12](t)| for Example 2.2.7.
FIGURE 4.1 The error plot for the Volterra integral equation, |y(t) − y[8](t)| with n = 7 iterations, where y(t) is the exact solution and y[8](t) is the approximate solution.
FIGURE 4.2 Error plot for the nonlinear Volterra integral equation, |y(t) − y[8](t)| with n = 8 iterations, where y(t) is the exact solution and y[8](t) is the approximate solution.
FIGURE 4.3 The error plot for the linear Volterra integral equation, |y(t) − y[28](t)| with n = 27 iterations, where y(t) is the exact solution and y[28](t) is the approximate solution.
LIST OF TABLES
CHAPTER 1: BACKGROUND
1.1 Initial Value Problems
In the first part of this work we will be primarily interested in the solution of boundary value problems associated to second order differential equations in a single dependent variable that can be placed in the form y'' = f(t, y, y'). It is well known that an nth order ordinary differential equation in a single dependent variable that can be solved for the nth derivative of the dependent variable can be transformed into a system of n first order ordinary differential equations. For given
y^(n) = f(t, y, y', ..., y^(n−1)),     (1.1)
where f : Ω → R for an open set Ω ⊂ R × R^n, we introduce n − 1 additional dependent variables
y_1 = y',   y_2 = y_1',   ... ,   y_{n−1} = y_{n−2}'
so that (1.1) is equivalent to
y' = y_1
y_1' = y_2
⋮
y_{n−1}' = f(t, y, y_1, ..., y_{n−1})
which we write more succinctly in vector notation as
y' = f(t, y)     (1.2)
where
y = (y, y_1, ..., y_{n−2}, y_{n−1})ᵀ   and   f(t, y) = (y_1, y_2, ..., y_{n−1}, f(t, y, y_1, ..., y_{n−1}))ᵀ.
An initial value problem associated to (1.2) is the system (1.2) together with the additional
requirement that the solution of (1.2) satisfy the initial condition y(t0 ) = y0 for some pair (t0 , y0 ) ∈
Ω. A fundamental result concerning existence and uniqueness of solutions of initial value problems
is the Picard-Lindelöf Theorem. Its statement requires the following definition.
Definition 1.1.1. Let E be an open subset of R^m × R^n. A mapping f : E → R^p : (y, z) ↦ f(y, z) is (uniformly) Lipschitz on E with respect to y if there exists a constant L such that
|f(y, z) − f(y′, z)|₁ ≤ L |y − y′|₂   for all (y, z), (y′, z) ∈ E
with respect to some pre-assigned norms |·|₁ and |·|₂. The mapping f is locally Lipschitz on E with respect to y if for each (y₀, z₀) ∈ E there exists a neighborhood Ẽ(y₀, z₀) of (y₀, z₀) in E on which f is uniformly Lipschitz with respect to y.
Since all norms on finite dimensional vector spaces are equivalent the choice of norms in the
definition is unimportant, although the constant L will depend on the specific choice of norms.
Theorem 1.1.2 (Picard-Lindelöf Theorem). Let I × E be an open subset of R × R^n. If f : I × E → R^n is continuous and locally Lipschitz in y then for each pair (t₀, y₀) ∈ I × E there exists a solution to the initial value problem
y' = f(t, y),    y(t₀) = y₀     (1.3)
on an open interval J ⊂ I about t₀, and any two such solutions agree on their common domain about t₀.
There are two popular methods for proving the theorem, both based on the fact that the initial value problem (1.3) is equivalent to the integral equation
y(t) = y₀ + ∫_{t₀}^{t} f(s, y(s)) ds     (1.4)
(see Lemma 1.3.1 for the precise statement). In the Picard iteration approach to the proof of the theorem, a sequence of mappings is defined by y₀(t) ≡ y₀ and
y_k(t) = y₀ + ∫_{t₀}^{t} f(s, y_{k−1}(s)) ds   for k ≥ 1
and is shown to converge to a solution of (1.4). In the contraction mapping approach to the proof,
the right hand side of (1.4) is used to define a mapping on a suitable function space, the mapping
is shown to be a contraction, and thus the mapping has a unique fixed point, which is of course a
solution of (1.4). Both of these ideas will figure into the proof of the main theorem in Chapter 2.
The algorithm that will be developed in Chapter 2 for the solution of certain boundary value
problems is based on a kind of Picard iteration as just described. It is well known that in many
instances Picard iteration performs poorly in actual practice, even for relatively simple differential
equations. A great improvement can sometimes be had by exploiting ideas originated by G. Parker
and J. Sochacki [?], which we will now describe. We begin with the following example, which
illustrates the ideas involved.
Example 1.1.3. Consider the initial value problem
y' = sin y,    y(0) = π/2.     (1.5)
The function on the right hand side of this autonomous differential equation is globally Lipschitz on R with Lipschitz constant L = 1, hence by the Picard-Lindelöf Theorem there exists a unique solution to the initial value problem defined on some interval J about t = 0. To approximate it by means of Picard iteration, from the integral equation that is equivalent to (1.5), namely,
y(t) = π/2 + ∫_0^t sin y(s) ds
we define the Picard iterates
y₀(t) ≡ π/2,    y_{k+1}(t) = π/2 + ∫_0^t sin y_k(s) ds   for k ≥ 0.
The first few iterates are
y₀(t) = π/2
y₁(t) = π/2 + t
y₂(t) = π/2 − cos(π/2 + t)
y₃(t) = π/2 + ∫_0^t cos(sin s) ds,
the last of which cannot be expressed in closed form. However, we can avoid this impasse by introducing additional dependent variables chosen so as to eliminate the transcendental function of the dependent variable that appears on the right hand side of the differential equation in (1.5). Specifically, we define variables u and v by u = sin y and v = cos y so that y' = u, u' = (cos y)y' = uv, and v' = (−sin y)y' = −u², hence the original one-dimensional problem is embedded as the first component in the three-dimensional problem
y' = u        y(0) = π/2
u' = uv       u(0) = 1
v' = −u²      v(0) = 0.     (1.6)
Indeed, let the unique solution to (1.5) be y = σ(t) on some interval J about 0 and let the unique solution to (1.6) be (y, u, v) = (ρ(t), µ(t), ν(t)) on some interval K about 0. By construction (y, u, v) =
(σ(t), sin σ(t), cos σ(t)) solves (1.6) on J, hence we conclude that on J ∩ K the function y = σ(t),
which in the general case we cannot find explicitly, is equal to the function y = ρ(t), which we can
approximate on any finite interval about 0 to any required accuracy. For although the dimension
has increased, now the right hand sides of the differential equations are all polynomial functions so
quadratures can be done easily. In particular, the Picard iterates are now
y₀(t) = (π/2, 1, 0)ᵀ,
y_{k+1}(t) = y₀ + ∫_0^t f(y_k(s)) ds = (π/2, 1, 0)ᵀ + ∫_0^t (u_k(s), u_k(s)v_k(s), −u_k(s)²)ᵀ ds.
The first components of the first few iterates are
y₀(t) = π/2
y₁(t) = π/2 + t
y₂(t) = π/2 + t
y₃(t) = π/2 + t − (1/6)t³
y₄(t) = π/2 + t − (1/6)t³ + (1/24)t⁵
y₅(t) = π/2 + t − (1/6)t³ + (1/24)t⁵
y₆(t) = π/2 + t − (1/6)t³ + (1/24)t⁵ − (61/5040)t⁷
The exact solution to the problem (1.5) is y = σ(t) = 2 arctan(e^t), whose Maclaurin series is
σ(t) = π/2 + t − (1/6)t³ + (1/24)t⁵ − (61/5040)t⁷ + O(t⁹).
It is apparent that the iterative process is in fact generating the Maclaurin series of the first component of the solution of problem (1.6) (which is shown in [?] to hold in general; see Theorem 1.1.4 below).
In practice we are not always able to obtain a closed form expression for the limit of the iterates
yn (t), but must settle for an approximation yn of the actual solution y(t). However, obtaining an
approximate solution with a sufficiently large n is practical since the algorithm is computationally
efficient.
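The computation in Example 1.1.3 is easily mechanized once the right hand sides are polynomial. The following Maple lines are a minimal sketch of our own (they do not appear in the text; the iteration count n = 6 and the dummy integration variable s are our choices) that carries out Picard iteration for the polynomial system (1.6); the low order terms of the first component reproduce the Maclaurin polynomials listed above.
restart;
n := 6:                                  # number of Picard iterations
y[0] := Pi/2:  u[0] := 1:  v[0] := 0:    # initial data of (1.6)
for k from 0 to n-1 do
  # integrate the polynomial right hand sides of (1.6) from 0 to t
  y[k+1] := Pi/2 + int(subs(t = s, u[k]), s = 0 .. t);
  u[k+1] := 1 + int(subs(t = s, u[k]*v[k]), s = 0 .. t);
  v[k+1] := 0 + int(subs(t = s, -u[k]^2), s = 0 .. t);
end do:
y[n];    # compare with the Maclaurin series of 2*arctan(exp(t))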
In general the Parker-Sochacki method is an approach to obtaining approximate solutions of
systems of ordinary differential equations. As the example illustrates, the idea is to introduce variables that are equal to various non-polynomial functions that appear in the system of differential
equations, as well as variables that are equal to their derivatives, in such a way that those functions
are expressible as solutions of a system of differential equations with polynomial right hand sides.
Moreover, the initial conditions in the original problem force the initial values of the new variables.
Thus we obtain a polynomial initial value problem and, by an argument based on uniqueness of
solutions of initial value problems just like that given in Example 1.1.3, the solution to the original initial value problem is one component. The process guarantees that the iterative integral in
Picard iteration applied to the new, larger system is easy to compute, since the integrand is always
a polynomial function. The method produces Maclaurin series solutions to systems of differential
equations, with the coefficients in either algebraic or numerical form. The following theorem is
stated and proved in [?].
Theorem 1.1.4. Let F = (f₁, ..., f_n) : R^n → R^n be a polynomial and y = (y₁, ..., y_n) : R → R^n. Consider the initial value problem
y_j' = f_j(y),    y_j(0) = α_j,    j = 1, ..., n
and the corresponding Picard iterates
P_{j,1}(t) = α_j,    j = 1, ..., n
P_{j,k+1}(t) = α_j + ∫_0^t f_j(P_k(s)) ds,    j = 1, ..., n,   k = 1, 2, ....
Then P_{j,k+1} is the kth Maclaurin polynomial for y_j plus a polynomial all of whose terms have degree greater than k. (Here P_k(s) = (P_{1,k}(s), ..., P_{n,k}(s)).)
In [?] the authors address the issue as to which systems of ordinary differential equations can
be handled by this method. The procedure for defining the new variables is neither algorithmic nor
unique. However, with sufficient ingenuity it has been successfully applied in every case for which
the original differential equation is analytic. The following example further illustrates the method,
but in a much more complicated situation.
Example 1.1.5. Consider the initial value problem
y'' = e^{cos y} (sin t) (√(y'))³,    y(0) = α,    y'(0) = γ > 0,     (1.7)
where the condition on γ insures that there will be a unique solution on an interval about t = 0. As usual we form the equivalent system of first order ordinary differential equations by introducing a new dependent variable x = y', obtaining
y' = x
x' = e^{cos y} (sin t) (√x)³.     (1.8)
For the term e^{cos y} in the x' equation, if we introduce u = e^{cos y} then u' = e^{cos y}(−sin y)y' and (sin y)' = (cos y)y', which prompts us to introduce v = sin y and w = cos y, so that system (1.8) is now
y' = x,    x' = u (sin t) x^{3/2},    u' = −uvx,    v' = wx,    w' = −vx.
If r = sin t, then r' = cos t, so we introduce s = cos t to obtain
y' = x,    x' = u r x^{3/2},    u' = −uvx,    v' = wx,    w' = −vx,    r' = s,    s' = −r.
Finally, to deal with the term (√x)³ we introduce µ = √x. Then µ' = (1/2) x^{−1/2} x', so we introduce ρ = 1/√x = µ^{−1}, for which ρ' = −µ^{−2} µ' = −(1/2) u r ρ³ µ³, and ultimately obtain the system
y' = x                y(0) = α
x' = u r µ³           x(0) = γ
u' = −uvx             u(0) = e^{cos α}
v' = wx               v(0) = sin α
w' = −vx              w(0) = cos α
r' = s                r(0) = 0
s' = −r               s(0) = 1
µ' = (1/2) ρ u r µ³   µ(0) = √γ
ρ' = −(1/2) u r ρ³ µ³   ρ(0) = 1/√γ     (1.9)
which is well defined since γ > 0. We may now apply Picard iteration to (1.9) without danger of
encountering impossible quadratures, obtaining a longer and longer initial segment of the Maclaurin
series of the first component y(t) of the solution, which is the solution of (1.7).
1.2 Boundary Value Problems
A two-point boundary value problem associated to (1.2) on Ω = I × E is the system (1.2) together with the additional requirement that the solution of (1.2) exist on an interval (a, b) ⊂ I and satisfy a boundary condition g(y(a), y(b)) = 0 for some mapping g from E × E into R^n. Our chief concern will be with problems of the form
y'' = f(t, y, y'),    y(a) = α,    y(b) = β.     (1.10)
In contrast with initial value problems of this form, regardless of the regularity of f the problem (1.10) can have no solution, infinitely many solutions, or a unique solution. Consider, for example, a boundary value problem of the form
y'' = ay' + by,    y(0) = α,    y(h) = β.     (1.11)
Since all solutions of the differential equation in (1.11) are known explicitly, it is not difficult to verify that this boundary value problem has a unique solution if and only if either (i) a² + 4b ≥ 0 or (ii) a² + 4b < 0 and h√(−a² − 4b) ≠ 2kπ for any k ∈ Z. In the remaining cases it has no solution if (among other possibilities) exactly one of α and β is zero and has infinitely many solutions if (among other possibilities) α = β = 0.
A relatively general theorem that guarantees that problem (1.10) will possess a unique solution is the following. It is a special case of Theorem 1.2.2 of [?], where a proof is given.
Theorem 1.2.1. Suppose the function f(t, u, v) = f(t, u) in problem (1.10) is defined on the set R = {(t, u) : a ≤ t ≤ b} ⊂ R × R² and satisfies
a. f is Lipschitz in u = (u, v),
b. f is continuously differentiable, and
c. throughout R the first partial derivatives of f with respect to u and v satisfy, for some constant M,
∂f/∂u > 0   and   |∂f/∂v| ≤ M.
Then the boundary value problem (1.10) has a unique solution.
When we specialize to linear second order two-point boundary value problems we have the following well-known result (see, for example, [?]).
Theorem 1.2.2. The second-order linear boundary value problem
y'' = p(t)y' + q(t)y + r(t),    y(a) = α,    y(b) = β     (1.12)
has a unique solution provided
a. p(t), q(t), and r(t) are continuous on [a, b] and
b. q(t) > 0 on [a, b].
The unique solution can be decomposed as
y(t) = y₁(t) + ((β − y₁(b)) / y₂(b)) y₂(t)     (1.13)
where y₁(t) is the unique solution of the initial value problem
y'' = p(t)y' + q(t)y + r(t),    y(a) = α,    y'(a) = 0     (1.14)
and y₂(t) is the unique solution of the initial value problem
y'' = p(t)y' + q(t)y,    y(a) = 0,    y'(a) = 1.     (1.15)
Proof. The existence and uniqueness of the solution of (1.12) is an immediate consequence of Theorem 1.2.1.
As to the decomposition, as a preliminary let y₀(t) denote the unique solution of (1.12) and, as indicated in the statement of the theorem, let y₁(t) and y₂(t) denote the unique solutions of the initial value problems (1.14) and (1.15), respectively, which exist by the Picard-Lindelöf Theorem 1.1.2. Let ŷ₂(t) denote any solution (including y₂(t)) of
y'' = p(t)y' + q(t)y,    y(a) = 0,
that is, of (1.15) but with the condition on the derivative at a omitted. Then obviously the function y₃(t) := y₀(t) + ŷ₂(t) solves the differential equation in (1.12) and satisfies y₃(a) = α + 0 = α. If additionally ŷ₂(b) = 0 then y₃(b) = β + 0 = β, so that y₃(t) is then a solution of (1.12) as well as y₀(t), hence by uniqueness of solutions ŷ₂(t) ≡ 0. Thus y₂(b) cannot be zero, since y₂(t) is certainly not identically zero, so the function given by (1.13) is well-defined. It obviously solves (1.12).
1.3 Equivalences, Norms, and Contraction Mappings
In this section we collect for reference some known results that will be needed later. We begin with two equivalences between initial value problems and integral equations.
Lemma 1.3.1. Suppose I is an open interval in R, t₀ ∈ I, E is an open set in R^n, and f : I × E → R^n is a continuous mapping.
a. If J is an open subset of I that contains t₀ and η : J → E is continuous and satisfies
η(t) = y₀ + ∫_{t₀}^{t} f(s, η(s)) ds     (1.16)
then η is differentiable and solves the initial value problem
y' = f(t, y),    y(t₀) = y₀.     (1.17)
b. Conversely, if J is as in part (a) and η : J → E is differentiable and solves (1.17) then the integral in (1.16) exists and η satisfies (1.16).
The second equivalence is also known, but is less familiar, so we will include a sketch of the proof.
Lemma 1.3.2. Suppose I is an open interval in R, t₀ ∈ I, E is an open set in R², and f : I × E → R is a continuous function.
a. If h is a positive real number and η is a twice continuously differentiable function on an open neighborhood of [t₀, t₀ + h] ⊂ I that solves the second order initial value problem
y'' = f(t, y, y'),    y(t₀) = α,    y'(t₀) = γ     (1.18)
then η solves the integral equation
y(t) = α + γ(t − t₀) + ∫_{t₀}^{t} (t − s) f(s, y(s), y'(s)) ds.     (1.19)
b. Conversely, if η is a continuously differentiable function on an open neighborhood of [t₀, t₀ + h] that solves (1.19) then η is twice continuously differentiable on an open neighborhood of [t₀, t₀ + h] and solves (1.18).
Proof. For part (a), suppose η is as hypothesized and s ∈ [t₀, t₀ + h]. Integrating (1.18) from t₀ to s yields
η'(s) − η'(t₀) = ∫_{t₀}^{s} f(u, η(u), η'(u)) du
which when integrated from t₀ to t yields
η(t) − η(t₀) − η'(t₀)(t − t₀) = ∫_{t₀}^{t} ∫_{t₀}^{s} f(u, η(u), η'(u)) du ds.
Reversing the order of integration of the iterated integrals on the right hand side gives
η(t) − η(t₀) − η'(t₀)(t − t₀) = ∫_{t₀}^{t} ∫_{u}^{t} f(u, η(u), η'(u)) ds du
from which we obtain
η(t) = η(t₀) + η'(t₀)(t − t₀) + ∫_{t₀}^{t} (t − u) f(u, η(u), η'(u)) du.
Conversely, if h > 0 and η is a continuously differentiable function on an open neighborhood of [t₀, t₀ + h] that solves (1.19) then differentiating (1.19) gives
η'(t) = γ + (t − t) f(t, η(t), η'(t)) · 1 + (t − t₀) f(t₀, η(t₀), η'(t₀)) · 0 + ∫_{t₀}^{t} ∂/∂t [(t − s) f(s, η(s), η'(s))] ds
      = γ + ∫_{t₀}^{t} f(s, η(s), η'(s)) ds.     (1.20)
Since f, η, and η' are continuous the right hand side is differentiable, so we may differentiate again to obtain η''(t) = f(t, η(t), η'(t)), and by (1.19) and (1.20) η also satisfies the initial conditions.
Two norms that we will use are the following.
Definition 1.3.3.
a. The sum norm on R² is the norm defined by
|(u, v)|_sum = |u| + |v|.
b. The supremum norm (or just sup norm) on the set C of bounded continuous functions from a specified subset A (usually an interval) in R into R², with respect to the sum norm on R², is the norm defined by
|η|_sup = sup{ |η(t)|_sum : t ∈ A }.
We will also need the concept of a contraction and the Contraction Mapping Theorem.
Definition 1.3.4. Let X be a vector space with norm ||·||. A contraction mapping on X is a mapping T : X → X for which there exists a constant c with 0 ≤ c < 1 such that for all x and y in X
||T(x) − T(y)|| ≤ c ||x − y||.     (1.21)
Theorem 1.3.5 (Contraction Mapping Theorem). If T is a contraction mapping on a complete normed vector space X then there exists a unique fixed point of T in X, that is, a unique element x₀ of X such that T(x₀) = x₀. Moreover for any x ∈ X the sequence T^n(x) = (T ∘ ··· ∘ T)(x) (n-fold composition) converges to x₀.
CHAPTER 2: APPROXIMATING SOLUTIONS OF BOUNDARY VALUE PROBLEMS
In this chapter we present a new algorithm for approximating solutions of two-point boundary
value problems and prove a theorem that gives conditions under which it is guaranteed to succeed.
We show how to make the algorithm computationally efficient and demonstrate how the full method
works both when guaranteed to do so and more broadly. In the first section the original idea and
its application are presented. It is illustrated with a number of examples in the second section. In
the third section we show how to modify the same basic idea and procedure in order to apply it
to problems with boundary conditions of mixed type. In the fourth section a different approach to
problems treated in the third section is given which can perform better in practice.
2.1 An Efficient Algorithm for Approximating Solutions of Two-Point Boundary Value Problems
Consider a two-point boundary value problem of the form
y'' = f(t, y, y'),    y(a) = α,    y(b) = β.     (2.1)
If f is locally Lipschitz in the last two variables then by the Picard-Lindelöf Theorem, Theorem 1.1.2 (applied to (2.4) below), for any γ ∈ R the initial value problem
y'' = f(t, y, y'),    y(a) = α,    y'(a) = γ     (2.2)
will have a unique solution on some interval about t = a. Introducing the variable u = y' we obtain the first order system that is equivalent to (2.2),
y' = u           y(a) = α
u' = f(t, y, u)  u(a) = γ     (2.3)
or more succinctly, writing y = (y, u) ∈ R², y₀ = (α, γ), and f : R × R² → R² : (t, y) ↦ (u, f(t, y)),
y' = f(t, y),    y(a) = y₀,     (2.4)
which by Lemma 1.3.1 is equivalent to
y(t) = y(a) + ∫_a^t f(s, y(s)) ds.     (2.5)
The Picard iterates based on (2.5) are known to converge to the unique solution of (2.2).
The boundary value problem (2.1) will have a solution if and only if there exists γ ∈ R such that (i) the maximal interval of existence of the unique solution of (2.2) contains the interval [a, b], and (ii) the unique solution y(t) of (2.2) satisfies y(b) = β. But we can identify such a γ by applying Lemma 1.3.2 to (2.2) to obtain the equivalent integral equation
y(t) = α + γ(t − a) + ∫_a^t (t − s) f(s, y(s), y'(s)) ds,     (2.6)
which we then solve, when evaluated at t = b, for γ:
γ = (1/(b − a)) [ β − α − ∫_a^b (b − s) f(s, y(s), y'(s)) ds ].     (2.7)
The key idea in the new method is that in the Picard iteration scheme based on (2.5) we use (2.7) to also iteratively obtain successive approximations to the value of γ in (2.2), if it exists, using some initial choice γ[0] of γ, say γ[0] = (β − α)/(b − a), the average slope of the solution to (2.1) on the interval [a, b]. Thus the iterates are
y[0](t) ≡ α,    u[0](t) ≡ (β − α)/(b − a),    γ[0] = (β − α)/(b − a)     (2.8a)
and
y[k+1](t) = α + ∫_a^t u[k](s) ds
u[k+1](t) = γ[k] + ∫_a^t f(s, y[k](s), u[k](s)) ds     (2.8b)
γ[k+1] = (1/(b − a)) [ β − α − ∫_a^b (b − s) f(s, y[k](s), u[k](s)) ds ].
This gives the following algorithm for approximating solutions of (2.1), in which we have changed the update of γ in (2.8) to incorporate it into the update of u. (See Section 2.3 for a similar application of these ideas to problems with boundary conditions of mixed type.)
Algorithm 2.1.1. To approximate the solution of the boundary value problem
y'' = f(t, y, y'),    y(a) = α,    y(b) = β     (2.9)
iteratively compute the sequence of functions on [a, b]
y[0](t) ≡ α,    u[0](t) ≡ (β − α)/(b − a)     (2.10a)
and
y[k+1](t) = α + ∫_a^t u[k](s) ds
u[k+1](t) = (1/(b − a)) [ β − α − ∫_a^b (b − s) f(s, y[k](s), u[k](s)) ds ] + ∫_a^t f(s, y[k](s), u[k](s)) ds.     (2.10b)
We will now state and prove a theorem that gives conditions guaranteeing that the problem
(2.9) has a unique solution, then prove that the iterates in Algorithm 2.1.1 converge to it. We will
need the following technical lemma.
Lemma 2.1.2. Let E ⊂ R × R² be open and let f : E → R : (t, y, u) ↦ f(t, y, u) be Lipschitz in y = (y, u) on E with Lipschitz constant L with respect to absolute value on R and the sum norm on R². Then
F : E → R² : (t, y, u) ↦ (u, f(t, y, u))
is Lipschitz in y with Lipschitz constant 1 + L with respect to the sum norm on R².
Proof. For (t, y₁) and (t, y₂) in E,
|F(t, y₁) − F(t, y₂)|_sum = |(u₁, f(t, y₁, u₁)) − (u₂, f(t, y₂, u₂))|_sum
  = |u₁ − u₂| + |f(t, y₁, u₁) − f(t, y₂, u₂)|
  ≤ |u₁ − u₂| + |y₁ − y₂| + L |y₁ − y₂|_sum
  = |y₁ − y₂|_sum + L |y₁ − y₂|_sum
  = (1 + L) |y₁ − y₂|_sum.
Theorem 2.1.3. Let f : [a, b] × R² → R : (t, y, u) ↦ f(t, y, u) be Lipschitz in y = (y, u) with Lipschitz constant L with respect to absolute value on R and the sum norm on R². If 0 < b − a < (1 + (3/2)L)^{−1} then for any α, β ∈ R the boundary value problem
y'' = f(t, y, y'),    y(a) = α,    y(b) = β     (2.11)
has a unique solution.
Proof. By Lemma 1.3.1 a twice continuously differentiable function η from a neighborhood of [a, b] into R solves the ordinary differential equation in (2.11) if and only if the mapping (y(t), u(t)) = (η(t), η'(t)) from that neighborhood into R² solves the integral equation (1.16),
(y(t), u(t)) = (y(a), u(a)) + ∫_a^t (u(s), f(s, y(s), u(s))) ds.
The discussion leading up to (2.7) shows that η meets the boundary conditions in (2.11) if and only if η(a) = α and η'(a) = γ where γ is given by (2.7), now with (y(t), u(t)) = (η(t), η'(t)). In short the boundary value problem (2.11) is equivalent to the integral equation
(y(t), u(t)) = ( α, (1/(b − a))[ β − α − ∫_a^b (b − s) f(s, y(s), u(s)) ds ] ) + ∫_a^t ( u(s), f(s, y(s), u(s)) ) ds.     (2.12)
If y(t) = (y(t), u(t)) is a bounded continuous mapping from a neighborhood U of [a, b] in R into R² then the right hand side of (2.12) is well defined and defines a bounded continuous mapping from U into R². Thus letting C denote the set of bounded continuous mappings from a fixed open neighborhood U of [a, b] into R², a twice continuously differentiable function η on U into R solves the boundary value problem (2.11) if and only if η := (η, η') is a fixed point of the operator T : C → C defined by
T(y, u)(t) = ( α, (1/(b − a))[ β − α − ∫_a^b (b − s) f(s, y(s), u(s)) ds ] ) + ∫_a^t ( u(s), f(s, y(s), u(s)) ) ds,
which we abbreviate to
T(y)(t) = ( α, (1/(b − a))[ β − α − ∫_a^b (b − s) f(s, y(s)) ds ] ) + ∫_a^t F(s, y(s)) ds     (2.13)
by defining F : [a, b] × R² → R² by F(t, y, u) = (u, f(t, y, u)).
The vector space C equipped with the supremum norm is well known to be complete. Thus by the Contraction Mapping Theorem, Theorem 1.3.5, the theorem will be proved if we can show that T is a contraction on C. To this end, let η and µ be elements of C. Let ε = max{t − b : t ∈ U}. Then for any t ∈ U
|(Tη)(t) − (Tµ)(t)|_sum
  ≤ (1/(b − a)) | ∫_a^b (b − s) [ f(s, η(s)) − f(s, µ(s)) ] ds | + | ∫_a^t [ F(s, η(s)) − F(s, µ(s)) ] ds |_sum
  ≤ (1/(b − a)) ∫_a^b (b − s) | f(s, η(s)) − f(s, µ(s)) | ds + ∫_a^t | F(s, η(s)) − F(s, µ(s)) |_sum ds
  ≤ (1/(b − a)) ∫_a^b (b − s) L |η(s) − µ(s)|_sum ds + ∫_a^t (1 + L) |η(s) − µ(s)|_sum ds     (1)
  ≤ (1/(b − a)) ∫_a^b (b − s) L ||η − µ||_sup ds + (1 + L)((b − a) + ε) ||η − µ||_sup
  = [ (1/2)(b − a)L + (1 + L)((b − a) + ε) ] ||η − µ||_sup,
where for inequality (1) Lemma 2.1.2 was applied in the second summand.
Thus ||Tη − Tµ||_sup ≤ (1 + (3/2)L)((b − a) + ε) ||η − µ||_sup and T is a contraction provided (1 + (3/2)L)((b − a) + ε) < 1, equivalently, provided (1 + (3/2)L)(b − a) < 1 − (1 + (3/2)L)ε. But U can be chosen arbitrarily, hence (1 + (3/2)L)ε can be made arbitrarily small, giving the sufficient condition of the theorem.
The second statement in the Contraction Mapping Theorem (Theorem 1.3.5) and the fact that repeated composition of the mapping T in the proof of the theorem generates Picard iterates guarantee that the iterates defined by (2.10) will converge to the unique solution of (2.11) that the theorem guarantees to exist. Thus we have the following result.
Theorem 2.1.4. Let f : [a, b] × R² → R : (t, y, u) ↦ f(t, y, u) be Lipschitz in y = (y, u) with Lipschitz constant L with respect to absolute value on R and the sum norm on R². If 0 < b − a < (1 + (3/2)L)^{−1} then for any α, β ∈ R the iterates generated by Algorithm 2.1.1 converge to the unique solution of the boundary value problem
y'' = f(t, y, y'),    y(a) = α,    y(b) = β     (2.14)
guaranteed by Theorem 2.1.3 to exist.
Remark 2.1.5. The examples in the following section will show that the use of Algorithm 2.1.1 is
by no means restricted to problems for which the hypotheses of Theorem 2.1.4 are satisfied. It will
in fact give satisfactory results for many problems that do not satisfy those hypotheses.
We note that because the convergence is in the supremum norm, at any step in the algorithm the approximation is a function that is uniformly close to the exact solution on the entire interval of interest. We also remark that when Algorithm 2.1.1 is applied to a problem whose solution is known, in order to study its performance, it can be convenient to keep track of the updated values of the approximations γ[k] of γ, which means using the iteration (2.8) in place of the iteration (2.10) stated in the algorithm.
When the right hand side of the equation (2.14) is a polynomial function then the integrations that are involved in implementing Algorithm 2.1.1 can always be done, and done efficiently. But when the right hand side of equation (2.14) is not a polynomial then in a manner analogous to what happened with equation (1.5) the Picard iterates can lead to impossible integrations. In this situation we use the auxiliary variable method of Parker and Sochacki in conjunction with Algorithm 2.1.1 to obtain a computationally efficient method for approximating the solution. Here are the details.
Supposing that the right hand side in (2.14) is not a polynomial in y and y', in the usual way introduce the new dependent variable u = y' to obtain the equivalent system
y' = u
u' = f(t, y, u).     (2.15)
Then find functions h₁(t, y, u), ..., h_r(t, y, u) and polynomials P₀, P₁, ..., P_r in r + 3 indeterminates such that
(i) f(t, y, u) = P₀(t, y, u, h₁(t, y, u), ..., h_r(t, y, u)) and
(ii) if (y(t), u(t)) solves (2.15) then v_j(t) = h_j(t, y(t), u(t)) solves v_j' = P_j(t, y, u, v₁, ..., v_r), v_j(t₀) = h_j(t₀, y₀, u₀), 1 ≤ j ≤ r.
We thereby obtain the polynomial system
y' = f(t, y),     (2.16)
where
y = (y, u, v₁, ..., v_r)ᵀ   and   f(t, y) = ( u, P₀(t, y, u, v₁, ..., v_r), P₁(t, y, u, v₁, ..., v_r), ..., P_r(t, y, u, v₁, ..., v_r) )ᵀ.
If γ is the value at t = a of the derivative of the solution of the boundary value problem (2.14) then we will still have expression (2.7) for γ but with f(t, y, y') replaced by P₀(t, y, u, v₁, ..., v_r). Assuming the value of (v_j)₀ := h_j(a, α, (β − α)/(b − a)) is forced for all j by the original initial conditions y(a) = α and y'(a) = γ, we have the usual initial value problem, the first component y(t) of whose solution solves the original boundary value problem (2.14).
We can proceed as before with Picard iterates for (2.16), updating the estimate for γ at each step. For although the dimension has increased, now the right hand sides of the differential equations are all polynomial functions so quadratures can be done easily. Specifically, the initialization is
y[0](t) ≡ α,    u[0](t) ≡ (β − α)/(b − a),    v_j[0](t) ≡ (v_j)₀ for 1 ≤ j ≤ r,    γ[0] = (β − α)/(b − a)
and the recursion is
y[k+1](t) = α + ∫_a^t u[k](s) ds
u[k+1](t) = γ[k] + ∫_a^t P₀(s, y[k](s), u[k](s), v₁[k](s), ..., v_r[k](s)) ds
v_j[k+1](t) = (v_j)₀ + ∫_a^t P_j(s, y[k](s), u[k](s), v₁[k](s), ..., v_r[k](s)) ds,    1 ≤ j ≤ r
γ[k+1] = (1/(b − a)) [ β − α − ∫_a^b (b − s) P₀(s, y[k](s), u[k](s), v₁[k](s), ..., v_r[k](s)) ds ].     (2.17)
As before we can change the update of the estimate of γ to incorporate it into the update of
u[k] (t).
2.2 Examples
In this section we implement the method to solve some linear and nonlinear problems and compare the results with the exact solutions. We also show how the method behaves for problems with no solutions or with infinitely many solutions.
Example 2.2.1. The second order linear two-point boundary value problem
y'' = −y,    y(0) = 1,    y(π/4) = √2
has the unique solution
y(t) = cos t + sin t = 1 + t − (1/2!)t² − (1/3!)t³ + (1/4!)t⁴ + (1/5!)t⁵ + O(t⁶).
Since f(t, y, y') = −y has Lipschitz constant 1, Theorem 2.1.4 guarantees existence of a unique solution on intervals of length up to 2/5, but we will see that the algorithm works well for the much longer interval in this example. Using the recursion formulas (2.8) in Algorithm 2.1.1 instead of (2.10) so that we have a record of the ending value of γ, the iterates are
y[k+1](t) = 1 + ∫_0^t u[k](s) ds
u[k+1](t) = γ[k] − ∫_0^t y[k](s) ds
γ[k+1] = (4/π) [ √2 − 1 + ∫_0^{π/4} (π/4 − s) y[k+1](s) ds ].
Maple code for n = 5 iterations is
restart;
n := 5;                        # number of iterations
a := 0;  b := Pi/4;            # endpoints of the interval
alpha := 1;  beta := sqrt(2);  # boundary values
h := b - a;
y[0] := alpha;
gamma[0] := (beta-alpha)/h;    # initial estimate of gamma (note: gamma is a protected
u[0] := gamma[0];              # name in recent Maple releases; rename it if necessary)
for k from 0 to n-1 do
  y[k+1] := alpha+int(u[k], t = a .. t);
  u[k+1] := gamma[k]-int(y[k], t = a .. t);
  gamma[k+1] := (beta-alpha+int((b-t)*y[k+1],t=a..b))/h;
end:
Exact := cos(t)+sin(t);
Error := abs(evalf(y[n]-Exact));
plot(Error, t = 0 .. h);
N := degree(y[n], t);          # degree of the final approximating polynomial
The exact solution y(t), the approximations y[k](t), and the approximations γ[k] to the exact value γ = 1, with coefficients rounded to five decimal places, are
γ[0] = 0.92009, γ[1] = 0.97431, γ[2] = 0.99450, γ[3] = 0.99840, γ[4] = 0.99965, γ[5] = 0.99990,
which are tending monotonically to the exact value γ = 1, and
y[0] = 1
y[1] = 1 + 0.52739t
y[2] = 1 + 0.92009t − 0.50000t²
y[3] = 1 + 0.97431t − 0.50000t² − 0.08790t³
y[4] = 1 + 0.99450t − 0.50000t² − 0.15335t³ + 0.04167t⁴
y[5] = 1 + 0.99840t − 0.50000t² − 0.16239t³ + 0.04167t⁴ + 0.00439t⁵
y = 1 + 1.00000t − 0.50000t² − 0.16667t³ + 0.04167t⁴ + 0.00833t⁵
A plot of the error in the final approximation y[5](t) is shown in Figure 2.1. The qualitative features of the plot are as expected: since we are forcing agreement of y[k](0) with α, we expect the Maclaurin polynomial agreement to fall off as t moves away from t = 0, but then the approximation improves approaching t = b because agreement of y[k](b) with β is forced to be good by the increasingly accurate approximation of γ by γ[k].
The maximum error of the fifth approximation is ||y − y[5] ||sup ≈ 0.00037. When the number of
iterations is increased from five to eight the error plot looks qualitatively the same but the maximum
error drops dramatically to about 5.6 × 10−5 .
Figure 2.1: Error plot |y_exact(t) − y[5](t)| for Example 2.2.1.
Example 2.2.2. This example illustrates the effect on the rate of convergence by examining a boundary value problem with the same differential equation and the same solution as in Example 2.2.1 but on an interval twice as long:
y'' = −y,    y(0) = 1,    y(π/2) = 1.
After five iterations we obtain, rounding coefficients to five decimal places, γ = 0.98674 and
y[5] = 1 + 0.94689t − 0.50000t² − 0.13090t³ + 0.04167t⁴ − 0.00000t⁵.
The error plot looks qualitatively the same as that in Figure 2.1 but the maximum error is now ||y − y[5]||_sup ≈ 0.023. When the number of iterations is increased from five to eight the maximum error drops to about 0.0023.
Example 2.2.3. In this example we keep the same ordinary differential equation but choose boundary conditions for which there is no solution:
y'' = −y,    y(0) = 1,    y(π) = 1.
As the number n of iterations is increased the last function computed, y[n] (t), tends to blow up away
from the endpoints a = 0 and b = π in the sense that ||y[n] ||sup becomes increasingly large. For
example, with n = 5, ||y[5] ||sup ≈ 7 and with n = 8, ||y[8] ||sup ≈ 13.
Example 2.2.4. In the last of this series of examples we keep the same ordinary differential equation but choose boundary conditions in such a way that there are infinitely many solutions:
y'' = −y,    y(0) = 1,    y(2π) = 1.
A one-parameter family of solutions is y_c(t) = cos t + c sin t. As in the previous example, as the number n of iterations is increased ||y[n]||_sup becomes increasingly large. For example, with n = 5, ||y[5]||_sup ≈ 230 and with n = 8, ||y[8]||_sup ≈ 235,000.
In the following examples the right hand side of the differential equation is non-polynomial, so we incorporate the Parker-Sochacki method of auxiliary variables in order to insure that Algorithm 2.1.1 is computationally feasible.
Example 2.2.5. Consider the boundary value problem
y'' = −sin y,    y(0) = 1,    y(1) = 0.     (2.18)
As with Examples 2.2.1 and 2.2.2, here the length of the interval [a, b] = [0, 1] is much greater than the length on which Theorems 2.1.3 and 2.1.4 guarantee existence of a unique solution to which iterates generated by Algorithm 2.1.1 must converge. The Lipschitz constant of f(t, y, y') = −sin y is L = 1 so the theorems guarantee a result only for intervals of length up to 2/5. In fact the solution is unique and is given by
y(t) = k arccos( 2e^{t−1} / (e^{2(t−1)} + 1) ),    k = ( arccos( 2e^{−1} / (e^{−2} + 1) ) )^{−1}.
In order to effectively apply Algorithm 2.1.1 to (2.18), after introducing the variable u = y' so as to obtain the first order system y' = u, u' = −sin y (the system (2.15) in the discussion at the end of the previous section), we must also follow the procedure detailed in the text following (2.15) and illustrated in the discussion surrounding Examples 1.1.3 and 1.1.5.
It is apparent that we must introduce a variable v = sin y, and since the derivative of sine is cosine, in addition a variable w = cos y. Thus in this instance (2.16) is
y' = u,    u' = −v,    v' = uw,    w' = −uv
with initial conditions
y(0) = 1,    u(0) = γ,    v(0) = sin 1,    w(0) = cos 1,
a system on R⁴ for which the y-component is the solution of the boundary value problem (2.18). Thus in this instance
a = 0,    b = 1,    α = 1,    β = 0,
P₀(y, u, v, w) = −v,    P₁(y, u, v, w) = uw,    P₂(y, u, v, w) = −uv
so that (??) is
y[0](t) ≡ 1,    u[0](t) ≡ −1,    v[0](t) ≡ sin 1,    w[0](t) ≡ cos 1,    γ[0] = −1     (2.19a)
and
y[k+1](t) = 1 + ∫_0^t u[k](s) ds
u[k+1](t) = −1 + ∫_0^1 (1 − s) v[k](s) ds − ∫_0^t v[k](s) ds
v[k+1](t) = sin 1 + ∫_0^t u[k](s) w[k](s) ds
w[k+1](t) = cos 1 − ∫_0^t u[k](s) v[k](s) ds,     (2.19b)
where, as indicated just after (??), we have shifted the update of γ and incorporated it into the update
of u[k] (t).
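The iteration (2.19) is easy to carry out symbolically. The following Maple lines are our own sketch, written in the style of the code in Example 2.2.1 (the iteration count n = 5 and the dummy variable s are our choices); the update of γ is folded into the update of u exactly as in (2.19b).
restart;
n := 5:  a := 0:  b := 1:  alpha := 1:  beta := 0:
y[0] := alpha:  u[0] := (beta - alpha)/(b - a):
v[0] := sin(1):  w[0] := cos(1):
for k from 0 to n-1 do
  y[k+1] := alpha + int(subs(t = s, u[k]), s = a .. t);
  u[k+1] := (beta - alpha + int((b - s)*subs(t = s, v[k]), s = a .. b))/(b - a)
            - int(subs(t = s, v[k]), s = a .. t);
  v[k+1] := sin(1) + int(subs(t = s, u[k]*w[k]), s = a .. t);
  w[k+1] := cos(1) - int(subs(t = s, u[k]*v[k]), s = a .. t);
end do:
evalf(y[n]);    # polynomial approximation to the solution of (2.18)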
The exact solution y(t), the approximations y[k](t), and the approximations γ[k] to the exact value of γ, with coefficients rounded to five decimal places, are
γ[0] = −1, γ[1] = −0.57926, γ[2] = −0.66931, γ[3] = −0.68248, γ[4] = −0.67738, γ[5] = −0.68131
and
y[1] = 1
y[2] = 1 − 1.0t
y[3] = 1 − 0.57926t − 0.42074t²
y[4] = 1 − 0.66931t − 0.42074t² + 0.09005t³
y[5] = 1 − 0.68248t − 0.42074t² + 0.05216t³ + 0.03925t⁴ + 0.01180t⁵
y = 1 − 0.74853t − 0.28504t² − 0.01997t³ + 0.03610t⁴ + 0.01975t⁵
A plot of the error in the final approximation y[5] (t) is shown in Figure 2.2.
Figure 2.2: Error plot |y_exact(t) − y[5](t)| for Example 2.2.5.
In the next example we treat a problem in which the function f on the right hand side fails to be Lipschitz in y yet Algorithm 2.1.1 nevertheless performs well.
Example 2.2.6. Consider the boundary value problem
y'' = −e^{−2y},    y(0) = 0,    y(1.2) = ln cos 1.2 ≈ −1.015123283,     (2.20)
for which the unique solution is y(t) = ln cos t, yielding γ = 0.
Introducing the dependent variable u = y' to obtain the equivalent first order system y' = u, u' = −e^{−2y}, and the variable v = e^{−2y} to replace the transcendental function with a polynomial, we obtain the expanded system
y' = u,    u' = −v,    v' = −2uv
with initial conditions
y(0) = 0,    u(0) = γ,    v(0) = 1,
a system on R³ for which the y-component is the solution of the boundary value problem (2.20). Thus in this instance
a = 0,    b = 1.2,    α = 0,    β = ln cos 1.2,
P₀(y, u, v) = −v,    P₁(y, u, v) = −2uv
so that (??) is
y[0](t) ≡ 0,    u[0](t) ≡ (ln cos 1.2)/1.2,    v[0](t) ≡ 1,    γ[0] = (ln cos 1.2)/1.2     (2.21a)
and
y[k+1](t) = ∫_0^t u[k](s) ds
u[k+1](t) = (1/1.2) ( β − α + ∫_0^{1.2} (1.2 − s) v[k](s) ds ) − ∫_0^t v[k](s) ds
v[k+1](t) = 1 − 2 ∫_0^t u[k](s) v[k](s) ds,     (2.21b)
where, as indicated just after (??), we have shifted the update of γ and incorporated it into the update
of u[k] (t). The first eight iterates of γ are:
γ[1] = −0.24594, γ[2] = 0.16011, γ[3] = 0.19297, γ[4] = 0.04165,
γ[5] = −0.04272, γ[6] = −0.04012, γ[7] = −0.00923, γ[8] = 0.01030,
The maximum error in the approximation is ||yexact − y[8] ||sup ≈ 0.0065.
Example 2.2.7. Consider the boundary value problem
y'' = (1/8)(32 + 2t³ − y y'),    y(1) = 17,    y(3) = 43/3     (2.22)
whose right hand side is not autonomous and does not satisfy a global Lipschitz condition. The unique solution is y(t) = t³ + 16/t. Using the iterates (2.8) in Algorithm 2.1.1 in order to keep track of the iterates γ[k] we have
y[0](t) ≡ 17,    u[0](t) ≡ −4/3,    γ[0] = −4/3     (2.23a)
and
y[k+1](t) = 17 + ∫_1^t u[k](s) ds
u[k+1](t) = γ[k] + ∫_1^t (1/8)(32 + 2s³ − y[k](s) u[k](s)) ds,     (2.23b)
γ[k+1] = −4/3 − (1/2) ∫_1^3 (3 − s) (1/8)(32 + 2s³ − y[k](s) u[k](s)) ds.
After n = 12 iterations, γ[12] = −8.443, whereas the exact value is γ = −76/9 ≈ −8.444. As illustrated by the error plot, Figure 2.3, the maximum error in the twelfth approximating function is ||y_exact − y[12]||_sup ≈ 0.0012.
Figure 2.3: Error plot |y_exact(t) − y[12](t)| for Example 2.2.7.
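Because the right hand side of (2.22) is already polynomial in t, y, and y', no auxiliary variables are needed and the iteration (2.23) can be coded directly. The following Maple lines are our own sketch in the style of the code in Example 2.2.1 (we write g for γ because gamma is a protected name in Maple, and we use a small iteration count; the dissertation reports n = 12, for which the computation simply takes longer).
restart;
n := 5:  a := 1:  b := 3:  alpha := 17:  beta := 43/3:
f := (s, Y, U) -> (32 + 2*s^3 - Y*U)/8:    # right hand side of (2.22)
y[0] := alpha:  u[0] := (beta - alpha)/(b - a):  g[0] := (beta - alpha)/(b - a):
for k from 0 to n-1 do
  y[k+1] := alpha + int(subs(t = s, u[k]), s = a .. t);
  u[k+1] := g[k] + int(f(s, subs(t = s, y[k]), subs(t = s, u[k])), s = a .. t);
  g[k+1] := (beta - alpha - int((b - s)*f(s, subs(t = s, y[k]), subs(t = s, u[k])), s = a .. b))/(b - a);
end do:
evalf(g[n]);    # successive estimates of gamma = y'(1)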
2.3 Mixed Type Boundary Conditions
The ideas developed at the beginning of Section 2.1 can also be applied to two-point boundary value problems of the form
y'' = f(t, y, y'),    y'(a) = γ,    y(b) = β,     (2.24)
so that in equation (2.2) the constant γ is now known and α = y(a) is unknown. Thus in (2.6) we evaluate at t = b but now solve for α instead of γ, obtaining in place of (2.7) the expression
α = β − γ(b − a) − ∫_a^b (b − s) f(s, y(s), y'(s)) ds.
In the Picard iteration scheme we now successively update an approximation of α starting with some initial value α₀. The iterates are
y[0](t) ≡ α₀,    u[0](t) ≡ γ,    α[0] = α₀     (2.25a)
and
y[k+1](t) = α[k] + ∫_a^t u[k](s) ds
u[k+1](t) = γ + ∫_a^t f(s, y[k](s), u[k](s)) ds     (2.25b)
α[k+1] = β − γ(b − a) − ∫_a^b (b − s) f(s, y[k](s), u[k](s)) ds.
We obtain in analogy with Algorithm 2.1.1 the following algorithm for approximating solutions of
(2.24), in which we have changed the update of α in (2.25) to incorporate it into the update of y.
Algorithm 2.3.1. To approximate the solution of the boundary value problem
y'' = f(t, y, y'),    y'(a) = γ,    y(b) = β     (2.26)
select an initial value α₀ and iteratively compute the sequence of functions on [a, b]
y[0](t) ≡ α₀,    u[0](t) ≡ γ     (2.27a)
and
y[k+1](t) = β − γ(b − a) − ∫_a^b (b − s) f(s, y[k](s), u[k](s)) ds + ∫_a^t u[k](s) ds
u[k+1](t) = γ + ∫_a^t f(s, y[k](s), u[k](s)) ds.     (2.27b)
An exact analogue of Theorem 2.1.3 but for the boundary conditions in (2.26) can be proved exactly as Theorem 2.1.3 was proved, by means of a contraction mapping argument. Thus exactly as before the second statement in the Contraction Mapping Theorem (Theorem 1.3.5) and the fact that repeated composition of the mapping T in the proof of these theorems generates Picard iterates guarantee that the iterates defined by (2.27) will converge to the unique solution of (2.26) that the analogue of Theorem 2.1.3 guarantees to exist. Thus we have the following result analogous to Theorem 2.1.4.
Theorem 2.3.2. Let f : [a, b] × R² → R : (t, y, u) ↦ f(t, y, u) be Lipschitz in y = (y, u) with Lipschitz constant L with respect to absolute value on R and the sum norm on R². If 0 < b − a < (1 + (3/2)L)^{−1} then for any β, γ ∈ R the iterates generated by Algorithm 2.3.1 converge to the unique solution of the boundary value problem
y'' = f(t, y, y'),    y'(a) = γ,    y(b) = β
guaranteed by the analogue of Theorem 2.1.3 to exist.
Example 2.3.3. Consider the boundary value problem
y'' = −y' e^{−y},    y'(0) = 1,    y(1) = ln 2.     (2.28)
The exact solution is y(t) = ln(1 + t), for which y(0) = ln 1 = 0. Introducing the dependent variable u = y' as always to obtain the equivalent first order system y' = u, u' = −u e^{−y}, and the variable v = e^{−y} to replace the transcendental function with a polynomial, we obtain the expanded system
y' = u,    u' = −uv,    v' = −uv
with initial conditions
y(0) = α,    u(0) = γ,    v(0) = e^{−α},
a system on R³ for which the y-component is the solution of the boundary value problem (2.28). Thus in this instance
a = 0,    b = 1,    β = ln 2,    γ = 1,
g₀(y, u, v) = −uv,    g₁(y, u, v) = −uv
so that, making the initial choice α₀ = 1, the analogue of (??) is
y[0](t) ≡ α₀,    u[0](t) ≡ 1,    v[0](t) ≡ e^{−α₀}     (2.29a)
and
y[k+1](t) = ln 2 − 1 + ∫_0^1 (1 − s) u[k](s) v[k](s) ds + ∫_0^t u[k](s) ds
u[k+1](t) = 1 − ∫_0^t u[k](s) v[k](s) ds
v[k+1](t) = v[0] − ∫_0^t u[k](s) v[k](s) ds,     (2.29b)
where we have shifted the update of α and incorporated it into the update of y[k](t). We list the first eight values of α to show the rate of convergence to the exact value 0.
α[1] = −0.00774,    α[2] = 0.11814,    α[3] = 0.07415,    α[4] = 0.08549,
α[5] = 0.08288,    α[6] = 0.08339,    α[7] = 0.08330,    α[8] = 0.08331
2.4 Mixed Type Boundary Conditions: A Second Approach
In actual applications Algorithm 2.3.1 converges relatively slowly because of the update at every step of the estimate of the initial value y(a) rather than of the initial slope y'(a). A different approach that works better in practice with a problem of the type
y'' = f(t, y, y'),    y'(a) = γ,    y(b) = β,     (2.30)
is to use the relation
y'(b) = y'(a) + ∫_a^b f(s, y(s), y'(s)) ds     (2.31)
between the derivatives at the endpoints to work from the right endpoint of the interval [a, b], at which the value of the solution y is known but the derivative y' unknown. That is, assuming that (2.30) has a unique solution and letting the value of its derivative at t = b be denoted δ, it is also the unique solution of the initial value problem
y'' = f(t, y, y'),    y(b) = β,    y'(b) = δ.     (2.32)
We introduce the new dependent variable u = y' to obtain the equivalent system
y' = u,    u' = f(t, y, u)
with initial conditions y(b) = β and y'(b) = δ and apply Picard iteration based at t = b, using (2.31) with y'(a) = γ to update the approximation of y'(b) at each step. Choosing a convenient initial estimate δ₀ of δ, the successive approximations of the solution of (2.32), hence of (2.30), are given by
y[0](t) ≡ β,    u[0](t) ≡ δ₀,    δ[0] = δ₀     (2.33a)
and
y[k+1](t) = β + ∫_b^t u[k](s) ds
u[k+1](t) = δ[k] + ∫_b^t f(s, y[k](s), u[k](s)) ds     (2.33b)
δ[k+1] = γ + ∫_a^b f(s, y[k](s), u[k](s)) ds.
This gives the following alternative algorithm for approximating solutions of (2.30).
Algorithm 2.4.1. To approximate the solution of the boundary value problem
y'' = f(t, y, y'),    y'(a) = γ,    y(b) = β     (2.34)
select an initial value δ₀ and iteratively compute the sequence of functions on [a, b] given by (2.33).
By the theorem mentioned in Section 2.3 that is the analogue of Theorem 2.1.3 we obtain
the analogue of Theorems 2.1.4 and 2.3.2 that guarantees convergence of the iterates to the unique
solution of (2.34).
Example 2.4.2. Reconsider the boundary value problem (2.28) of Example 2.3.3,
y'' = −y' e^{−y},    y'(0) = 1,    y(1) = ln 2,     (2.35)
with exact solution y(t) = ln(1 + t). The relation (2.31) is
y'(1) = 1 − ∫_0^1 y'(s) e^{−y(s)} ds.     (2.36)
Exactly as in Example 2.3.3 we set u = y' and v = e^{−y} to obtain the expanded system
y' = u,    u' = −uv,    v' = −uv
but now with initial conditions
y(1) = ln 2,    u(1) = δ,    v(1) = 1/2,
a system on R³ for which the y-component is the solution of the boundary value problem (2.35). Making the initial choice δ₀ = 1 (assuming that the slope from one endpoint to the next will not change too drastically) and using (2.36) for the update of δ, (2.33) as applied to the expanded system with the auxiliary variable v is
y[0](t) ≡ ln 2,    u[0](t) ≡ 1,    v[0](t) ≡ 1/2,    δ[0] = 1     (2.37a)
and
y[k+1](t) = ln 2 + ∫_1^t u[k](s) ds
u[k+1](t) = δ[k] − ∫_1^t u[k](s) v[k](s) ds
v[k+1](t) = 1/2 − ∫_1^t u[k](s) v[k](s) ds     (2.37b)
δ[k+1] = 1 − ∫_0^1 u[k](s) v[k](s) ds.
The exact value of δ = y'(1) is 1/2. The first eight values of δ[k] are:
δ[0] = −0.50000, δ[1] = 0.41667, δ[2] = 0.50074,
δ[3] = 0.51592, δ[4] = 0.49987, δ[5] = 0.49690,
δ[6] = 0.50003, δ[7] = 0.50060, δ[8] = 0.50000
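A Maple sketch of the iteration (2.37) (our own illustration; we write d for δ and use a dummy variable s) is
restart;
n := 8:
y[0] := ln(2):  u[0] := 1:  v[0] := 1/2:  d[0] := 1:
for k from 0 to n-1 do
  y[k+1] := ln(2) + int(subs(t = s, u[k]), s = 1 .. t);
  u[k+1] := d[k] - int(subs(t = s, u[k]*v[k]), s = 1 .. t);
  v[k+1] := 1/2 - int(subs(t = s, u[k]*v[k]), s = 1 .. t);
  d[k+1] := 1 - int(subs(t = s, u[k]*v[k]), s = 0 .. 1);
end do:
evalf(d[n]);    # compare with the exact value y'(1) = 1/2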
CHAPTER 3: COMPARISON OF METHODS
We will compare the method presented in Chapters 1 and 2 against methods that are commonly used in the literature. These are the Shooting Method using Fourth Order Runge-Kutta Approximation, the Finite Difference Method, the Adomian Decomposition Method, and the Power Series Method.
To show the efficacy and applicability of our method, we will compare all these methods on a
linear boundary value problem and then a non-linear boundary value problem.
3.1 Linear Cases
A second order two-point boundary value problem
y'' = f(t, y, y'),    y(a) = α,    y(b) = β
is linear if f(t, y, u) = q(t)y + p(t)u + r(t). Theorem 1.2.2 guarantees a unique solution for the second-order linear boundary value problem
y'' = −y,    y(0) = 1,    y(1) = 0.     (3.1)
We will compare methods applied to (3.1), whose exact solution is
y(t) = cos t − (cos 1 / sin 1) sin t.     (3.2)
3.1.1 Runge-Kutta with Shooting Method
The method is based on the decomposition (1.13) of the solution of (1.12) given in Theorem
1.2.2. The unknown solution of the boundary value problem (1.12) can be expressed as a linear
combination of the solutions of the initial value problems (1.14) and (1.15), each of which, after
it has been expressed as a first order system, can be approximated by a numerical scheme, in this
case the fourth order Runge-Kutta method, using the approximate values at the right endpoint to
approximate the second coefficient (β − y1 (b))/y2 (b).
A key point is that there are parameter choices to be made in applying this method. In particular, the number n of partition points of the interval [a, b] must be selected in advance. Another
important point that should be made is that the output of this method is not a function, as with our
algorithm, but a sequence of n + 1 points that approximate the values of the unknown solutions at
n + 1 t-values. We are able to compute the error only at these n + 1 points and then have linearly
interpolated between them.
Applying this method requires approximating the solutions to the initial value problems
y'' = −y,    y(0) = 1,    y'(0) = 0     (3.3)
and
y'' = −y,    y(0) = 0,    y'(0) = 1.     (3.4)
If y₁(t) denotes the solution to equation (3.3) and y₂(t) denotes the solution to equation (3.4), then
y(t) = y₁(t) + ((β − y₁(1)) / y₂(1)) y₂(t)
is the unique solution to the boundary value problem (3.1).
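As an illustration (our own sketch, using Maple's built-in numerical integrator with the classical fourth order Runge-Kutta method; the timings reported later in this chapter were obtained from MATLAB implementations), the two initial value problems can be integrated and then combined according to (1.13):
restart;
ode := diff(y(t), t, t) = -y(t):
s1 := dsolve({ode, y(0) = 1, D(y)(0) = 0}, numeric, method = classical[rk4], stepsize = 0.1):
s2 := dsolve({ode, y(0) = 0, D(y)(0) = 1}, numeric, method = classical[rk4], stepsize = 0.1):
y1 := T -> rhs(s1(T)[2]):            # approximate value of y1 at t = T
y2 := T -> rhs(s2(T)[2]):            # approximate value of y2 at t = T
c := (0 - y1(1))/y2(1):              # second coefficient in the decomposition (1.13)
yapprox := T -> y1(T) + c*y2(T):
yapprox(0.25);                       # compare with cos(0.25) - cos(1)/sin(1)*sin(0.25)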
A plot of the error in the final approximation is shown below.
3.1.2 Finite Difference Method
This method reduces a differential equation to an algebraic equation by replacing the derivatives by their finite difference approximations. Once the number n of subintervals into which [a, b] is partitioned has been chosen, the approximation technique leads to an (n − 1) × (n − 1) system of linear equations whose solution gives as output a sequence of n + 1 points that approximate the values of the unknown solution at n + 1 t-values. The parameter choice that we need to make in advance is the number n of partition points of the interval [a, b]. Then we can set the mesh size h = (b − a)/n and the nodes t_i = a + ih. As with the Runge-Kutta shooting method, the output of this method is not a function but a sequence of n + 1 points that approximate the values of the unknown solution at n + 1 t-values. We are able to compute the error only at these n + 1 points and then have linearly interpolated between them. The method applies to the boundary value problem
y'' = f(t, y, y'),    y(a) = α,    y(b) = β,    a ≤ t ≤ b.     (3.5)
To apply the method to the equation (3.1), first subdivide the interval [0, 1] into n equal subintervals. The nodes of this subdivision are t_i = ih, i = 0, ..., n, where h = (b − a)/n. We define y_i = y(t_i), and from the boundary conditions we have y₀ = 1 and y_n = 0. From the central difference formula we have
y'' ≈ (y_{i+1} − 2y_i + y_{i−1}) / h²
hence the equation (3.1) becomes
y_{i+1} − 2y_i + y_{i−1} + h² y_i = 0,    i = 1, ..., n − 1.
From here obtaining the y_i is straightforward, and this sequence of values at the partition points of the interval [0, 1] approximates the solution of (3.1). For problem (3.1) we set n = 4 partition points. A plot of the error in the final approximation is shown below.
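A minimal Maple sketch of this computation (our own illustration, with n = 4 as in the text) sets up and solves the resulting tridiagonal linear system:
restart;
with(LinearAlgebra):
n := 4:  h := 1/n:
A := Matrix(n-1, n-1):  rhsv := Vector(n-1):
for i to n-1 do
  A[i, i] := -2 + h^2;                      # coefficient of y_i
  if i > 1 then A[i, i-1] := 1 end if;      # coefficient of y_(i-1)
  if i < n-1 then A[i, i+1] := 1 end if;    # coefficient of y_(i+1)
end do:
rhsv[1] := -1:    # boundary value y_0 = 1 moved to the right hand side; y_n = 0 contributes nothing
Y := LinearSolve(A, rhsv):
evalf(Y);         # approximations to y at t = 1/4, 1/2, 3/4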
3.1.3 Adomian
As we have noted, most realistic systems of ordinary differential equations do not have analytic solutions, so a numerical technique must be used. An advantage of the Adomian decomposition method is that it can provide an analytical approximation to the problem. The method decomposes the differential equation into three operators: an invertible operator, a linear operator, and a nonlinear operator. The invertible operator is the highest order derivative in the differential equation. The output of this method is a function, and we will see that the outputs of the Power Series method and our new method are also functions.
Consider
y'' = f(t) + R(y) + N(y),    y(0) = α,    y(h) = β,    0 ≤ t ≤ 1     (3.6)
where L = d²/dt² is the invertible operator, which can be inverted as L^{−1}(·) = ∫_0^t ∫_0^s · dx ds, N is the nonlinear operator, and R stands for the remaining part. The solution will be in the form y = Σ_{k=0}^{∞} y_k. The way the method technically works is that we compute φ_n(t) = Σ_{k=0}^{n} y_k and then y = lim_{n→∞} φ_n. If the limit can be taken then an analytic solution can be obtained. The number n of iterations must be selected in advance.
Applying the inverse linear operator L^{−1}(·) = ∫_0^t ∫_0^s · dx ds to both sides of equation (3.1) and using the boundary value at t = 0 yields
y(t) = 1 + γt − L^{−1} y(t)
where γ = y'(0) is an unknown parameter.
By the standard recursion scheme
y₀ = 1 + γt,    y_n = −L^{−1} y_{n−1},    n = 1, 2, ....
We calculate the solution components as
y₁ = −(1/2)t² − (1/6)γt³
y₂ = (1/24)t⁴ + (1/120)γt⁵
···
and the approximate solution is given by
φ_n(t) = Σ_{k=0}^{n−1} y_k(t).
Therefore the approximate solutions are
φ₁ = 1 + γt
φ₂ = 1 + γt − (1/2)t² − (1/6)γt³
φ₃ = 1 + γt − (1/2)t² − (1/6)γt³ + (1/24)t⁴ + (1/120)γt⁵
···
By matching φ_n(t) at t = 1, we can then determine the value of γ as a convergent sequence of approximate values. If we match φ₃(t) at t = 1, then we need to solve
1 + γt − (1/2)t² − (1/6)γt³ + (1/24)t⁴ + (1/120)γt⁵ = 0   at t = 1;
we obtain γ = −0.6435643568, and thus the approximate solution is
φ₃(t) = 1 − 0.6435643568 t − (1/2)t² + (0.6435643568/6) t³ + (1/24)t⁴ − (0.6435643568/120) t⁵.
The exact value of γ is y'(0) = −0.6420926160. The way the Adomian method is explained in its standard form requires it to be solved symbolically. However, if we would like to use MATLAB to solve it then the common procedure is to solve it by the linear shooting method, which makes it equivalent to the linear Power Series method. A plot of the error in the final approximation φ₃(t) is shown below.
3.1.4 Power Series Method

The method is feasible because it is based on the decomposition (1.13) of the solution of (1.12) given in Theorem 1.2.2. We assume the solutions of the differential equations (1.14) and (1.15) can be represented as power series of order n. We can find the coefficients of the power series recursively because the two conditions are now imposed at the same point. The unknown solution is expanded in powers of t − a. The output of this method is a function. To apply the method we must fix the number of iterations n in advance.

Applying this method to equation (3.1) requires approximating the solutions of the initial value problems (3.3) and (3.4). If y_1(t) = ∑_{i=0}^n a_i tⁱ denotes the solution of equation (3.3), then substituting it into (3.3) we obtain

a_n = −a_{n−2} / (n(n − 1)),

and from the initial values it is easy to see that a_0 = 1 and a_1 = 0. Therefore the approximate solution y_1(t) with n = 4 is

y_1(t) = 1 − (1/2)t² + (1/24)t⁴.    (3.7)
If y_2(t) = ∑_{i=0}^n b_i tⁱ denotes the solution of equation (3.4), we similarly obtain

y_2(t) = t − (1/6)t³.

Then, using

y(t) = y_1(t) + ((β − y_1(1)) / y_2(1)) y_2(t),

we obtain the approximate solution

y(t) = 1 − 0.65000t − 0.50000t² + 0.10833t³ + 0.041667t⁴    (3.8)

to the solution of (3.1). A plot of the error in the final approximation is shown below.
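As a small illustration of this construction (our own sketch, with names we chose; it assumes, as deduced above, that (3.1) is y'' + y = 0 with y(0) = 1, y(1) = 0), the following snippet computes the coefficients of y_1 and y_2 from the recurrence a_n = −a_{n−2}/(n(n−1)) and forms the combination that gives (3.8).

```python
import numpy as np

def series_coeffs(a0, a1, n):
    """Coefficients of a degree-n series solution of y'' = -y."""
    a = [a0, a1]
    for k in range(2, n + 1):
        a.append(-a[k - 2] / (k * (k - 1)))
    return np.array(a)

n = 4
y1 = series_coeffs(1.0, 0.0, n)      # y1(0) = 1, y1'(0) = 0
y2 = series_coeffs(0.0, 1.0, n)      # y2(0) = 0, y2'(0) = 1
beta = 0.0                           # boundary value y(1) = 0
c = (beta - y1.sum()) / y2.sum()     # y_j(1) is the sum of its coefficients
y = y1 + c * y2                      # coefficients of the approximation (3.8)
print(np.round(y, 6))
```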
3.1.5 Our New Method

We now implement our new method on the boundary value problem (3.1). The process has already been explained in Section 2.2. Since the method is based on Picard iteration, an initial guess is required, and we have introduced γ = y′(0) as an unknown parameter. The output of this method is a function. The exact solution y(t) and the approximation y[5](t), with coefficients rounded to five decimal places, are

y = 1 − 0.64210t − 0.50000t² + 0.10702t³ + 0.04167t⁴ + 0.00534t⁵
y[5] = 1 − 0.64444t − 0.50000t² + 0.11111t³ + 0.04167t⁴ + 0.00433t⁵,

and the exact value γ = −0.64210 is approximated by γ[4] = −0.64444. A plot of the error in the final approximation y[5](t) is shown in the following figure.
3.1.6 A Comparison of Methods

The following table compares the results of the five methods on problem (3.1). It gives the approximations to the exact solution at the five partition points common to the Shooting and Finite Difference methods.

t      Runge-Kutta  Finite-Difference  Power-Series  Adomian    New-Method  Exact
0.000  1.000000     1.000000           1.000000      1.000000   1.000000    1.000000
0.250  0.810051     0.810403           0.820313      0.810056   0.809530    0.810056
0.500  0.569740     0.570156           0.587500      0.569747   0.569010    0.569747
0.750  0.294009     0.294274           0.310938      0.294013   0.2934977   0.294014
1.000  0.000000     0.000000           0.000000      −0.000000  0.0000000   0.000000

Our method, like the Power Series and Adomian methods, effectively approximates the solution at every point of the interval [a, b], whereas the other methods give only a sequence of approximation points. Thus no interpolation is needed for our method to give approximate values anywhere in the interval.
To compute the elapsed time for each method we implemented them in MATLAB and used the "tic"/"toc" commands. The measured elapsed times (in seconds) are as follows:

Runge-Kutta  Finite-Difference  Power-Series  New-Method
10 steps     10 steps           n = 10        n = 10
0.003175     0.013968           0.004003      0.002682

3.2 Non-Linear Case

In this section we apply the same five approximation methods introduced in the previous section to the nonlinear boundary value problem
y'' = −sin y,    y(0) = 1,    y(1) = 0,    (3.9)

whose exact solution is

y(t) = k arccos( 2e^{t−1} / (e^{2(t−1)} + 1) ),    k = [ arccos( 2e^{−1} / (e^{−2} + 1) ) ]^{−1}.

3.2.1 Runge-Kutta with Shooting Method
The technique for the nonlinear second order boundary value problem

y'' = f(t, y, y'),    y(a) = α,    y(b) = β,    (3.10)

is similar to the linear case, except that the solution of a nonlinear problem cannot be expressed as a linear combination of the solutions of two initial value problems. The idea is to replace the boundary value problem (3.10) by a sequence of initial value problems of the form (3.11), begin at one end of the interval, and shoot to the other end with an initial value solver until the boundary condition at the other end converges to its correct value. The solutions of the initial value problems can be approximated by a numerical scheme, in this case the fourth order Runge-Kutta method:

y'' = f(t, y, y'),    y(a) = α,    y'(a) = γ.    (3.11)

The method is an iteration procedure based on a guess for an unknown initial condition, which in this case is y'(a) = γ. The choices to be made in applying the method are therefore the number n of partition points of the interval [a, b], which must be selected in advance, and the initial guess for γ. We start with a parameter γ_0 that determines the initial elevation and shoot from the point (a, α) toward the other end; if the trajectory does not land sufficiently close to β we correct the guess. To correct the initial guess and generate a new approximation we use Newton's method to generate γ_k, in the form

γ_k = γ_{k−1} − ( y(b, γ_{k−1}) − β ) / ( ∂y/∂γ (b, γ_{k−1}) ),

where y(b, γ_j) is the value at t = b of the solution of the initial value problem

y'' = f(t, y, y'),    y(a) = α,    y'(a) = γ_j.    (3.12)

As in the linear case the output of this method is not a function but a sequence of n + 1 points that approximate the values of the unknown solution at n + 1 values of t, with linear interpolation between them. We applied the method to problem (3.9) with n = 7 and initial guess γ_0 = (β − α)/(b − a). A plot of the error in the final approximation y[7](t) is shown below.
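A compact numerical sketch of this procedure is given below (our own illustration, with names we chose; the derivative ∂y/∂γ is approximated here by a finite difference rather than by integrating a variational equation, which is one common way to implement the Newton correction). It applies RK4 shooting to problem (3.9).

```python
import numpy as np

def rk4_ivp(f, y0, a, b, n):
    """Classical RK4 for the system y' = f(t, y) on [a, b] with n steps."""
    h = (b - a) / n
    t, y = a, np.array(y0, dtype=float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def shoot(alpha, beta, a=0.0, b=1.0, n=7, tol=1e-10):
    f = lambda t, y: np.array([y[1], -np.sin(y[0])])   # y'' = -sin y
    gamma = (beta - alpha) / (b - a)                   # initial guess
    for _ in range(50):
        end = rk4_ivp(f, [alpha, gamma], a, b, n)[0] - beta
        if abs(end) < tol:
            break
        eps = 1e-6                                     # finite-difference dy/dgamma
        end_eps = rk4_ivp(f, [alpha, gamma + eps], a, b, n)[0] - beta
        gamma -= end * eps / (end_eps - end)
    return gamma

print(shoot(1.0, 0.0))   # approximates y'(0) for problem (3.9)
```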
3.2.2 Finite Difference Method

The finite difference approximation in the nonlinear case is similar to the linear case, except that the technique now leads to an (n − 1) × (n − 1) system of nonlinear equations whose solution gives as output a sequence of n + 1 points approximating the values of the unknown solution at n + 1 values of t, where n is the number of subintervals into which [a, b] is partitioned. As in the linear case, equation (3.9) can be written as

y_{i+1} − 2y_i + y_{i−1} + h² sin(y_i) = 0,    i = 1, . . . , n − 1.

Because of the term sin(y_i) we must solve a system of nonlinear equations, which can be done approximately by Newton's method. A plot of the error in the final approximation is shown below.
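The sketch below (our own; it uses scipy's general-purpose root finder in place of a hand-written Newton iteration, an implementation choice rather than what the text prescribes) sets up and solves this nonlinear system for problem (3.9).

```python
import numpy as np
from scipy.optimize import fsolve

def nonlinear_fd(n=10, alpha=1.0, beta=0.0):
    """Finite differences for y'' = -sin y, y(0) = alpha, y(1) = beta."""
    h = 1.0 / n

    def residual(y_int):
        y = np.concatenate(([alpha], y_int, [beta]))
        # y_{i+1} - 2 y_i + y_{i-1} + h^2 sin(y_i) = 0 at the interior nodes
        return y[2:] - 2 * y[1:-1] + y[:-2] + h**2 * np.sin(y[1:-1])

    y0 = np.linspace(alpha, beta, n + 1)[1:-1]   # straight-line initial guess
    y_int = fsolve(residual, y0)
    return np.linspace(0.0, 1.0, n + 1), np.concatenate(([alpha], y_int, [beta]))

t, y = nonlinear_fd()
print(y)
```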
3.2.3 Adomian

Applying the inverse operator L⁻¹(·) = ∫₀ᵗ ∫₀ˢ (·) dx ds to both sides of equation (3.9) and using the boundary value at t = 0 yields

y(t) = 1 + γt − L⁻¹ sin y(t),

where γ = y′(0) is an unknown parameter. Next we decompose the solution y(t) and the nonlinearity −sin y(t) as

y(t) = ∑_{n=0}^∞ y_n(t),    −sin y(t) = ∑_{n=0}^∞ A_n,

where the A_n are the Adomian polynomials for the nonlinearity N y(t) = −sin y(t), which can be computed by

A_n = (1/n!) dⁿ/dλⁿ [ N( ∑_{i=0}^n λⁱ y_i ) ] |_{λ=0}.

Hence

A_0 = −sin y_0
A_1 = −y_1 cos y_0
A_2 = (1/2) y_1² sin y_0 − y_2 cos y_0
· · ·

By the standard recursion scheme

y_0 = 1 + γt,
y_n = L⁻¹ A_{n−1},    n = 1, 2, . . . ,
we calculate the solution components

y_1 = −(1/2) [ sin 1 + cos(1) tγ − sin(1 + γt) ] / γ²

y_2 = −(1/4) [ 5 sin 1 cos 1 + 4 sin²(1) tγ + 6 cos²(1) tγ + 4 sin 1 cos(1 + γt) ] / γ⁴
      + [ −8 cos 1 sin(1 + γt) + 4 cos 1 cos(1 + γt) γt − cos(1 + γt) − γt ] / γ⁴
· · ·

and the approximate solution is given by

φ_n(t) = ∑_{k=0}^{n−1} y_k(t).

Therefore the approximate solutions are

φ_1 = sin(1 + γt)

φ_2 = sin(1 + γt) − (1/2) [ sin 1 + cos(1) tγ − sin(1 + γt) ] / γ²

φ_3 = sin(1 + γt) − (1/2) [ sin 1 + cos(1) tγ − sin(1 + γt) ] / γ²
      − (1/4) [ 5 sin 1 cos 1 + 4 sin²(1) tγ + 6 cos²(1) tγ + 4 sin 1 cos(1 + γt) ] / γ⁴
      + [ −8 cos 1 sin(1 + γt) + 4 cos 1 cos(1 + γt) γt − cos(1 + γt) − γt ] / γ⁴
· · ·

By matching φ_n(t) at t = 1 we can determine the value of γ as a convergent sequence of approximate values. If we match φ_2(t) at t = 1, then we need to solve

sin(1 + γ) − (1/2) [ sin 1 + cos(1) γ − sin(1 + γ) ] / γ² = 0.

Therefore, to solve for γ one has to approximate it with a numerical root finder, which makes the Adomian method computationally more demanding here.
3.2.4 Power Series Method

The general approach for the nonlinear case is the same as for the linear case. Because the right hand sides of the first order system associated with the two-point boundary value problem are polynomials, the only additional difficulty compared with the linear case is that we must apply the Cauchy product a few times. Here we solve problem (3.9) by the Power Series method. Note that in the Power Series method and in our new method we never evaluate f(t, y, y′) directly; instead we compute with a sequence of polynomials, which makes these two methods computationally friendly. It is apparent that we must introduce a variable v = sin y, and, since the derivative of sine is cosine, also a variable w = cos y. Thus

y′ = u
u′ = −v
v′ = uw
w′ = −uv

with initial conditions

y(0) = 1,    u(0) = γ,    v(0) = sin 1,    w(0) = cos 1.

We assume that we can write the solutions of this system as power series and then determine what the coefficients must be (a sketch of this recurrence appears below). After 4 iterations we get

y = 1 − t + 0.42074t² − 0.09005t³ − 0.09005t⁴ − 0.01612t⁵ + 0.01977t⁶ − 0.00792t⁷.

A plot of the error in the final approximation is shown below.
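The following Python sketch (our own, with illustrative names; γ is fixed at an assumed guess rather than determined by the boundary matching, which the text carries out separately) shows how the Maclaurin coefficients of y, u, v, w can be generated degree by degree from the polynomial system above using Cauchy products.

```python
import math

def cauchy(A, B, k):
    """k-th coefficient of the Cauchy product of the series A and B."""
    return sum(A[i] * B[k - i] for i in range(k + 1))

def maclaurin_coeffs(gamma, N):
    """Degree-N Maclaurin coefficients for y' = u, u' = -v, v' = uw, w' = -uv."""
    Y, U = [1.0], [gamma]
    V, W = [math.sin(1.0)], [math.cos(1.0)]
    for k in range(N):
        Y.append(U[k] / (k + 1))
        U.append(-V[k] / (k + 1))
        V.append(cauchy(U, W, k) / (k + 1))
        W.append(-cauchy(U, V, k) / (k + 1))
    return Y

# With an assumed slope guess gamma, the series for y begins:
print(maclaurin_coeffs(gamma=-0.75, N=7))
```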
3.2.5 Our New Method

Our method handles the nonlinear case exactly like the linear case; the only extra work is that we may need to introduce a few more auxiliary variables than in the linear case. Problem (3.9) was solved with n = 7; an example in Chapter 2 explains how to solve such a problem in detail. The exact solution y(t) and the approximation y[5](t), with the coefficients of the first five terms rounded to five decimal places, are

y = 1 − 0.74853t − 0.28504t² − 0.01997t³ + 0.03610t⁴ + 0.01975t⁵
y[5] = 1 − 0.68115t − 0.42074t² + 0.06135t³ + 0.03512t⁴ + 0.00894t⁵.

A plot of the error in the final approximation y[8](t) is shown below.
3.2.6 Methods Comparison

A quick comparison of the methods is given in the following tables.

t      Runge-Kutta  Finite-Difference  Power-Series  New-Method  Exact
0.000  1.000000     1.000000           1.000000      1.000000    1.000000
0.143  0.894286     0.894394           0.893919      0.894254    0.887208
0.286  0.772692     0.772874           0.771963      0.772630    0.762679
0.429  0.636889     0.637104           0.635814      0.636797    0.626784
0.571  0.488986     0.489194           0.487617      0.488868    0.480525
0.714  0.331531     0.331695           0.330024      0.331399    0.325612
0.857  0.167457     0.167549           0.166228      0.167350    0.164448
1.000  −0.000004    0.000000           −0.000006     0.000000    0.000000
The measured elapsed times (in seconds) for each method are as follows:

Runge-Kutta  Finite-Difference  Power-Series  New-Method
10 steps     10 steps           n = 10        n = 10
0.032214     0.016092           0.007138      0.005649
CHAPTER 4: VOLTERRA INTEGRAL EQUATION
A Volterra integral equation of the second kind is an equation of the form

y(t) = ϕ(t) + ∫_a^t K(t, s) f(s, y(s)) ds,    (4.1)

where ϕ, K, and f are known functions of suitable regularity and y is an unknown function. Such equations lend themselves to solution by successive approximation using Picard iteration, although in the case that the known functions are not polynomials the process can break down when quadratures arise that cannot be performed in closed form. Equation (4.1) can be generalized to

y(t) = ϕ(t) + ∫_a^t K(t, s, y(s)) ds.    (4.2)

If the "kernel" K is independent of the first variable t then the integral equation is equivalent to an initial value problem. Indeed, in precisely the reverse of the process used in Chapter 2, we have that

y(t) = ϕ(t) + ∫_a^t K(s, y(s)) ds

is equivalent to

y′(t) = ϕ′(t) + K(t, y(t)),    y(a) = ϕ(a).

In this chapter we provide a method for introducing auxiliary variables in (4.2), in the case that the t-dependence in K factors out, so that K(t, s, z) = f(t) k(s, z), in such a way that (4.2) embeds in a vector-valued polynomial Volterra equation, thus extending the Parker-Sochacki method to this setting and thereby
obtaining a computationally efficient method for closely approximating solutions of (4.2).
Existence and uniqueness theorems for vector-valued Volterra integral equations are not common in the
literature, so we begin Section 4.1 by stating and proving such a theorem. The proof is based on an application
of the Contraction Mapping Theorem, so that we immediately obtain a theorem that guarantees convergence
of Picard iterates to the unique solution. We continue Section 4.1 by describing how to introduce auxiliary
variables to obtain a feasible, efficient computational procedure for approximating solutions of (4.2) when
K factors as K(t, s, z) = f (t)k(s, z). In Section 4.2 we illustrate the method with both linear and nonlinear
examples.
4.1 Uniqueness Theorem

For a subset S of R^m we let C(S, R^n) denote the set of continuous mappings from S into R^n. The following theorem is a straightforward generalization to the vector-valued case of Theorem 2.1.1 of [?].

Theorem 4.1.1. Let I = [a, b] ⊂ R and J = {(x, y) : x ∈ I, y ∈ [a, x]} ⊂ I × I. Suppose ϕ ∈ C(I, R^n) and K ∈ C(J × R^n, R^n) and that K is Lipschitz in the last variable: there exists L ∈ R such that

|K(x, y, z) − K(x, y, z′)|_sum ≤ L |z − z′|_sum

for all (x, y) ∈ J and all z, z′ ∈ R^n. Then the integral equation

y(t) = ϕ(t) + ∫_a^t K(t, s, y(s)) ds    (4.3)

has a unique solution y(t) ∈ C(I, R^n).
Proof. Let C denote the set of continuous mappings from I into R^n, C = C(I, R^n). Since ϕ and K are continuous, for any y ∈ C the right hand side of (4.3) is a continuous function of t. Thus we may define a mapping T from C into itself by

T(y)(t) = ϕ(t) + ∫_a^t K(t, s, y(s)) ds.

An element y of C is a solution of (4.3) if and only if y is a fixed point of the operator T, which we will find by means of the Contraction Mapping Theorem 1.3.5.

Let η and µ be elements of C. Then for all t ∈ I,

|(Tη)(t) − (Tµ)(t)|_sum ≤ ∫_a^t |K(t, s, η(s)) − K(t, s, µ(s))|_sum ds ≤ ∫_a^t L |η(s) − µ(s)|_sum ds.    (4.4)

If we place, as usual, the supremum norm on C then we obtain from (4.4) the inequality

|(Tη)(t) − (Tµ)(t)|_sum ≤ L ||η − µ||_sup (b − a)    for all t ∈ I.

Taking the supremum over t ∈ I shows that T is a contraction provided L(b − a) < 1, and since C is complete with respect to the supremum norm the theorem follows from the Contraction Mapping Theorem 1.3.5, but with the additional condition that L(b − a) < 1. To obtain the theorem as stated (that is, without the extra condition that L(b − a) < 1) we fix any constant β > 1 and, following Hackbusch ([?]), instead equip C with the special norm

||y||_H = max{ e^{−βLt} |y(t)|_sum : t ∈ I }.

It is readily verified that ||·||_H does define a norm on C and that it is equivalent to the supremum norm, hence C is also complete when equipped with the norm ||·||_H. Since, for any η and µ in C and any s ∈ I,

|η(s) − µ(s)|_sum = e^{βLs} e^{−βLs} |η(s) − µ(s)|_sum ≤ e^{βLs} max{ e^{−βLt} |η(t) − µ(t)|_sum : t ∈ I } = e^{βLs} ||η − µ||_H,

estimate (4.4) becomes

|(Tη)(t) − (Tµ)(t)|_sum ≤ L ||η − µ||_H ∫_a^t e^{βLs} ds
   = L ||η − µ||_H (1/(βL)) [ e^{βLt} − e^{βLa} ]
   = (1/β) e^{βLt} [ 1 − e^{βL(a−t)} ] ||η − µ||_H
   ≤ (1/β) e^{βLt} [ 1 − e^{βL(a−b)} ] ||η − µ||_H

for all t ∈ I. Multiplying by e^{−βLt} and taking the supremum over t ∈ I yields

||(Tη) − (Tµ)||_H ≤ [ (1/β)(1 − e^{−(b−a)βL}) ] ||η − µ||_H.

By our choice β > 1 the constant in brackets is strictly less than 1, so that T is a contraction on C with respect to this norm, and the theorem follows by the Contraction Mapping Theorem 1.3.5. □

Because the theorem was proved by means of the Contraction Mapping Theorem we immediately obtain the following result.
Theorem 4.1.2. Under the hypotheses of Theorem 4.1.1, for any choice of the initial mapping y[0](t) the sequence of Picard iterates

y[k+1](t) = ϕ(t) + ∫_a^t K(t, s, y[k](s)) ds

converges to the unique solution of the integral equation (4.3).

Now let a Volterra integral equation

y(t) = ϕ(t) + ∫_a^t f(t) k(s, y(s)) ds    (4.5)
be given, where ϕ, f ∈ C([a, b], R), k ∈ C([a, b] × R, R), and k satisfies a Lipschitz condition in y. Find functions h1 (t), . . . , hr (t) and polynomials P and Q in r indeterminates, R in r + 2 indeterminates, and P1 , . . . , Pr
in r + 1 indeterminates such that if y(t) solves (4.5) then
(i) ϕ(t) = P(h1 (t), . . . , hr (t)),
(ii) f (t) = Q(h1 (t), . . . , hr (t)),
(iii) k(s, y(s)) = R(s, y(s), h1 (s), . . . , hr (s)), and
(iv) v_j(t) = h_j(t) solves v_j′ = P_j(t, v_1, . . . , v_r), v_j(a) = h_j(a), 1 ≤ j ≤ r.
The initial value problem obtained by adjoining to the system in (iv) the values of h1 through hr at a has
a unique solution, which is the unique solution of the vector-valued Volterra equation
v_1(t) = v_1(a) + ∫_a^t P_1(s, v_1(s), . . . , v_r(s)) ds
   ⋮                                                        (4.6)
v_r(t) = v_r(a) + ∫_a^t P_r(s, v_1(s), . . . , v_r(s)) ds.

Adjoin to (4.6) the original Volterra equation in the form

y(t) = P(v_1(t), . . . , v_r(t)) + ∫_a^t Q(v_1(t), . . . , v_r(t)) R(s, y(s), v_1(s), . . . , v_r(s)) ds

to obtain

y(t) = P(v_1(t), . . . , v_r(t)) + ∫_a^t Q(v_1(t), . . . , v_r(t)) R(s, y(s), v_1(s), . . . , v_r(s)) ds
v_1(t) = v_1(a) + ∫_a^t P_1(s, v_1(s), . . . , v_r(s)) ds
   ⋮                                                        (4.7)
v_r(t) = v_r(a) + ∫_a^t P_r(s, v_1(s), . . . , v_r(s)) ds.
System (4.7) satisfies the hypotheses of Theorem (4.1.1), hence has a unique solution, as does the original
Volterra integral equation. Since v1 , . . . , vr are completely specified by (4.1) and (4.6), the y component of
the solution of the augmented Volterra integral equation (4.7) must be the solution of (4.5). But by Theorem
4.1.2 the Picard iteration scheme applied to (4.7), say with y[0] (t) ≡ y(0), converges and is computationally
feasible, so we obtain a computable approximation to the solution of (4.5).
4.2 Examples
In this section we illustrate the method by means of several examples which include both linear and
nonlinear Volterra integral equations. A Volterra equation (4.2) is said to be linear if the integrand factors as
K(t, s, y) = k(t, s)y and nonlinear if K(t, s, y) = k(t, s) f (s, y), where f (s, y) is a nonlinear function of y.
Example 4.2.1. Effati ([?]) introduced the linear Volterra integral equation of the second kind

y(t) = e^t sin t + ∫_0^t ((2 + cos t)/(2 + cos s)) y(s) ds,    (4.8)

which is of the form (4.5) with ϕ(t) = e^t sin t, f(t) = 2 + cos t, and a kernel of a form we can treat, k(s, y(s)) = y(s)/(2 + cos s).
One appropriate choice of auxiliary variables is

v_1 = exp(t),  v_2 = cos(t),  v_3 = sin(t),  v_4 = 2 + v_2,  v_5 = 1/v_4,

which satisfy the system of first order ordinary differential equations

v_1′ = v_1
v_2′ = −v_3
v_3′ = v_2
v_4′ = v_2′ = −v_3
v_5′ = −v_4′/v_4² = v_3 v_5²,

which is equivalent to

v_1(t) = v_1(0) + ∫_0^t v_1(s) ds
v_2(t) = v_2(0) − ∫_0^t v_3(s) ds
v_3(t) = v_3(0) + ∫_0^t v_2(s) ds
v_4(t) = v_4(0) − ∫_0^t v_3(s) ds
v_5(t) = v_5(0) + ∫_0^t v_3(s) v_5²(s) ds.
The initial values of the auxiliary variables are determined by their definition. The initial value y(0) of the
solution of the integral equation (4.8) is found simply by evaluating that equation at t = 0 to obtain y(0) = 0.
Thus the iteration scheme is

y[k+1](t) = v_1[k](t) v_3[k](t) + v_4[k](t) ∫_0^t v_5[k](s) y[k](s) ds
v_1[k+1](t) = 1 + ∫_0^t v_1[k](s) ds
v_2[k+1](t) = 1 − ∫_0^t v_3[k](s) ds
v_3[k+1](t) = ∫_0^t v_2[k](s) ds
v_4[k+1](t) = 3 − ∫_0^t v_3[k](s) ds
v_5[k+1](t) = 1/3 + ∫_0^t v_3[k](s) (v_5[k](s))² ds.

We can initialize as we please, but it is reasonable to choose

y[0] ≡ 0
v_1[0] ≡ exp 0 = 1
v_2[0] ≡ cos 0 = 1
v_3[0] ≡ sin 0 = 0
v_4[0] ≡ 2 + v_2[0] = 3
v_5[0] ≡ 1/3.

The exact solution of (4.8) is

y(t) = e^t sin t + e^t (2 + cos t) ( ln 3 − ln(2 + cos t) ),

whose Maclaurin series, with its coefficients rounded to five decimal places, begins

y(t) = 1.00000t + 1.50000t² + 0.83333t³ + 0.16667t⁴ − 0.03333t⁵ − 0.02593t⁶ − 0.00529t⁷ + O(t⁸).

The Maclaurin series of the eighth Picard iterate, y[8](t), with its coefficients rounded to five decimal places, begins

y[8](t) = 1.00000t + 1.50000t² + 0.83333t³ + 0.16667t⁴ − 0.03333t⁵ − 0.02593t⁶ − 0.00529t⁷ + O(t⁸).
A plot of the error in the approximation of the exact solution by y[8] (t) is given in Figure 4.1.
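The scheme above involves only polynomial operations, so each Picard iterate can be computed exactly as a truncated Maclaurin series. The following sketch (our own illustration; the truncation degree N and the number of iterations are choices we made) carries out the iteration for Example 4.2.1 with numpy's polynomial utilities.

```python
import numpy as np
from numpy.polynomial import polynomial as P

N = 12  # keep Maclaurin coefficients up to degree N

def trunc(c):
    return np.asarray(c, dtype=float)[:N + 1]

def mul(a, b):
    return trunc(P.polymul(a, b))

def integ(c):
    # integral from 0 to t, as a coefficient array
    return trunc(P.polyint(c))

# Picard iteration for Example 4.2.1, on coefficient arrays.
y = np.zeros(1)
v1 = np.array([1.0]); v2 = np.array([1.0]); v3 = np.array([0.0])
v4 = np.array([3.0]); v5 = np.array([1.0 / 3.0])
for _ in range(8):
    y_new = trunc(P.polyadd(mul(v1, v3), mul(v4, integ(mul(v5, y)))))
    v1_n = trunc(P.polyadd([1.0], integ(v1)))
    v2_n = trunc(P.polysub([1.0], integ(v3)))
    v3_n = integ(v2)
    v4_n = trunc(P.polysub([3.0], integ(v3)))
    v5_n = trunc(P.polyadd([1.0 / 3.0], integ(mul(v3, mul(v5, v5)))))
    y, v1, v2, v3, v4, v5 = y_new, v1_n, v2_n, v3_n, v4_n, v5_n

print(np.round(y[:8], 5))   # compare with the Maclaurin coefficients above
```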
Example 4.2.2. In [?] Biazar introduced the nonlinear Volterra integral equation of the second kind

y(t) = (1/2) sin 2t + ∫_0^t (3/2) y²(s) cos(s − t) ds.    (4.9)

To fit this into the framework of (4.5) we begin by applying the cosine difference identity cos(t − s) = cos s cos t + sin s sin t, obtaining

y(t) = (1/2) sin 2t + (3/2) ( cos t ∫_0^t y²(s) cos s ds + sin t ∫_0^t y²(s) sin s ds ).

Introducing the auxiliary variables v = cos s and w = sin s, which solve the system

v′ = −w,    w′ = v,

upon integration we obtain the equivalent system of integral equations

v(t) = v(0) − ∫_0^t w(s) ds
w(t) = w(0) + ∫_0^t v(s) ds.
The initial values of the auxiliary variables are determined by their definition. The initial value y(0) of the
solution of the integral equation (4.9) is found simply by evaluating that equation at t = 0 to obtain y(0) = 0.
Thus the iteration scheme is

y[k+1](t) = w[k](t) v[k](t) + (3/2) [ v[k](t) ∫_0^t v[k](s) (y[k](s))² ds + w[k](t) ∫_0^t w[k](s) (y[k](s))² ds ]
w[k+1](t) = 0 + ∫_0^t v[k](s) ds
v[k+1](t) = 1 − ∫_0^t w[k](s) ds.

We initialize with

y[0](t) ≡ y(0) = 0
w[0](t) ≡ sin 0 = 0
v[0](t) ≡ cos 0 = 1.

The exact solution of (4.9) is y(t) = sin t, whose Maclaurin series, with its coefficients rounded to five decimal places, begins

y(t) = 1.00000t − 0.16667t³ + 0.00833t⁵ − 0.00020t⁷ + O(t⁹).

The Maclaurin series of the eighth Picard iterate, y[8](t), with its coefficients rounded to five decimal places, begins

y[8](t) = 1.00000t − 0.16667t³ + 0.00833t⁵ + 0.00000t⁷ + O(t⁹).
A plot of the error in the approximation of the exact solution by y[8] (t) is given in Figure 4.2.
Example 4.2.3. As a final and somewhat more elaborate example consider the linear Volterra integral equation of the second kind given by

y(t) = tan t − (1/4) sin 2t − (1/2) t + ∫_0^t 1/(1 + y²(s)) ds.    (4.10)

This is a corrected version of an integral equation given by Vahidian and his colleagues in [?]. As explained in the first paragraph of this chapter, because the integrand is independent of t, (4.10) must be equivalent to an initial value problem. In fact the equivalent problem is

y′(t) = sec² t − (1/2) cos 2t − 1/2 + 1/(1 + y²(t)),    y(0) = 0,

which was obtained by differentiating (4.10) to get the differential equation and by evaluating (4.10) at t = 0 to get the initial value. Of course by means of the identity cos² t = (1/2)(1 + cos 2t) the differential equation can be expressed more compactly as

y′(t) = sec² t − cos² t + 1/(1 + y²(t)),

which will be important later.

To approximate the unique solution of (4.10) we introduce the auxiliary variables

v_1(t) = sin t,  v_2(t) = cos t,  v_3(t) = 1/v_2,  v_4(t) = 1 + y²,  v_5(t) = 1/v_4.    (4.11)
Note that in contrast with the previous examples the unknown function y(t) figures in the definition of some of these variables, but in a polynomial way. Thus when we compute their derivatives y also appears; thanks to (4.11) it does so in a polynomial way, since by that identity y′ = v_3² − v_2² + v_5, and we have additionally

v_1′ = v_2
v_2′ = −v_1
v_3′ = v_1 v_3²
v_4′ = 2y (v_3² − v_2² + v_5)
v_5′ = −2y v_5² (v_3² − v_2² + v_5).

This system of ordinary differential equations, together with the equation satisfied by y′ and the known initial values of all the variables involved, is equivalent to the system of integral equations

y(t) = v_1 v_3 − (1/2) v_1 v_2 − (1/2) t + ∫_0^t v_5(s) ds
v_1(t) = ∫_0^t v_2(s) ds
v_2(t) = 1 − ∫_0^t v_1(s) ds
v_3(t) = 1 + ∫_0^t v_1(s) v_3²(s) ds
v_4(t) = 1 + 2 ∫_0^t y(s) ( v_3²(s) − v_2²(s) + v_5(s) ) ds
v_5(t) = 1 − 2 ∫_0^t y(s) v_5²(s) ( v_3²(s) − v_2²(s) + v_5(s) ) ds.
Setting up the obvious iteration scheme based on these integral equations, and initializing with the constant functions y(t) ≡ y(0) and v_j(t) ≡ v_j(0) for j = 1, 2, 3, 4, 5, the Picard iterate y[28](t) with coefficients rounded to five decimal places is

y[28](t) = 1.00000t + 0.33333t³ + 0.13333t⁵ + 0.05397t⁷ + 0.02187t⁹ + 0.00886t¹¹ + O(t¹³).

The exact solution is y(t) = tan t, whose Maclaurin series, with coefficients rounded to five decimal places, is

y(t) = 1.00000t + 0.33333t³ + 0.13333t⁵ + 0.05397t⁷ + 0.02187t⁹ + 0.00886t¹¹ + O(t¹³).
A plot of the error in the approximation of the exact solution by y[28] (t) is given in Figure 4.3.
Figure 4.1: The error plot for the Volterra integral equation of Example 4.2.1, |y(t) − y[8](t)| with n = 7 iterations, where y(t) is the exact solution and y[8](t) is the approximate solution.

Figure 4.2: Error plot for the nonlinear Volterra integral equation of Example 4.2.2, |y(t) − y[8](t)| with n = 8 iterations, where y(t) is the exact solution and y[8](t) is the approximate solution.

Figure 4.3: The error plot for the Volterra integral equation of Example 4.2.3, |y(t) − y[28](t)| with n = 27 iterations, where y(t) is the exact solution and y[28](t) is the approximate solution.
CHAPTER 5: A MORE GENERAL ALGORITHM FOR APPROXIMATING SOLUTIONS OF
BOUNDARY VALUE PROBLEMS
In Chapter 2 we presented an efficient algorithm for approximating solutions of two-point boundary
value problems of the form
y'' = f(t, y, y'),    y(a) = α,    y(b) = β.    (5.1)
In this chapter we develop a modification of the algorithm that can extend its scope. The idea is to partition
the interval [a, b] into n subintervals and simultaneously and recursively approximate the solutions to the n
boundary value problems that are induced on the subintervals by (5.1) and its solution. We begin in Section
5.1 by illustrating the ideas involved in the simplest case, n = 2, where the key ideas in the modified method
are not obscured by the details and are thus most easily understood. In this first section we will also introduce
notation and several lemmas that will be useful later. In Section 5.2 we present the modified algorithm in the
general case and state and prove a theorem that guarantees its success. Finally, in Section 5.3 we illustrate
the method with several examples.
Throughout this chapter we will assume that the function f is continuous and Lipschitz in the last two
variables with Lipschitz constant L.
5.1 The Simplest Case
We begin by recalling from Chapter 1 the following fact, which for ease of future reference we separate
out as a lemma, but without a detailed statement of hypotheses (see (1.1) and (1.2) and Lemma 1.3.2).
Lemma 5.1.1. The one-dimensional two-point boundary value problem

y'' = f(t, y, y'),    y(c) = λ,    y(d) = δ

is equivalent to the vector integral equation

(y(t), u(t))ᵀ = (λ, γ)ᵀ + ∫_c^t ( u(s), f(s, y(s), u(s)) )ᵀ ds,    (5.2a)

γ = (1/(d − c)) [ δ − λ − ∫_c^d (d − s) f(s, y(s), u(s)) ds ].    (5.2b)

It will be convenient to solve (5.2b) for λ, yielding

λ = δ − (d − c) γ − ∫_c^d (d − s) f(s, y(s), u(s)) ds.    (5.3)

We also note that if γ_1 = y′(c) and γ_2 = y′(d) then by the Fundamental Theorem of Calculus

γ_2 = γ_1 + ∫_c^d f(s, y(s), u(s)) ds.    (5.4)
For the following discussion we speak as if it were certain that problem (5.1) has a unique solution,
which we denote y(t) = η(t).
Partition the interval [a, b] into two subintervals of equal length h = (b − a)/2 and let the partition points be denoted a = t_0 < t_1 < t_2 = b. Then (5.1) and its solution η(t) induce two boundary value problems,

y'' = f(t, y, y'),    y(t_0) = α,    y(t_1) = η(t_1)    (5.5)

on [t_0, t_1] and

y'' = f(t, y, y'),    y(t_1) = η(t_1),    y(t_2) = β    (5.6)

on [t_1, t_2]. Letting β_0 = α, β_1 = η(t_1), γ_1 = η′(t_0), and γ_2 = η′(t_1), the solutions of these boundary value problems are the solutions of the initial value problems

y'' = f(t, y(t), y′(t)),    y(t_0) = β_0,    y′(t_0) = γ_1    (5.7)

and

y'' = f(t, y(t), y′(t)),    y(t_1) = β_1,    y′(t_1) = γ_2,    (5.8)

which we express as the systems of first order equations

y′ = u,  u′ = f(t, y, u),    y(t_0) = β_0,  u(t_0) = γ_1,    or    y′ = f(t, y),  y(t_0) = (β_0, γ_1)ᵀ    (5.7′)

and

y′ = u,  u′ = f(t, y, u),    y(t_1) = β_1,  u(t_1) = γ_2,    or    y′ = f(t, y),  y(t_1) = (β_1, γ_2)ᵀ.    (5.8′)

Let

y_1(t) = (y_1(t), u_1(t))ᵀ    and    y_2(t) = (y_2(t), u_2(t))ᵀ

denote the solutions of the initial value problems (5.7′) and (5.8′), respectively. Then η(t) is the concatenation of y_1(t) and y_2(t); see Figure 5.1.
The idea now is to apply the algorithm of Chapter 2 to each of the boundary value problems (5.5) and
(5.6) simultaneously, coupling the work on the separate intervals at each step. That is, we wish to recursively
approximate the vector functions y1 (t) and y2 (t) and the constants γ1 , γ2 , and β1 better and better, updating
each one on each pass through the recursion.
Choose any initialization of the unknown functions and constants,

y_1[0](t),  u_1[0](t),  y_2[0](t),  u_2[0](t),  γ_1[0],  γ_2[0],  β_1[0].

We can update the approximations of y_1[0](t), u_1[0](t), y_2[0](t), and u_2[0](t) using the right hand side of (5.2a) applied to each subinterval. Writing

f_j(s) := f(s, y_j(s), u_j(s)),    j = 1, 2,

we update the approximation of γ_2 in terms of the approximation of γ_1 using (5.4) applied on [t_0, t_1], namely,

γ_2 = γ_1 + ∫_{t_0}^{t_1} f_1(s) ds.    (5.9)

We update the approximation of β_1 in terms of β and the approximation of γ_2 using (5.3), namely

β_1 = β − h γ_2 − ∫_{t_1}^{t_2} (t_2 − s) f_2(s) ds.    (5.10)

Finally, we update the approximation of γ_1 in terms of α and the approximation of β_1 using (5.2b), namely

γ_1 = (1/h) [ β_1 − α − ∫_{t_0}^{t_1} (t_1 − s) f_1(s) ds ].    (5.11)

Introducing the simplifying notation

f_j[r](s) := f(s, y_j[r](s), u_j[r](s)),    r ∈ Z⁺ ∪ {0},  j = 1, 2,

and the natural shorthand y_j[r](t), the recurrence formulas are

y_1[k+1](t) := ( y_1[k+1](t), u_1[k+1](t) )ᵀ = ( α, γ_1[k] )ᵀ + ∫_{t_0}^t ( u_1[k](s), f_1[k](s) )ᵀ ds    (5.12a)

y_2[k+1](t) := ( y_2[k+1](t), u_2[k+1](t) )ᵀ = ( β_1[k], γ_2[k] )ᵀ + ∫_{t_1}^t ( u_2[k](s), f_2[k](s) )ᵀ ds    (5.12b)

γ_2[k+1] = γ_1[k+1] + ∫_{t_0}^{t_1} f_1[k+1](s) ds    (5.12c)

β_1[k+1] = β − h γ_2[k+1] − ∫_{t_1}^{t_2} (t_2 − s) f_2[k+1](s) ds    (5.12d)

γ_1[k+1] = (1/h) [ β_1[k+1] − α − ∫_{t_0}^{t_1} (t_1 − s) f_1[k+1](s) ds ].    (5.12e)
Clearly if y_j[k](t), j = 1, 2, converge, then γ_2[k], β_1[k], and γ_1[k] must converge. If the convergence is uniform then we may pass the limits into the integrals to obtain the fact that the limit functions y_1(t) and y_2(t) solve the ordinary differential equations in (5.7) and (5.8) and that

Γ_j := lim_{k→∞} γ_j[k],  j = 1, 2,    and    B_1 := lim_{k→∞} β_1[k]

satisfy

Γ_2 = Γ_1 + ∫_{t_0}^{t_1} f_1(s) ds    (5.13)

B_1 = β − h Γ_2 − ∫_{t_1}^{t_2} (t_2 − s) f_2(s) ds    (5.14)

Γ_1 = (1/h) [ B_1 − α − ∫_{t_0}^{t_1} (t_1 − s) f_1(s) ds ].    (5.15)
It is intuitively clear that the concatenation y(t) of the limits y_1(t) and y_2(t) solves the original boundary value problem (5.1), but here are the details. Evaluating the limit of (5.12a) at t = t_0 gives

y_1(t_0) = α    (5.16)

and

y_1′(t_0) = Γ_1.    (5.17)

Inserting (5.17) into (5.15) gives

y_1′(t_0) = (1/h) [ B_1 − α − ∫_{t_0}^{t_1} (t_1 − s) f(s, y_1(s), u_1(s)) ds ],

so that by (5.2b)

B_1 = y_1(t_1).    (5.18)

Evaluating the limit of (5.12b) at t = t_1 gives

B_1 = y_2(t_1)    (5.19)

and

Γ_2 = y_2′(t_1).    (5.20)

Equations (5.18) and (5.19) give

y_1(t_1) = y_2(t_1).    (5.21)

Inserting (5.19) and (5.20) into (5.14) gives

y_2(t_1) = β − h y_2′(t_1) − ∫_{t_1}^{t_2} (t_2 − s) f(s, y_2(s), u_2(s)) ds,

so that by (5.3)

y_2(t_2) = β.    (5.22)

Inserting (5.20) and (5.17) into (5.13) gives

y_2′(t_1) = y_1′(t_0) + ∫_{t_0}^{t_1} f(s, y_1(s), u_1(s)) ds,

which by the Fundamental Theorem gives

y_2′(t_1) = y_1′(t_1).    (5.23)

Equations (5.16), (5.21), (5.22), and (5.23) imply that the concatenation y(t) of y_1(t) and y_2(t) is a C¹ function on [a, b] that has a second derivative on [a, t_1) ∪ (t_1, b] and satisfies y''(t) = f(t, y(t), y′(t)) on [a, t_1) ∪ (t_1, b]. Since the right hand side is continuous on [a, b], by Lemma 5.1.4, whose statement and proof are deferred to the end of this section, y''(t) exists on [a, b], hence y(t) solves the original boundary value problem (5.1).
In order to prove convergence of the sequence of iterates y_j[k](t), j = 1, 2, we need estimates on |γ_j[k+1] − γ_j[k]|, j = 1, 2, and on |β_1[k+1] − β_1[k]|. It is clear that we can estimate |γ_2[k+1] − γ_2[k]| in terms of |γ_1[k+1] − γ_1[k]|, and |γ_1[k+1] − γ_1[k]| in terms of |β_1[k+1] − β_1[k]|, but the estimate on |β_1[k+1] − β_1[k]| involves |γ_2[k+1] − γ_2[k]|, hence circularity. An idea for a way out is to insert the expression for γ_1 in the last recurrence equation into that for γ_2, insert the resulting expression for γ_2 into the expression for β_1, and solve the resulting equation for β_1[k+1]. However, the step-indices on β_1 do not match, and the procedure yields an equation with β_1[k+1] on the left and β_1[k] on the right. A solution to this dilemma is to follow the same procedure, not with the recurrence equations, but with the equations (5.9) through (5.11) on which they are based. That is, insert the expression for γ_1 in (5.11) into (5.9), insert the resulting expression for γ_2 into (5.10), and solve the resulting equation for β_1. The result is

β_1 = (1/2) [ β + α + ∫_{t_0}^{t_1} (t_1 − s) f_1(s) ds − h ∫_{t_0}^{t_1} f_1(s) ds − ∫_{t_1}^{t_2} (t_2 − s) f_2(s) ds ].    (5.24)
We base the update of the approximation of β_1 on this equation. Thus the recursion for which we will prove convergence is (5.12), but with (5.12d) replaced by

β_1[k+1] = (1/2) [ β + α + ∫_{t_0}^{t_1} (t_1 − s) f_1[k+1](s) ds − h ∫_{t_0}^{t_1} f_1[k+1](s) ds − ∫_{t_1}^{t_2} (t_2 − s) f_2[k+1](s) ds ].    (5.12d′)
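To make the two-subinterval recursion concrete, here is a small numerical sketch (our own illustration, with names and discretization choices that are not from the text): the iterates y_1, u_1, y_2, u_2 are stored on grids over [t_0, t_1] and [t_1, t_2], the integrals are evaluated with the trapezoid rule, and each pass applies (5.12a), (5.12b), (5.12d′), (5.12e), and (5.12c) in that order. It is run on the test problem y'' = −y/10, y(0) = 1, y(1) = 0, a choice we made so that the sufficient condition (1 + (9/2)L)h < 1 derived below clearly holds.

```python
import numpy as np

def cumtrapz(g, x):
    """Cumulative trapezoid integral of samples g over the grid x, starting at 0."""
    out = np.zeros_like(g)
    out[1:] = np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))
    return out

def trapz(g, x):
    return cumtrapz(g, x)[-1]

f = lambda s, y, u: -0.1 * y          # y'' = f(t, y, y') = -y/10
alpha, beta = 1.0, 0.0
t0, t1, t2 = 0.0, 0.5, 1.0
h = t1 - t0
m = 200
s1 = np.linspace(t0, t1, m)
s2 = np.linspace(t1, t2, m)

y1 = np.full(m, alpha); u1 = np.zeros(m)
y2 = np.full(m, alpha); u2 = np.zeros(m)
beta1, gamma1, gamma2 = alpha, 0.0, 0.0

for _ in range(25):
    f1, f2 = f(s1, y1, u1), f(s2, y2, u2)
    y1, u1 = alpha + cumtrapz(u1, s1), gamma1 + cumtrapz(f1, s1)   # (5.12a)
    y2, u2 = beta1 + cumtrapz(u2, s2), gamma2 + cumtrapz(f2, s2)   # (5.12b)
    f1, f2 = f(s1, y1, u1), f(s2, y2, u2)
    J1 = trapz((t1 - s1) * f1, s1)
    J2 = trapz((t2 - s2) * f2, s2)
    I1 = trapz(f1, s1)
    beta1 = 0.5 * (beta + alpha + J1 - h * I1 - J2)                # (5.12d')
    gamma1 = (beta1 - alpha - J1) / h                              # (5.12e)
    gamma2 = gamma1 + I1                                           # (5.12c)

# Compare with the exact solution of y'' = -y/10, y(0) = 1, y(1) = 0.
w = np.sqrt(0.1)
c = -np.cos(w) / np.sin(w)
exact = lambda t: np.cos(w * t) + c * np.sin(w * t)
print(max(np.max(np.abs(y1 - exact(s1))), np.max(np.abs(y2 - exact(s2)))))
```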
Recall that we need concern ourselves only with the convergence of the vector-valued functions y_1(t) and y_2(t) on [t_0, t_1] and [t_1, t_2], but that the convergence must be uniform in order to prove that we have found a solution of the original boundary value problem. Thus we are concerned with the sequence (y_1(t), y_2(t)) in C([t_0, t_1], R²) × C([t_1, t_2], R²). We place the sup norm, the norm of uniform convergence, on each of the function spaces, and do so with respect to the sum norm on R², so that the norm can be brought inside the integral. On the product of the function spaces we place the maximum norm. Thus for (u(t), v(t)) ∈ C([t_0, t_1], R²) × C([t_1, t_2], R²),

||(u(t), v(t))||_max = max{ ||u(t)||_sup, ||v(t)||_sup }

and

||u(t)||_sup = || (u_1(t), u_2(t))ᵀ ||_sup = sup{ |u_1(t)| + |u_2(t)| : t ∈ [t_0, t_1] }.

These choices are illustrated by the routine computations in the proof of Lemma 5.1.3, which gives two estimates that will be used repeatedly and which appears at the end of this section, where it has been collected with other such results.
As a preliminary to proving the convergence of y_1[k](t) and y_2[k](t) we examine the sequences of constants β_1[k], γ_1[k], and γ_2[k]. First, using (5.12d′) and recalling that f is Lipschitz in its last two entries with Lipschitz constant L, we estimate, appealing to Lemma 5.1.3 for the last inequality,

|β_1[k+1] − β_1[k]|
   ≤ (1/2) [ ∫_{t_0}^{t_1} (t_1 − s) |f_1[k+1](s) − f_1[k](s)| ds + h ∫_{t_0}^{t_1} |f_1[k+1](s) − f_1[k](s)| ds + ∫_{t_1}^{t_2} (t_2 − s) |f_2[k+1](s) − f_2[k](s)| ds ]
   ≤ (1/2) [ L ∫_{t_0}^{t_1} (t_1 − s) |y_1[k+1](s) − y_1[k](s)|_sum ds + h L ∫_{t_0}^{t_1} |y_1[k+1](s) − y_1[k](s)|_sum ds + L ∫_{t_1}^{t_2} (t_2 − s) |y_2[k+1](s) − y_2[k](s)|_sum ds ]
   ≤ (1/2) L [ (1/2)h² + h² + (1/2)h² ] ||y[k+1] − y[k]||_max
   = L h² ||y[k+1] − y[k]||_max.
Using (5.12e), this estimate, and Lemma 5.1.3 we obtain

|γ_1[k+1] − γ_1[k]| ≤ (1/h) |β_1[k+1] − β_1[k]| + (1/h) ∫_{t_0}^{t_1} (t_1 − s) |f_1[k+1](s) − f_1[k](s)| ds
   ≤ L h ||y[k+1] − y[k]||_max + (1/2) h L ||y[k+1] − y[k]||_max
   = (3/2) L h ||y[k+1] − y[k]||_max,

which in turn with (5.12c) and Lemma 5.1.3 yields

|γ_2[k+1] − γ_2[k]| ≤ |γ_1[k+1] − γ_1[k]| + ∫_{t_0}^{t_1} |f_1[k+1](s) − f_1[k](s)| ds
   ≤ (3/2) L h ||y[k+1] − y[k]||_max + L h ||y[k+1] − y[k]||_max
   = (5/2) L h ||y[k+1] − y[k]||_max.
Turning now to the sequences y_1[k](t) and y_2[k](t), using these estimates and Lemma 2.1.2, for any t ∈ [t_0, t_1],

|y_1[k+1](t) − y_1[k](t)|_sum ≤ | (α, γ_1[k])ᵀ − (α, γ_1[k−1])ᵀ |_sum + ∫_{t_0}^{t} | (u_1[k](s), f_1[k](s))ᵀ − (u_1[k−1](s), f_1[k−1](s))ᵀ |_sum ds
   ≤ |γ_1[k] − γ_1[k−1]| + (1 + L) h ||y_1[k] − y_1[k−1]||_sup
   ≤ (3/2) L h ||y[k] − y[k−1]||_max + (1 + L) h ||y[k] − y[k−1]||_max
   = (1 + (5/2) L) h ||y[k] − y[k−1]||_max,

hence

||y_1[k+1] − y_1[k]||_sup ≤ (1 + (5/2) L) h ||y[k] − y[k−1]||_max
and, assuming that h² ≤ h, for any t ∈ [t_1, t_2],

|y_2[k+1](t) − y_2[k](t)|_sum ≤ | (β_1[k], γ_2[k])ᵀ − (β_1[k−1], γ_2[k−1])ᵀ |_sum + ∫_{t_1}^{t} | (u_2[k](s), f_2[k](s))ᵀ − (u_2[k−1](s), f_2[k−1](s))ᵀ |_sum ds
   ≤ |β_1[k] − β_1[k−1]| + |γ_2[k] − γ_2[k−1]| + (1 + L) h ||y_2[k] − y_2[k−1]||_sup
   ≤ [ L h² + (5/2) L h + (1 + L) h ] ||y[k] − y[k−1]||_max
   ≤ (1 + (9/2) L) h ||y[k] − y[k−1]||_max,

hence

||y_2[k+1] − y_2[k]||_sup ≤ (1 + (9/2) L) h ||y[k] − y[k−1]||_max.
Thus if (1 + (9/2)L)h < 1 then by Lemma 5.1.2 below the sequence

y[k](t) = (y_1[k](t), y_2[k](t)) = ( (y_1[k](t), u_1[k](t))ᵀ, (y_2[k](t), u_2[k](t))ᵀ )

is a Cauchy sequence in the complete space C([t_0, t_1], R²) × C([t_1, t_2], R²), hence converges, hence the component functions converge uniformly on [t_0, t_1] and [t_1, t_2]. This condition was derived on the assumption that h ≤ 1. If it fails we simply replace h by h² in the last two estimates to obtain the similar sufficient condition (1 + (9/2)L)h² < 1 for convergence. We could also examine more elaborate conditions involving both h and h², or even take into account that the Lipschitz constants for f restricted to [t_0, t_1] and [t_1, t_2] could be smaller than L. In any event, even with the least complicated condition it is straightforward to verify that in some situations, depending on the sizes of L and b − a, the condition obtained by this partition method is an improvement over the estimate given in Theorems 2.1.3 and 2.1.4 for existence of a solution of (5.1) and the convergence of the sequence of iterates to it.
This section has presented the key ideas in the extension of the method of Chapter 2 based on a partition
of the interval [a, b] into subintervals. We now state and prove the three lemmas that were needed in this
simplest case, and which will also be needed in the general case discussed in the next section.
Lemma 5.1.2. Let x[k] be a sequence in a normed vector space (V, |·|). If there exist a number c < 1 and an index N ∈ N such that

|x[k+1] − x[k]| ≤ c |x[k] − x[k−1]|    for all k ≥ N,

then the sequence x[k] is a Cauchy sequence.

Proof. Applying the condition |x[k+1] − x[k]| ≤ c |x[k] − x[k−1]| a total of k times we have

|x[N+k] − x[N+k−1]| ≤ cᵏ |x[N] − x[N−1]|    (k ≥ 1),

so that for any m > n, by applying the triangle inequality we get

|x[N+m] − x[N+n]| ≤ |x[N+m] − x[N+m−1]| + · · · + |x[N+n+1] − x[N+n]|
   ≤ (cᵐ + cᵐ⁻¹ + · · · + cⁿ⁺¹) |x[N] − x[N−1]|
   ≤ (cⁿ⁺¹ + cⁿ⁺² + · · ·) |x[N] − x[N−1]|
   = (cⁿ⁺¹ / (1 − c)) |x[N] − x[N−1]|
   ≤ (cⁿ / (1 − c)) |x[N] − x[N−1]|,
which can be made arbitrarily small by choosing n sufficiently large. □

Lemma 5.1.3. Suppose the interval [a, b] has been partitioned into n subintervals of equal length h = (b − a)/n by partition points a = t_0 < t_1 < · · · < t_{n−1} < t_n = b. With the notation

y_j(t) = (y_j(t), u_j(t))ᵀ    and    f_j[r](s) = f(s, y_j[r](s), u_j[r](s)),    r ∈ Z⁺ ∪ {0},  1 ≤ j ≤ n,

the following estimates hold:

∫_{t_{j−1}}^{t_j} |f_j[k+1](s) − f_j[k](s)| ds ≤ L h ||y[k+1] − y[k]||_max    (5.25)

and

∫_{t_{j−1}}^{t_j} (t_j − s) |f_j[k+1](s) − f_j[k](s)| ds ≤ (1/2) L h² ||y[k+1] − y[k]||_max.    (5.26)

Proof. For the first estimate,

∫_{t_{j−1}}^{t_j} |f_j[k+1](s) − f_j[k](s)| ds ≤ ∫_{t_{j−1}}^{t_j} L ||y_j[k+1](s) − y_j[k](s)||_sum ds ≤ L ||y_j[k+1] − y_j[k]||_sup ∫_{t_{j−1}}^{t_j} ds ≤ L h ||y[k+1] − y[k]||_max,

and for the second,

∫_{t_{j−1}}^{t_j} (t_j − s) |f_j[k+1](s) − f_j[k](s)| ds ≤ ∫_{t_{j−1}}^{t_j} (t_j − s) L ||y_j[k+1](s) − y_j[k](s)||_sum ds ≤ L ||y_j[k+1] − y_j[k]||_sup ∫_{t_{j−1}}^{t_j} (t_j − s) ds ≤ (1/2) L h² ||y[k+1] − y[k]||_max. □
Lemma 5.1.4. Suppose η : (−ε, ε) → R is continuous and that η′ exists on (−ε, 0) ∪ (0, ε). Suppose g : (−ε, ε) → R is continuous and that g = η′ on (−ε, 0) ∪ (0, ε). Then η′ exists and is continuous on (−ε, ε).

Proof. Since η is continuous at 0 we have the existence and value of the limit

lim_{h→0} η(h) = η(0).

This fact and the hypothesis that η′ exists on (−ε, 0) ∪ (0, ε) imply that l'Hôpital's Rule applies, and with the remaining hypotheses gives the existence and value of the limit

η′(0) = lim_{h→0} ( η(h) − η(0) ) / ( h − 0 ) = lim_{h→0} η′(h) = lim_{h→0} g(h) = g(0).

But then η′ exists and is equal to the continuous function g on (−ε, ε), giving the result. □

5.2 The General Case
If η(t) is a solution of the original boundary value problem (5.1) and the interval [a, b] is subdivided into n subintervals of equal length h by means of a partition

a = t_0 < t_1 < t_2 < · · · < t_n = b,

then setting β_j = η(t_j), j = 1, . . . , n − 1, we see that n boundary value problems are induced:

y'' = f(t, y, y'),    y(t_0) = α,          y(t_1) = β_1
y'' = f(t, y, y'),    y(t_1) = β_1,        y(t_2) = β_2
   ...
y'' = f(t, y, y'),    y(t_{n−1}) = β_{n−1},    y(t_n) = β.

Setting γ_j = η′(t_{j−1}), j = 1, . . . , n, or computing them by means of an appropriate implementation of (5.2b), their solutions are solutions of the respective initial value problems

y'' = f(t, y, y'),    y(t_0) = α,          y′(t_0) = γ_1
y'' = f(t, y, y'),    y(t_1) = β_1,        y′(t_1) = γ_2
   ...
y'' = f(t, y, y'),    y(t_{n−1}) = β_{n−1},    y′(t_{n−1}) = γ_n.

As in the previous section we denote the solutions of these problems by y_j(t), with derivatives u_j(t) := y_j′(t), j = 1, . . . , n. See Figure 5.2.
To make the presentation cleaner and easier to read we continue and expand the shorthand notation of the previous section (for the relevant choices of j):

f_j(s) = f(s, y_j(s), u_j(s)),        f_j[k](s) = f(s, y_j[k](s), u_j[k](s)),
y_j(s) = (y_j(s), u_j(s))ᵀ,           y_j[k](s) = (y_j[k](s), u_j[k](s))ᵀ,
                                                                              (5.27)
I_j = ∫_{t_{j−1}}^{t_j} f_j(s) ds,              I_j[k] = ∫_{t_{j−1}}^{t_j} f_j[k](s) ds,
J_j = ∫_{t_{j−1}}^{t_j} (t_j − s) f_j(s) ds,    J_j[k] = ∫_{t_{j−1}}^{t_j} (t_j − s) f_j[k](s) ds.
The idea for generating a sequence of successive approximations of η(t) is to update the estimates of the functions y_j(t) using (5.2a) in the form

y_j(t) = (β_{j−1}, γ_j)ᵀ + ∫_{t_{j−1}}^t ( u_j(s), f_j(s) )ᵀ ds    (5.28)

and then update γ_j and β_j using (5.4) and (5.3) in the forms

γ_j = γ_{j−1} + I_{j−1}    and    β_{j−1} = β_j − h γ_j − J_j    (5.29)

(starting with j = 1 and working our way up to j = n for the γ_j, and in the reverse order for the β_j, with the convention that β_n = β), except that on the first step we update γ_1 using instead (5.2b) in the form

γ_1 = (1/h) [ β_1 − α − J_1 ],    (5.30)

and there is no β_0. But as in the case n = 2 we will not be able to make the estimates that we need to show convergence if we update β_1 on this basis. We will address this problem in a manner directly analogous to what was done in the simpler case. Before we do, however, note that the updates on the β_j come from "the right," i.e., values of β_r with r > j, hence ultimately tying into β at each pass through the recursion, while the updates on the γ_j come from "the left," i.e., values of γ_r with r < j, hence ultimately tying into α at each pass through the recursion.
To obtain a useful formula on which to base the successive approximations of β_1, we begin by using the second formula in (5.29) n − 1 times:

β_1 = β_2 − (h γ_2 + J_2)
    = β_3 − (h γ_3 + J_3) − (h γ_2 + J_2)
    = β_4 − (h γ_4 + J_4) − (h γ_3 + J_3) − (h γ_2 + J_2)
    ⋮
    = β − (h γ_n + J_n) − · · · − (h γ_2 + J_2)
    = β − h (γ_2 + · · · + γ_n) − (J_2 + · · · + J_n).

But by repeated application of the first equation in (5.29), and use of (5.30) on the last step,

γ_2 + · · · + γ_{n−2} + γ_{n−1} + γ_n
    = γ_2 + · · · + γ_{n−2} + 2γ_{n−1} + I_{n−1}
    = γ_2 + · · · + 3γ_{n−2} + 2I_{n−2} + I_{n−1}
    ⋮
    = (n − 1)γ_2 + (n − 2)I_2 + · · · + 3I_{n−3} + 2I_{n−2} + I_{n−1}
    = (n − 1)γ_1 + (n − 1)I_1 + (n − 2)I_2 + · · · + 3I_{n−3} + 2I_{n−2} + I_{n−1}
    = ((n − 1)/h) [β_1 − α − J_1] + (n − 1)I_1 + (n − 2)I_2 + · · · + 3I_{n−3} + 2I_{n−2} + I_{n−1}.

Inserting this expression into the previous display and solving the resulting equation for β_1 yields the formula

β_1 = (1/n) [ β + (n − 1)α + (n − 1)J_1 − ∑_{r=2}^{n} J_r − h ∑_{r=1}^{n−1} (n − r) I_r ].    (5.31)
Once an initialization has been chosen, an iteration procedure based on (5.28), (5.29), (5.30), and (5.31) is, with the convention β_0 = α and β_n = β, the shorthand notation introduced above, and evaluation in the order listed,

y_j[k+1](t) = ( β_{j−1}[k], γ_j[k] )ᵀ + ∫_{t_{j−1}}^t ( u_j[k](s), f_j[k](s) )ᵀ ds,    j = 1, · · · , n    (5.32a)

β_1[k+1] = (1/n) [ β + (n − 1)α + (n − 1)J_1[k+1] − ∑_{r=2}^{n} J_r[k+1] − h ∑_{r=1}^{n−1} (n − r) I_r[k+1] ]    (5.32b)

γ_1[k+1] = (1/h) [ β_1[k+1] − α − J_1[k+1] ]    (5.32c)

γ_j[k+1] = γ_{j−1}[k+1] + I_{j−1}[k+1],    j = 2, · · · , n    (5.32d)

β_{j−1}[k+1] = β_j[k+1] − h γ_j[k+1] − J_j[k+1],    j = n, n − 1, . . . , 4, 3.    (5.32e)
Theorem 5.2.1. Suppose the function f(t, y, u) from [a, b] × R² into R is continuous and Lipschitz in y = (y, u) with Lipschitz constant L. If there exists an integer n > 1 such that for the subdivision of [a, b] into n subintervals of equal length h by the partition

a = t_0 < t_1 < · · · < t_{n−1} < t_n = b

the inequality

(1/(2n)) [ (n³ + n² + n + 2)L + 2 ] (b − a) < 1

holds if h = (b − a)/n ≤ 1, or the inequality

(1/(2n²)) [ (n³ + n² + n + 2)L + 2 ] (b − a)² < 1

holds if h = (b − a)/n > 1, then there exists a solution of the two-point boundary value problem (5.1). Moreover, in the language of the notation of Figure B and display (5.27), for any initial choice of the functions y_j(t) = (y_j(t), u_j(t)), 1 ≤ j ≤ n, the constants γ_j, 1 ≤ j ≤ n, and the constants β_j, 1 ≤ j ≤ n − 1, the sequence of successive approximations defined by (5.32) converges to such a solution.
Proof. As was done in the simple case n = 2 we will show that the sequence

y(t) = (y_1(t), . . . , y_n(t)) ∈ C([t_0, t_1], R²) × C([t_1, t_2], R²) × · · · × C([t_{n−1}, t_n], R²)

is a Cauchy sequence by means of Lemma 5.1.2, where we place the supremum norm on each function space, with respect to absolute value on R and the sum norm on R², and the maximum norm on their product. Precisely the same reasoning as was given in the paragraph that follows display (5.15) shows that this is sufficient to show that the successive approximations converge to functions on the individual subintervals which, when concatenated, form a solution of (5.1), appealing to Lemma 5.1.4 for existence of the second derivative at the partition points.
By Lemma 5.1.3,

|I_r[k+1] − I_r[k]| = | ∫_{t_{r−1}}^{t_r} f_r[k+1](s) ds − ∫_{t_{r−1}}^{t_r} f_r[k](s) ds |
   ≤ ∫_{t_{r−1}}^{t_r} |f_r[k+1](s) − f_r[k](s)| ds    (5.33)
   ≤ L h ||y[k+1] − y[k]||_max

and

|J_r[k+1] − J_r[k]| = | ∫_{t_{r−1}}^{t_r} (t_r − s) f_r[k+1](s) ds − ∫_{t_{r−1}}^{t_r} (t_r − s) f_r[k](s) ds |
   ≤ ∫_{t_{r−1}}^{t_r} (t_r − s) |f_r[k+1](s) − f_r[k](s)| ds    (5.34)
   ≤ (1/2) L h² ||y[k+1] − y[k]||_max.
Then from (5.32b) we have

|β_1[k+1] − β_1[k]| ≤ (1/n) [ (n − 1) |J_1[k+1] − J_1[k]| + ∑_{r=2}^{n} |J_r[k+1] − J_r[k]| + h ∑_{r=1}^{n−1} (n − r) |I_r[k+1] − I_r[k]| ]
   ≤ (1/n) [ (n − 1)(1/2)h² + ∑_{r=2}^{n} (1/2)h² + h ∑_{r=1}^{n−1} (n − r) h ] L ||y[k+1] − y[k]||_max
   ≤ B̂_1 h² L ||y[k+1] − y[k]||_max,

where B̂_1 = ((n − 1)/2)(1/n + 1 + n).

From (5.32c),

|γ_1[k+1] − γ_1[k]| ≤ (1/h) |β_1[k+1] − β_1[k]| + (1/h) |J_1[k+1] − J_1[k]|
   ≤ [ B̂_1 h L + (1/2) h L ] ||y[k+1] − y[k]||_max
   = Γ̂_1 h L ||y[k+1] − y[k]||_max,

and from repeated application of (5.32d), starting from j = 2 up through j = n,

|γ_j[k+1] − γ_j[k]| ≤ |γ_{j−1}[k+1] − γ_{j−1}[k]| + |I_{j−1}[k+1] − I_{j−1}[k]|
   ≤ [ Γ̂_{j−1} h L + h L ] ||y[k+1] − y[k]||_max
   = Γ̂_j h L ||y[k+1] − y[k]||_max,

with Γ̂_j = B̂_1 + (2j − 1)/2, which is in fact valid for 1 ≤ j ≤ n.

From repeated application of (5.32e), starting from j = n − 1 (with the convention that β_n = β) and working down through j = 2,

|β_j[k+1] − β_j[k]| ≤ |β_{j+1}[k+1] − β_{j+1}[k]| + h |γ_{j+1}[k+1] − γ_{j+1}[k]| + |J_{j+1}[k+1] − J_{j+1}[k]|
   ≤ [ B̂_{j+1} h² L + Γ̂_{j+1} h² L + (1/2) h² L ] ||y[k+1] − y[k]||_max
   = B̂_j h² L ||y[k+1] − y[k]||_max,

where B̂_{n−r} = r B̂_1 + r n − r(r − 1)/2, hence B̂_j = (n − j)B̂_1 + n(n − j) − (n − j)(n − j − 1)/2, for 2 ≤ j ≤ n − 1.
Using these estimates we find that, setting h* = max{h, h²}, for any j ∈ {2, . . . , n} and any t ∈ [t_{j−1}, t_j],

|y_j[k+1](t) − y_j[k](t)|_sum
   ≤ | (β_{j−1}[k], γ_j[k])ᵀ − (β_{j−1}[k−1], γ_j[k−1])ᵀ |_sum + ∫_{t_{j−1}}^{t} | (u_j[k](s), f_j[k](s))ᵀ − (u_j[k−1](s), f_j[k−1](s))ᵀ |_sum ds
   ≤ |β_{j−1}[k] − β_{j−1}[k−1]| + |γ_j[k] − γ_j[k−1]| + (1 + L) ∫_{t_{j−1}}^{t_j} |y_j[k](s) − y_j[k−1](s)|_sum ds
   ≤ [ B̂_{j−1} h² L + Γ̂_j h L ] ||y[k] − y[k−1]||_max + (1 + L) h ||y_j[k] − y_j[k−1]||_sup
   ≤ [ B̂_{j−1} L + Γ̂_j L + (1 + L) ] h* ||y[k] − y[k−1]||_max
   = [ (B̂_{j−1} + Γ̂_j + 1) L + 1 ] h* ||y[k] − y[k−1]||_max
   = c_j h* ||y[k] − y[k−1]||_max,

where, using the expressions above for B̂_1 and Γ̂_1,

c_j = (1/(2n)) [ n⁴ + (3 − j)n³ + n² + (3j − j²)n + (j − 2) ] L + 1    (2 ≤ j ≤ n).

Similarly, for all t ∈ [t_0, t_1], |y_1[k+1](t) − y_1[k](t)|_sum ≤ c_1 h ||y[k] − y[k−1]||_max for

c_1 = ((n³ + 3n − 1)/(2n)) L + 1.

Then for all j ∈ {1, . . . , n},

||y_j[k+1] − y_j[k]||_sup ≤ c_j h* ||y[k] − y[k−1]||_max

and

||y[k+1] − y[k]||_max ≤ max{c_1, . . . , c_n} h* ||y[k] − y[k−1]||_max.    (5.35)

For fixed n and j ≥ 2, c_j is a quadratic function of j with maximum at j = (−n³ + 3n + 1)/(2n), which is negative for n ≥ 2, so c_2 ≥ c_j for j ≥ 3. Direct comparison shows that c_2 ≥ c_1 for all choices of n as well. Thus estimate (5.35) becomes

||y[k+1] − y[k]||_max ≤ c_2 h* ||y[k] − y[k−1]||_max,

and by Lemma 5.1.2 the sequence y[k](t) is a Cauchy sequence provided c_2 h* < 1, which, when h = (b − a)/n ≤ 1, is the condition

(1/(2n)) [ (n³ + n² + n + 2)L + 2 ] (b − a) < 1,
and when h = (b − a)/n > 1 is the condition

(1/(2n²)) [ (n³ + n² + n + 2)L + 2 ] (b − a)² < 1. □
5.3 Examples

In this section we illustrate the method by means of several examples.

Example 5.3.1. Consider

y'' = 2y³,    y(0) = −1/2,    y(1) = −1.    (5.36)

The exact solution is y(t) = 1/(t − 2). We divided the interval [0, 1] into three subintervals, not necessarily of the same length. The reason for choosing different lengths is that the Lipschitz constant increases from subinterval to subinterval as we move from left to right (the Lipschitz constant for the entire interval is attained on the last subinterval), so the subinterval lengths can be taken to decrease from left to right. In general, however, we do not have a simple algorithm for doing this, so to be safe the subinterval lengths should respect the bound 1/(1 + (3/2)L) provided by Theorem 2.1.4. For this problem the Lipschitz constant on the whole interval is L = 6, so to be on the safe side the length should be less than 1/(1 + (3/2)(6)) = 1/10. We therefore introduce the subintervals

[0, 13/16],  [13/16, 15/16],  [15/16, 1].

After 4 iterations the maximum error is E = 0.018; the error plot is shown below.
Example 5.3.2. Consider again the same differential equation, this time with different boundary conditions:

y'' = 2y³,    y(0) = −10/11,    y(1) = −10.    (5.37)

The exact solution is y(t) = 1/(t − 1.1). The Lipschitz constant is L = 600, so to be on the safe side the subinterval lengths should be less than 1/(1 + (3/2)(600)) = 1/901. A problem of this size is better handled with software such as MATLAB; MAPLE and MATHEMATICA are not recommended here, since they are primarily symbolic packages.
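The subinterval-length bound quoted in these examples is easy to tabulate. The small helper below (our own illustration) evaluates the bound 1/(1 + (3/2)L) used above and the corresponding minimum number of equal-length subintervals of [0, 1]; for L = 600 it reproduces the value 1/901 quoted in Example 5.3.2.

```python
import math

def max_subinterval_length(L):
    """Safe subinterval length 1 / (1 + (3/2) L) quoted in the examples."""
    return 1.0 / (1.0 + 1.5 * L)

def min_equal_subintervals(L, interval_length=1.0):
    """Smallest n so that equal subintervals of the interval respect the bound."""
    return math.ceil(interval_length / max_subinterval_length(L))

for L in (0.5, 6, 600):
    print(L, max_subinterval_length(L), min_equal_subintervals(L))
```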
Example 5.3.3. Consider

y'' = −e^{−2y},    y(0) = 0,    y(1) = ln 2.    (5.38)

The exact solution is y(t) = ln(1 + t). We divided the interval [0, 1] into three subintervals, not necessarily of the same length. For this problem the Lipschitz constant is L = 1/2, so to be on the safe side the subinterval lengths should be less than 1/(1 + (3/2)(1/2)) = 4/7. We therefore introduce the subintervals

[0, 3/4],  [3/4, 7/8],  [7/8, 1].

After 4 iterations the maximum error is E = 0.028; the error plot is shown below.