Nonlinear Dynamics
Y. Zarmi
PART II: PERTURBATION METHODS
4. How to assess the quality of an approximation
4.1 Preliminaries
We will be mainly interested in weakly perturbed harmonic motion:

$$\frac{d^2 x}{d\tau^2} + \omega_0^2\, x = \varepsilon\, f\!\left(x, \frac{dx}{d\tau}, \tau; \varepsilon\right) \qquad (4.1)$$
Change the time variable into a dimensionless one:

$$t = \omega_0\, \tau \qquad (4.2)$$

In terms of t, Eq. (4.1) becomes

$$\ddot{x} + x = \frac{\varepsilon}{\omega_0^2}\, f(x, \dot{x}, t; \varepsilon) \qquad (4.3)$$
Thus, one can always assume that the unperturbed frequency is equal to unity, and forget about the
auxiliary variable, τ.
Eq. (4.3) can be transformed into a pair of first-order coupled equations in two unknowns:

$$\dot{x} = y, \qquad \dot{y} = -x + \varepsilon\, f(x, \dot{x}, t; \varepsilon) \qquad (4.4)$$

or

$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \varepsilon \begin{pmatrix} 0 \\ f(x, y, t; \varepsilon) \end{pmatrix} \qquad (4.5)$$
where ω₀ has been replaced by 1. One can also use polar coordinates,

$$x = r\cos\theta, \qquad y = -r\sin\theta \qquad (4.6)$$

to obtain

$$\dot{r} = -\varepsilon \sin\theta\, f, \qquad \dot{\theta} = 1 - \varepsilon\, \frac{\cos\theta}{r}\, f \qquad (4.7)$$

Now, defining a new angular variable,

$$\theta = t + \varphi \qquad (4.8)$$

we obtain
$$\frac{d}{dt}\begin{pmatrix} r \\ \varphi \end{pmatrix} = \varepsilon \begin{pmatrix} -\sin(t+\varphi)\, f \\ -\cos(t+\varphi)\, f / r \end{pmatrix} \equiv \varepsilon \begin{pmatrix} g(r, \varphi, t; \varepsilon) \\ h(r, \varphi, t; \varepsilon) \end{pmatrix} \qquad (4.9)$$
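As a numerical sanity check, the Cartesian system (4.5) and the slow-variable system (4.9) can be integrated side by side and compared. The following sketch (not part of the original text) uses a sample perturbation f = x³ and a hand-rolled RK4 integrator; any smooth f would do.

```python
import math

# Sketch (not from the text): integrate the Cartesian system (4.5) and the
# slow-variable system (4.9) side by side and check that they describe the
# same motion. The perturbation f(x, y, t) = x**3 is a sample choice.

EPS = 0.05

def f(x, y, t):
    return x ** 3

def cartesian_rhs(t, s):
    x, y = s
    return [y, -x + EPS * f(x, y, t)]

def polar_rhs(t, s):
    # Eq. (4.9): theta = t + phi, x = r*cos(theta), y = -r*sin(theta)
    r, phi = s
    th = t + phi
    fv = f(r * math.cos(th), -r * math.sin(th), t)
    return [-EPS * math.sin(th) * fv, -EPS * math.cos(th) * fv / r]

def rk4(rhs, s0, t1, n):
    t, s, h = 0.0, list(s0), t1 / n
    for _ in range(n):
        k1 = rhs(t, s)
        k2 = rhs(t + h / 2, [a + h / 2 * b for a, b in zip(s, k1)])
        k3 = rhs(t + h / 2, [a + h / 2 * b for a, b in zip(s, k2)])
        k4 = rhs(t + h, [a + h * b for a, b in zip(s, k3)])
        s = [a + h / 6 * (p + 2 * q + 2 * u + v)
             for a, p, q, u, v in zip(s, k1, k2, k3, k4)]
        t += h
    return s

T = 20.0
x, y = rk4(cartesian_rhs, [1.0, 0.0], T, 20000)   # x(0)=1, y(0)=0
r, phi = rk4(polar_rhs, [1.0, 0.0], T, 20000)     # r(0)=1, phi(0)=0
x2, y2 = r * math.cos(T + phi), -r * math.sin(T + phi)
print(abs(x - x2), abs(y - y2))  # both tiny
```

The two descriptions agree to the accuracy of the integrator, as they must, since (4.9) is an exact change of variables.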
4.2 Error estimates [9]
With either Eqs. (4.5) or (4.9), the problem has been reduced from a slightly perturbed harmonic
oscillator (second order differential equation in one variable) to two first order coupled equations in
two variables with a small nonlinear perturbation. We shall, therefore, address a somewhat
generalized problem of the form
$$\frac{dx}{dt} = \varepsilon\, f(x, t; \varepsilon), \qquad x(t=0) = x_0, \qquad x, f \in D \subseteq \mathbb{R}^n \qquad (4.10)$$
where f satisfies a Lipschitz condition in the domain D. By the existence and uniqueness theorem, a unique solution x(t) of Eq. (4.10) exists for some time interval 0 ≤ t ≤ t₁. Assume that a function y(t) ∈ ℝⁿ is proposed as an approximation to the exact (but unknown) solution of Eq. (4.10). We do not expect y(t) to obey Eq. (4.10) but, rather,
$$\frac{dy}{dt} = \varepsilon\, f(y, t; \varepsilon) + \rho(t; \varepsilon), \qquad y(t=0) = y_0, \qquad \|x_0 - y_0\| \equiv \alpha(\varepsilon) \qquad (4.11)$$
For y to be a good approximation for x, ρ(t;ε) and α(ε) must be small in some sense. In fact, they
should satisfy (at least)
$$\|\rho(t; \varepsilon)\|,\ \alpha(\varepsilon)\ \xrightarrow{\ \varepsilon \to 0\ }\ 0 \qquad (4.12)$$
Our goal now is to estimate the quality of the approximation that y(t) constitutes for x(t), as well as the time span over which that quality is retained. To this end we consider the equation satisfied by ξ ≡ x − y:
$$\frac{d\xi}{dt} = \varepsilon\left[ f(y(t)+\xi(t), t; \varepsilon) - f(y(t), t; \varepsilon) \right] - \rho(t; \varepsilon), \qquad \xi(t=0) = \alpha(\varepsilon) \qquad (4.13)$$
The formal solution of Eq. (4.13), at least for 0 ≤ t ≤ t₁, is

$$\xi(t) = \alpha(\varepsilon) - \int_0^t \rho(s; \varepsilon)\, ds + \varepsilon \int_0^t \left[ f(y(s)+\xi(s), s; \varepsilon) - f(y(s), s; \varepsilon) \right] ds \qquad (4.14)$$
One is tempted to make a Taylor expansion of the third term in Eq. (4.14), but one can do better using the following trick:

$$Z \equiv \int_0^1 (1-\eta)\, \frac{\partial^2 f}{\partial x_i\, \partial x_j}(x = y + \eta\xi)\, \xi_i\, \xi_j\, d\eta = \int_0^1 (1-\eta)\, \frac{d^2}{d\eta^2}\, f(x = y + \eta\xi)\, d\eta$$

(summation over dummy indices is assumed). Integration by parts gives
$$Z = \left[(1-\eta)\,\frac{df}{d\eta}\right]_0^1 + \int_0^1 \frac{df}{d\eta}\, d\eta = -\frac{df}{d\eta}(\eta=0) + f(\eta=1) - f(\eta=0) = -\frac{\partial f(y)}{\partial y_i}\, \xi_i + f(y+\xi) - f(y)$$
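The integral identity just obtained is the integral form of the Taylor remainder. A quick numerical check for a scalar function (f = sin, with sample values of y and ξ; all choices here are illustrative):

```python
import math

# Check the identity for scalar f(x) = sin(x): the weighted integral of the
# second derivative equals the Taylor remainder f(y+xi) - f(y) - f'(y)*xi.
# y and xi are sample values.

f, fp, fpp = math.sin, math.cos, lambda x: -math.sin(x)
y, xi = 0.7, 0.3

# Composite Simpson quadrature of  int_0^1 (1-eta) f''(y+eta*xi) xi^2 deta
n = 1000  # even number of subintervals
h = 1.0 / n
acc = 0.0
for i in range(n + 1):
    eta = i * h
    w = 1 if i in (0, n) else (4 if i % 2 else 2)
    acc += w * (1 - eta) * fpp(y + eta * xi) * xi ** 2
Z = acc * h / 3

remainder = f(y + xi) - f(y) - fp(y) * xi
print(Z, remainder)  # agree to quadrature accuracy
```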
Eq. (4.14) becomes

$$\xi(t) = \alpha(\varepsilon) - \int_0^t \rho(s; \varepsilon)\, ds + \varepsilon \int_0^t \left[ h_i(y(s), s; \varepsilon)\, \xi_i(s) + \int_0^1 (1-\eta)\, g_{ij}(y(s)+\eta\xi(s), s; \varepsilon)\, \xi_i(s)\, \xi_j(s)\, d\eta \right] ds \qquad (4.15)$$

$$h_i \equiv \frac{\partial f}{\partial x_i}, \qquad g_{ij} \equiv \frac{\partial^2 f}{\partial x_i\, \partial x_j}$$
For an appropriately "well behaved" f, and a "sufficiently good" approximation y(t), assume that

$$\|h\| \le M, \qquad \|g\| \le N, \qquad \|\rho\| \le \rho_0 \qquad (x \in D,\ t \le T/\varepsilon) \qquad (4.16)$$

with M, N, ρ₀ constants. With these bounds, Eq. (4.15) yields an inequality
$$\|\xi(t)\| \le \alpha(\varepsilon) + \rho_0\, t + \varepsilon M \int_0^t \|\xi(s)\|\, ds + \tfrac{1}{2}\,\varepsilon N \int_0^t \|\xi(s)\|^2\, ds$$

$$\le \underbrace{\alpha(\varepsilon) + \rho_0\, (T/\varepsilon) + \varepsilon M \int_0^t \|\xi(s)\|\, ds + \tfrac{1}{2}\,\varepsilon N \int_0^t \|\xi(s)\|^2\, ds}_{S(t)} \qquad (4.17)$$
In a manner similar to the proof of the Gronwall lemma, Eq. (4.17) yields

$$\frac{dS}{dt} = \varepsilon M \|\xi\| + \tfrac{1}{2}\,\varepsilon N \|\xi\|^2 \le \varepsilon\, S\left(M + \tfrac{1}{2} N S\right)$$

$$\Rightarrow\quad \frac{dS}{S\left(M + \tfrac{1}{2} N S\right)} \le \varepsilon\, dt \quad\Rightarrow\quad \frac{S}{M + \tfrac{1}{2} N S}\cdot\frac{M + \tfrac{1}{2} N L}{L} \le \exp(\varepsilon M t) \qquad \left(S(t=0) = L\right)$$

$$\Rightarrow\quad S \le \frac{L\, \dfrac{M}{M + \tfrac{1}{2} N L}\, \exp(\varepsilon M t)}{1 - \dfrac{\tfrac{1}{2} N L}{M + \tfrac{1}{2} N L}\, \exp(\varepsilon M t)} \qquad (4.18)$$

Consequently,
$$\|\xi(t)\| \le \left(\alpha(\varepsilon) + \rho_0\, (T/\varepsilon)\right) C, \qquad C \equiv \frac{M \exp(MT)}{M - \tfrac{1}{2} N L\, \left(\exp(MT) - 1\right)}, \qquad t \le T/\varepsilon \qquad (4.19)$$
Thus, while all we know about the solution of Eq. (4.10) is that a unique solution x(t) exists for some interval in time, the properties of f(x,t;ε) and y(t) [Eq. (4.16)] guarantee that Eq. (4.13) for ξ(t) has a unique and bounded solution for 0 ≤ t ≤ T/ε. But, given the approximation y(t), this implies that x(t) has a solution for 0 ≤ t ≤ T/ε, satisfying

$$x(t) = y(t) + \xi(t), \qquad \|x(t) - y(t)\| = \|\xi(t)\| \le \left(\alpha(\varepsilon) + \rho_0\, (T/\varepsilon)\right) C, \qquad 0 \le t \le T/\varepsilon \qquad (4.20)$$
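To see the theorem in action on the simplest possible case, here is an illustrative scalar example (not from the text): dx/dt = εx with the truncated approximation y = 1 + εt, for which ρ = −ε²t, α = 0, M = 1, N = 0, and the constant of (4.19) reduces to C = exp(MT).

```python
import math

# Toy scalar illustration (not from the text): dx/dt = eps*x, x(0) = 1, with
# exact solution exp(eps*t) and truncated approximation y(t) = 1 + eps*t.
# Then rho(t) = dy/dt - eps*y = -eps**2 * t and alpha = 0; with M = 1, N = 0
# the constant of Eq. (4.19) is C = exp(M*T). Check that the bound of
# Eq. (4.20) dominates the true error up to t = T/eps.

EPS, T = 0.01, 1.0
M = 1.0                          # |df/dx| for f(x) = x
rho0 = EPS ** 2 * (T / EPS)      # max |rho| on 0 <= t <= T/eps
C = math.exp(M * T)
bound = (0.0 + rho0 * (T / EPS)) * C

t_end = T / EPS
true_err = max(abs(math.exp(EPS * t) - (1 + EPS * t))
               for t in (i * t_end / 1000 for i in range(1001)))
print(true_err, bound)  # true_err <= bound
```

Here both the bound and the true error are O(1) at t = T/ε, so in this particular case the estimate is not far off.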
The error estimate theorem just proven is a basic tool. What is the significance of the result? The deviation of the approximation, y(t), from the exact solution, x(t), over an extended time interval (of O(1/ε)) is bounded by the combined effect of the error in the initial condition, α(ε), and of the largest possible deviation of the integral over the error in the differential equation (4.11), ρ₀ (T/ε). Thus, if the maximal deviation in the equation satisfies

$$\frac{\rho_0(\varepsilon)}{\varepsilon}\ \xrightarrow{\ \varepsilon \to 0\ }\ 0$$

then ξ = x − y constitutes a small deviation, so that y(t) is a good approximation for x(t) for t ≤ O(1/ε).
For instance, assume that y(t) is given by

$$y(t) = \sum_{k=0}^{N} \varepsilon^k\, y_k \qquad (4.21)$$
where the yₖ are bounded, and that ρ₀, the maximum error in the equation, as well as α(ε), the error in the initial condition, satisfy

$$\rho_0 = O(\varepsilon^{N+1}), \qquad \alpha(\varepsilon) = O(\varepsilon^{N+1}) \qquad (4.22)$$

(naturally, since y is given up to O(ε^N)). Despite the fact that the error in the initial condition is O(ε^{N+1}), and that y(t) includes terms through O(ε^N), the overall error incurred in the approximation is O(ε^N) (∼ ρ₀/ε). This is a typical example of the weakness of proofs that rely on the Gronwall lemma: unless additional information is provided, the bounds obtained are somewhat weak. In fact, the following is easy to see.
Theorem

Suppose that, in an expansion through Nth order of an approximation y(t) to the solution of Eq. (4.10), ρ₀ and α(ε) are both O(ε^{N+1}) (thus, the error estimate is O(ε^N)). In addition, suppose that the (N+1)st-order term, y_{N+1}, is bounded relative to y_N for all t ≤ O(1/ε). Then the error estimate can be improved:

$$\left\| x(t) - \sum_{k=0}^{N} \varepsilon^k\, y_k \right\| = O(\varepsilon^{N+1}), \qquad t \le O(1/\varepsilon) \qquad (4.23)$$
Proof

$$\left\| x(t) - \sum_{k=0}^{N} \varepsilon^k\, y_k \right\| = \left\| x(t) - \sum_{k=0}^{N+1} \varepsilon^k\, y_k + \varepsilon^{N+1}\, y_{N+1} \right\| \le \left\| x(t) - \sum_{k=0}^{N+1} \varepsilon^k\, y_k \right\| + \varepsilon^{N+1} \left\| y_{N+1} \right\| = O(\varepsilon^{N+1}) \qquad (4.24)$$

The first term on the r.h.s. is O(ε^{N+1}) for t ≤ O(1/ε) by the error estimate theorem (applied to the expansion through order N+1), and the second term is O(ε^{N+1}) by the boundedness of y_{N+1}.
Thus, additional information about the next order in the expansion (here, the fact that y_{N+1} is bounded relative to y_N) enables us to improve the error estimate in a given order. This explains why the elimination of secular terms (terms that grow indefinitely in time in a perturbation expansion), a topic discussed extensively in the following chapters, is so important: their occurrence increases the estimated errors. The issue of error estimates is studied in [9-12] for harmonic motion with a small nonlinear perturbation (i.e., second order equations in one unknown).
4.3 Example: Harmonic oscillator with modified frequency

Consider the equation

$$\ddot{x} + (1 + 2\varepsilon)\, x = 0, \quad x(0) = 1,\ \dot{x}(0) = 0 \quad\Rightarrow\quad x = \cos\!\left(\sqrt{1+2\varepsilon}\; t\right), \quad \dot{x} = -\sqrt{1+2\varepsilon}\, \sin\!\left(\sqrt{1+2\varepsilon}\; t\right) \qquad (4.25)$$
Now, go over to two first-order coupled equations:

$$\dot{x} = y, \qquad \dot{y} = -x - 2\varepsilon x \qquad (4.26)$$
Transforming to polar coordinates, x = ρ cos θ, y = −ρ sin θ, we obtain

$$\dot{\rho} = \varepsilon\, \rho \sin 2\theta \qquad (4.27)$$

$$\dot{\theta} = 1 + \varepsilon\, (1 + \cos 2\theta) \qquad (4.28)$$

In terms of the slow angular variable φ, where θ = t + φ, we find

$$\dot{\rho} = \varepsilon\, \rho \sin 2(t+\phi) \qquad (4.27a)$$

$$\dot{\phi} = \varepsilon\, \left(1 + \cos 2(t+\phi)\right) \qquad (4.28a)$$
Let

$$\tilde{\rho} = 1, \quad \tilde{\phi} = \varepsilon t \quad\Rightarrow\quad \tilde{x} = \cos\!\left((1+\varepsilon)\, t\right), \quad \tilde{y} = -\sin\!\left((1+\varepsilon)\, t\right) \qquad (4.29)$$

be an approximation to the exact solution (see Eq. (4.25)). The equation satisfied by the approximation is
$$\frac{d}{dt}\begin{pmatrix} \tilde{\rho} \\ \tilde{\phi} \end{pmatrix} = \begin{pmatrix} 0 \\ \varepsilon \end{pmatrix} = \begin{pmatrix} \varepsilon\, \tilde{\rho} \sin 2(t + \tilde{\phi}) \\ \varepsilon\left(1 + \cos 2(t + \tilde{\phi})\right) \end{pmatrix} + \rho(t; \varepsilon)$$

$$\Rightarrow\quad \rho(t; \varepsilon) = \begin{pmatrix} -\varepsilon \sin 2(t + \tilde{\phi}) \\ -\varepsilon \cos 2(t + \tilde{\phi}) \end{pmatrix} \quad\Rightarrow\quad \|\rho(t; \varepsilon)\| \le 2\varepsilon \qquad (4.30)$$

and, since the initial conditions are exact,

$$\alpha(\varepsilon) = \left\| \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\| = 0$$
Consequently, based on the error estimate theorem,

$$\|\xi(t)\| = \left\| \begin{pmatrix} \rho \\ \phi \end{pmatrix} - \begin{pmatrix} \tilde{\rho} \\ \tilde{\phi} \end{pmatrix} \right\| \le \alpha(\varepsilon) + \rho_0\, (T/\varepsilon) \le 2T = O(1) \qquad (4.31)$$
Thus, for t = O(1/ε), the error estimate based on the theorem is O(1). This, as we shall see immediately, is a poor estimate, which demonstrates the weakness of the error estimate theorem: the bound it provides is based on minimal information. In the present example we have the best possible information, namely, the exact solution. Therefore, we can check directly how good the approximation (4.29) is, by expanding the solution of Eq. (4.25):

$$\rho = \sqrt{x^2 + y^2} = \sqrt{1 + \varepsilon\left(1 - \cos\!\left(2\sqrt{1+2\varepsilon}\; t\right)\right)} \qquad (4.32)$$

which yields

$$\rho = 1 + O(\varepsilon) \qquad \text{for all } t \ge 0$$
Similarly,

$$\theta = \tan^{-1}\!\left(-\frac{y}{x}\right) = \tan^{-1}\!\left(\sqrt{1+2\varepsilon}\, \tan\!\left(\sqrt{1+2\varepsilon}\; t\right)\right) = t + \varepsilon t + \tfrac{1}{2}\,\varepsilon \sin 2t + \tfrac{1}{2}\,\varepsilon^2 t\,(2\cos 2t - 1) - \tfrac{1}{2}\,\varepsilon^2\left(1 - \tfrac{1}{2}\cos 2t\right) + \cdots = t + \varepsilon t + O(\varepsilon), \qquad t \le O(1/\varepsilon) \qquad (4.33)$$

Thus,

$$\phi = \varepsilon t + O(\varepsilon), \qquad t \le O(1/\varepsilon) \qquad (4.34)$$
(4.34)
In summary, while the error estimate theorem yields an O(1) estimate for t ≤ O(1/ε), the additional
information (in this case, knowledge of the actual solution) yields a far better error estimate.
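This can also be checked numerically: integrating the slow equations (4.27a)-(4.28a) and comparing with ρ̃ = 1, φ̃ = εt shows an O(ε) deviation up to t = 1/ε, rather than the O(1) bound of the theorem. (Sketch; the step size and the value of ε are sample choices.)

```python
import math

# Sketch: integrate the slow equations (4.27a)-(4.28a) numerically and compare
# with the approximation rho~ = 1, phi~ = eps*t of Eq. (4.29). The observed
# deviation stays O(eps) up to t = 1/eps, far better than the O(1) bound.

EPS = 0.01

def rhs(t, s):
    rho, phi = s
    return [EPS * rho * math.sin(2 * (t + phi)),
            EPS * (1 + math.cos(2 * (t + phi)))]

n = 100000
h = (1.0 / EPS) / n          # integrate up to t = 1/eps
s = [1.0, 0.0]               # rho(0) = 1, phi(0) = 0
err = 0.0
for i in range(n):
    t = i * h
    k1 = rhs(t, s)
    k2 = rhs(t + h / 2, [a + h / 2 * b for a, b in zip(s, k1)])
    k3 = rhs(t + h / 2, [a + h / 2 * b for a, b in zip(s, k2)])
    k4 = rhs(t + h, [a + h * b for a, b in zip(s, k3)])
    s = [a + h / 6 * (p + 2 * q + 2 * u + v)
         for a, p, q, u, v in zip(s, k1, k2, k3, k4)]
    err = max(err, abs(s[0] - 1.0), abs(s[1] - EPS * (i + 1) * h))
print(err)  # O(eps), not O(1)
```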
4.4 Example: Precession of Mercury around the sun [9]

Kepler's laws constitute an excellent approximation for planetary motion. However, they are slightly modified by general relativistic effects (the potential deviates slightly from the 1/r law). The equations of motion for the radius, r(t), and the angle, θ(t), yield an elliptic orbit in the plane. They can be reduced to a single equation relating r and θ. Measuring the radius in units of the average radius, r̄ (for Mercury roughly 5.8×10¹⁰ m), one defines

$$u(\theta) \equiv \frac{\bar{r}}{r} \qquad (4.35)$$
and obtains for the unperturbed motion

$$\frac{d^2 u}{d\theta^2} + u = a \qquad \left( a = \frac{G M \bar{r}}{l^2} \right) \qquad (4.36)$$
Thus, the variable u obeys an oscillator's equation, with θ playing the role of "time". Here a is the dimensionless short radius, G is the gravitational constant, M is the solar mass and l is the angular momentum per unit mass of the planet. Eq. (4.36) is solved by

$$u = a + (b - a)\cos\theta \qquad (4.37)$$

where b is the dimensionless long radius.
The perturbation due to general relativistic effects modifies Eq. (4.36) into

$$\frac{d^2 u}{d\theta^2} + u = a + \varepsilon\, u^2 \qquad \left( \varepsilon = \frac{3 G M}{c^2\, \bar{r}} \approx 10^{-7} \right) \qquad (4.38)$$
The formal solution of this equation is given by the equivalent integral equation:

$$u = a + (b - a)\cos\theta + \varepsilon \int_0^\theta \sin(\theta - \tau)\, u(\tau)^2\, d\tau \qquad (4.39)$$
In naive perturbation theory one substitutes the zero-order approximation for u(τ) inside the integral in Eq. (4.39). This leads to an expansion procedure that starts as follows:
$$u = a + (b-a)\cos\theta + \varepsilon \int_0^\theta \sin(\theta-\tau)\left[ a^2 + (b-a)^2 \cos^2\tau + 2a(b-a)\cos\tau \right] d\tau + O(\varepsilon^2)$$

$$= a + (b-a)\cos\theta + \varepsilon\left\{ \left( a^2 + \tfrac{1}{2}(b-a)^2 \right)(1 - \cos\theta) + \tfrac{1}{6}(b-a)^2 (\cos\theta - \cos 2\theta) + a(b-a)\, \theta \sin\theta \right\} + O(\varepsilon^2) \equiv v + O(\varepsilon^2) \qquad (4.40)$$
where v denotes the combined contribution of the zero and first orders. Here we encounter an example of a "secular term". The term

$$\varepsilon\, a\, (b-a)\, \theta \sin\theta$$

in Eq. (4.40) becomes unbounded in time. Owing to its appearance, the O(ε) character of the first-order correction is retained only for θ ≤ O(ε⁰). The origin of the name is the fact that, as ε is very small, it takes this problematic term centuries to become sizable (siècle = century in French).
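The secular growth can be seen numerically by comparing v of Eq. (4.40) with a direct integration of Eq. (4.38). The values of a, b and ε below are sample choices (a realistic ε for Mercury would be far too small to show anything in a short run):

```python
import math

# Sketch: compare the naive first-order expansion v of Eq. (4.40) with a
# numerical solution of Eq. (4.38). The secular term eps*a*(b-a)*theta*sin(theta)
# makes v drift away from u as theta approaches 1/eps. a, b and eps are
# sample values.

EPS, a, b = 0.01, 1.0, 1.2

def v(th):
    return (a + (b - a) * math.cos(th)
            + EPS * ((a ** 2 + 0.5 * (b - a) ** 2) * (1 - math.cos(th))
                     + (b - a) ** 2 * (math.cos(th) - math.cos(2 * th)) / 6
                     + a * (b - a) * th * math.sin(th)))

def rhs(u, up):
    # u'' + u = a + eps*u**2, written as a first-order system
    return up, a + EPS * u ** 2 - u

u, up = b, 0.0          # u(0) = b, u'(0) = 0
n = 100000
h = 100.0 / n           # integrate up to theta = 1/eps = 100
e_early = e_late = 0.0
for i in range(n):
    k1 = rhs(u, up); k2 = rhs(u + h / 2 * k1[0], up + h / 2 * k1[1])
    k3 = rhs(u + h / 2 * k2[0], up + h / 2 * k2[1])
    k4 = rhs(u + h * k3[0], up + h * k3[1])
    u += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    up += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    th = (i + 1) * h
    d = abs(v(th) - u)
    if th <= 5.0:
        e_early = max(e_early, d)
    elif th >= 95.0:
        e_late = max(e_late, d)
print(e_early, e_late)  # e_late is much larger: the secular term at work
```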
In terms of the error estimate theorem this can be looked upon as follows. If we use v(θ) as an approximation for u(θ), then the equation satisfied by v will be

$$\frac{d^2 v}{d\theta^2} + v = a + \varepsilon\, v^2 + \rho(\theta; \varepsilon), \qquad \rho(\theta; \varepsilon) = \varepsilon\left( \theta \cdot O(\varepsilon^0) + O(\varepsilon^0) \right) + \cdots \qquad (4.41)$$

where the O(ε⁰) terms in brackets are combinations (the exact structure of which is not important for the present analysis) of constants and trigonometric functions. Thus, based on the error estimate theorem, v(θ) is an O(ε⁰) approximation for u(θ) only for short times. Over times of O(1/ε) it becomes a bad, O(1/ε), approximation, again an indication of the weakness of the theorem. Direct inspection of v(θ) indicates that it may be a better approximation than concluded from the error estimate theorem by one order of ε (O(ε) and O(ε⁰), respectively). This can be made even more striking by choosing a better approximation. Write
be made even more outstanding by choosing a better approximation. Write
u=a+w
w( 0) = b − a
(4.42)
which, when substituted in Eq. (4.38) yields
d 2w
2
2 + w = ε (a + w)
dθ
Now go over to polar coordinates, to obtain
w = r cosΦ
dr
= −ε sin Φ (a + r cosΦ )
dθ
dΦ
cosΦ
=1 − ε
( a + r cosΦ)
dθ
r
(4.43)
dw
= −r sin Φ
dθ
r(θ = 0) = b − a
(4.44)
Φ(θ = 0 ) = 0
Separating the slow from the fast θ-dependence in the angular variable, we find Φ = θ + φ, whence

$$\frac{dr}{d\theta} = -\varepsilon \sin(\theta+\varphi)\, \left(a + r\cos(\theta+\varphi)\right)^2, \qquad \frac{d\varphi}{d\theta} = -\varepsilon\, \frac{\cos(\theta+\varphi)}{r}\, \left(a + r\cos(\theta+\varphi)\right)^2 \qquad (4.45)$$
φ, the slow part of the phase, and the radius, r, vary very little as θ varies over a whole 2π cycle. Thus, we can obtain an approximate solution by considering the averages of the right-hand sides of the equations in Eq. (4.45) over a period of 2π in θ. Averaging is carried out with r and φ frozen. (This, essentially, is the method of averaging, which will be discussed in detail later on.)

$$\frac{dr}{d\theta} \cong 0, \qquad \frac{d\varphi}{d\theta} \cong -\varepsilon a \qquad (4.46)$$

yielding

$$r \cong r_0 \equiv b - a, \qquad \varphi \cong \varphi_0 \equiv -\varepsilon a \theta, \qquad w \cong w_0 \equiv (b-a)\cos\!\left((1 - \varepsilon a)\, \theta\right) \qquad (4.47)$$
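In contrast with the naive expansion, the averaged approximation w₀ tracks the solution of Eq. (4.43) to within O(ε) all the way to θ = 1/ε. A numerical sketch (a, b and ε are sample values):

```python
import math

# Sketch: check the averaged approximation w0 of Eq. (4.47) against a numerical
# solution of Eq. (4.43), w'' + w = eps*(a + w)**2, w(0) = b - a, w'(0) = 0.
# Unlike the naive expansion, w0 stays within O(eps) of the solution all the
# way to theta = 1/eps. a, b and eps are sample values.

EPS, a, b = 0.01, 1.0, 1.2
r0 = b - a

def w0(th):
    return r0 * math.cos((1 - EPS * a) * th)

def rhs(w, wp):
    return wp, EPS * (a + w) ** 2 - w

w, wp = r0, 0.0
n = 100000
h = 100.0 / n           # theta up to 1/eps = 100
e_max = 0.0
for i in range(n):
    k1 = rhs(w, wp); k2 = rhs(w + h / 2 * k1[0], wp + h / 2 * k1[1])
    k3 = rhs(w + h / 2 * k2[0], wp + h / 2 * k2[1])
    k4 = rhs(w + h * k3[0], wp + h * k3[1])
    w += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    wp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    e_max = max(e_max, abs(w0((i + 1) * h) - w))
print(e_max)  # stays O(eps)
```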
We now want to insert r₀ and φ₀ into Eqs. (4.45). Since they will not satisfy them exactly, the error generating term ρ(θ;ε) will arise:

$$\frac{dr_0}{d\theta} = 0 = -\varepsilon \sin(\theta+\varphi_0)\left(a + r_0\cos(\theta+\varphi_0)\right)^2 + \rho_r(\theta;\varepsilon)$$

$$\frac{d\varphi_0}{d\theta} = -\varepsilon a = -\varepsilon\, \frac{\cos(\theta+\varphi_0)}{r_0}\left(a + r_0\cos(\theta+\varphi_0)\right)^2 + \rho_\varphi(\theta;\varepsilon) \qquad (4.48)$$

$$\rho(\theta;\varepsilon) = \begin{pmatrix} \rho_r \\ \rho_\varphi \end{pmatrix} = \varepsilon \begin{pmatrix} \sin(\theta+\varphi_0)\left(a + r_0\cos(\theta+\varphi_0)\right)^2 \\ \dfrac{\cos(\theta+\varphi_0)}{r_0}\left(a + r_0\cos(\theta+\varphi_0)\right)^2 - a \end{pmatrix}$$

$$= \varepsilon \begin{pmatrix} \left(a^2 + \tfrac{1}{4}\, r_0^{\,2}\right)\sin(\theta+\varphi_0) + a\, r_0 \sin 2(\theta+\varphi_0) + \tfrac{1}{4}\, r_0^{\,2} \sin 3(\theta+\varphi_0) \\ \dfrac{1}{r_0}\left(a^2 + \tfrac{3}{4}\, r_0^{\,2}\right)\cos(\theta+\varphi_0) + a \cos 2(\theta+\varphi_0) + \tfrac{1}{4}\, r_0 \cos 3(\theta+\varphi_0) \end{pmatrix} \qquad (4.49)$$
With no error in the initial conditions, the error estimate theorem yields

$$\left\| \begin{pmatrix} r(\theta) \\ \Phi(\theta) \end{pmatrix} - \begin{pmatrix} r_0(\theta) \\ \Phi_0(\theta) \end{pmatrix} \right\| \le \rho_0\, (T/\varepsilon), \qquad \theta \le T/\varepsilon \qquad (4.50)$$

where ρ₀ = ε·Const. Thus, based on the theorem, the error is O(ε⁰) for θ ≤ O(1/ε), again weaker than what can be shown directly. Indeed, let us define
$$R = \begin{pmatrix} r \\ \Phi \end{pmatrix} - \begin{pmatrix} r_0 \\ \Phi_0 \end{pmatrix}$$

This definition yields

$$\frac{dR}{d\theta} = \frac{d}{d\theta}\left( \begin{pmatrix} r \\ \Phi \end{pmatrix} - \begin{pmatrix} r_0 \\ \Phi_0 \end{pmatrix} \right) = \varepsilon \begin{pmatrix} -\sin(\theta+\Phi)\left(a + r\cos(\theta+\Phi)\right)^2 \\ -\dfrac{\cos(\theta+\Phi)}{r}\left(a + r\cos(\theta+\Phi)\right)^2 + a \end{pmatrix}$$

$$= -\varepsilon \begin{pmatrix} \left(a^2 + \tfrac{1}{4}\, r^2\right)\sin(\theta+\Phi) + a\, r \sin 2(\theta+\Phi) + \tfrac{1}{4}\, r^2 \sin 3(\theta+\Phi) \\ \dfrac{1}{r}\left(a^2 + \tfrac{3}{4}\, r^2\right)\cos(\theta+\Phi) + a \cos 2(\theta+\Phi) + \tfrac{1}{4}\, r \cos 3(\theta+\Phi) \end{pmatrix} \qquad (4.51)$$
The residual on the r.h.s. of Eq. (4.51) is proportional to ε. It depends on r and on trigonometric functions of θ+Φ that average to zero over a 2π cycle in θ. The constant contribution, εa, in the lower component has been eliminated. The solution for R is obtained by integrating Eq. (4.51):
$$R = -\varepsilon \int_0^\theta \begin{pmatrix} \left(a^2 + \tfrac{1}{4}\, r^2\right)\sin(\tau+\Phi) + a\, r \sin 2(\tau+\Phi) + \tfrac{1}{4}\, r^2 \sin 3(\tau+\Phi) \\ \dfrac{1}{r}\left(a^2 + \tfrac{3}{4}\, r^2\right)\cos(\tau+\Phi) + a \cos 2(\tau+\Phi) + \tfrac{1}{4}\, r \cos 3(\tau+\Phi) \end{pmatrix} d\tau$$

$$= -\varepsilon \int_0^{\theta+\Phi(\theta)} \begin{pmatrix} \left(a^2 + \tfrac{1}{4}\, r^2\right)\sin\chi + a\, r \sin 2\chi + \tfrac{1}{4}\, r^2 \sin 3\chi \\ \dfrac{1}{r}\left(a^2 + \tfrac{3}{4}\, r^2\right)\cos\chi + a \cos 2\chi + \tfrac{1}{4}\, r \cos 3\chi \end{pmatrix} \frac{d\chi}{1 + d\Phi/d\theta} \qquad (4.52)$$

where χ = τ + Φ(τ).
Through the error estimate theorem, r₀ is shown to be an O(ε⁰) approximation for r for θ ≤ O(1/ε). Thus, using Eq. (4.45) for dΦ/dθ, the integral in Eq. (4.52) becomes a product of ε times a combination of trigonometric functions. As each of the latter is bounded by 1, R will be O(ε) (at least for times of O(1/ε), over which r₀ and Φ₀ do not vary appreciably). This better estimate is achieved via the method of averaging, which singles out the problematic −εa term in Eq. (4.45) and eliminates it from Eq. (4.51). Otherwise, this term would have generated a secular contribution, −εaθ, in the approximate solution.
Exercises

4.1 Given

$$\|Z(t;\varepsilon)\| \le \varepsilon A + \varepsilon \int_0^t \left\{ B\, \|Z(s;\varepsilon)\| + \|Z(s;\varepsilon)\|^2 \right\} ds, \qquad t \ge 0, \quad 0 \le \varepsilon \ll 1, \quad A \ge 0, \quad B \ge 0$$

prove that

$$\|Z(t;\varepsilon)\| \le \frac{\varepsilon A B\, e^B}{B - \varepsilon A\, (e^B - 1)}, \qquad 0 \le t \le 1/\varepsilon$$
4.2 Consider the Duffing equation

$$\ddot{x} + x = \varepsilon\, x^3, \qquad x(0) = 1, \quad \dot{x}(0) = 0$$

Calculate the error for t ≤ O(1/ε) that accumulates in the approximations:

a. x = cos t

b. x = cos((1 − ⅜ε) t)

Hint: Convert the equation into first order equations for two unknowns, x and y, by defining

$$\dot{x} = y, \qquad \dot{y} = \ddot{x} = -x + \varepsilon\, x^3$$
4.3 Consider the equation

$$\ddot{x} + \varepsilon \cos t\; \dot{x}^2 + x = 0, \qquad x(0) = 1, \quad \dot{x}(0) = 0$$

a) Transform it into a set of two first order equations in the variables x and y (≡ ẋ).

b) Change the resulting equations into polar coordinates r, θ.

c) Change into the slow variables r, φ (θ = t + φ).

d) The solution of the equations obtained in step c) is approximated by

$$\tilde{r} = \frac{1}{1 - \tfrac{3}{8}\,\varepsilon t}, \qquad \tilde{\varphi} = 0$$

(1) Calculate the approximations for x and y;

(2) Calculate ||α(ε)||, the norm of the deviation of the approximate solution at t = 0 from the initial condition;

(3) Calculate ||ρ(ε)||, the norm of the deviation between the exact equations and the equations for r̃ and φ̃;

(4) Based on the error estimate theorem, what is the error incurred between the approximate solution and the exact one, and for what time range?

(5) What is the error estimate, and for what time range, if we are given that the approximation is the first term in a series expansion in ε, obeying the conditions of the theorem of Eq. (4.23)?