
TWO BI-ACCELERATOR IMPROVED WITH MEMORY
SCHEMES FOR SOLVING NONLINEAR EQUATIONS
J. P. JAISWAL
Abstract. The present paper is devoted to improving the R-order of convergence of the with memory derivative-free methods presented by Lotfi et al. (2014) without any new function evaluation. To achieve this aim, one more self-accelerating parameter is inserted, which is calculated with the help of Newton's interpolatory polynomial. It is first proved theoretically that the R-orders of convergence of the proposed schemes increase from 6 to 7 and from 12 to 14, respectively, without any extra evaluation. Smooth as well as non-smooth examples are discussed to confirm the theoretical results and the superiority of the proposed schemes.
Mathematics Subject Classification (2000). 65H05, 65B99.
Keywords and Phrases. Nonlinear equation, Lagrange’s interpolatory polynomial, Newton’s interpolatory polynomial, R-order of convergence, computational order of convergence.
1. Introduction
Finding the root of a nonlinear equation occurs frequently in scientific computation. Newton's method is the most well-known method for solving nonlinear equations and has quadratic convergence. However, the existence of the derivative in a neighborhood of the required root is compulsory for the convergence of Newton's method, which restricts its applications in practice. To overcome this difficulty, Steffensen replaced the first derivative of the function in the Newton iterate by a forward finite difference approximation. This method also possesses quadratic convergence and the same efficiency as Newton's method. Kung and Traub were pioneers in constructing optimal general multistep methods without memory. Moreover, they conjectured that any multistep method without memory using $n$ function evaluations may reach a convergence order of at most $2^{n-1}$ [9]. Thus both Newton's and Steffensen's methods are optimal in the sense of Kung and Traub. But the superiority of Steffensen's method over Newton's method
is that it is derivative free, so it can also be applied to non-differentiable equations. To compare iterative methods theoretically, Ostrowski [12] introduced the idea of the efficiency index, given by $p^{1/n}$, where $p$ is the order of convergence and $n$ is the number of function evaluations per iteration. In other words, an iterative method with a higher efficiency index is more efficient. To improve the convergence order as well as the efficiency index without adding any new function evaluations, Traub in his book introduced methods with memory. In fact, he changed Steffensen's method slightly as follows (see [19], pp. 185-187):
$x_0$, $\gamma_0$ are given suitably,
$$\gamma_k = -\frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})} \quad \text{for } k \geq 1,$$
$$x_{k+1} = x_k - \frac{\gamma_k f(x_k)^2}{f(x_k + \gamma_k f(x_k)) - f(x_k)}, \quad k = 0, 1, 2, \ldots \tag{1.1}$$
The parameter $\gamma_k$ is called a self-accelerator, and method (1.1) has R-order of convergence 2.414. The possibility of increasing the convergence order further by using more suitable parameters cannot be denied. During the last few years many authors have tried to construct iterative methods without memory that support this conjecture with optimal order; [1], [6], [7], [16], [2], [3], [5], [8], [15], [17] are a few of them. Although the construction of optimal methods without memory is still an active field, recently many authors have shifted their attention to developing more efficient methods with memory.
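For concreteness, the following is a minimal Python sketch of Traub's iteration (1.1); the function name, tolerance, and stopping rule are illustrative assumptions, not part of the original presentation.

```python
# A minimal sketch of Traub's accelerated Steffensen iteration (1.1).
def traub_with_memory(f, x0, gamma0, tol=1e-12, max_iter=50):
    xk, gk = x0, gamma0
    for _ in range(max_iter):
        fx = f(xk)
        if abs(fx) < tol:
            break
        # Steffensen-type step with the self-accelerating parameter gamma_k
        x_next = xk - gk * fx**2 / (f(xk + gk * fx) - fx)
        # gamma_{k+1} = -(x_{k+1} - x_k) / (f(x_{k+1}) - f(x_k))
        gk = -(x_next - xk) / (f(x_next) - fx)
        xk = x_next
    return xk

# e.g. traub_with_memory(lambda x: x*x - 2, 1.5, -0.3) approximates sqrt(2)
```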
In the convergence analysis of the new methods we employ the notation used in Traub's book [19]: if $\{m_k\}$ and $\{n_k\}$ are null sequences and $m_k/n_k \to C$, where $C$ is a non-zero constant, we shall write $m_k = O(n_k)$ or $m_k \sim C n_k$. We also use the concept of R-order of convergence introduced by Ortega and Rheinboldt [11]. Let $\{x_k\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a zero $\xi$ of the function $f$ with R-order $O_R((IM), \xi) \geq \alpha$, we will write
$$e_{k+1} \sim A_{k,\alpha}\, e_k^{\alpha},$$
where $A_{k,\alpha}$ tends to the asymptotic error constant $A_\alpha$ of the iterative method (IM) as $k \to \infty$.
The rest of the paper is organized as follows. In Section 2 we describe the existing two- and three-point with memory derivative-free schemes, whose convergence orders are then accelerated from six to seven and from twelve to fourteen, respectively, without any extra evaluation. The proposed methods are obtained by imposing one more suitable iterative parameter, which is calculated using Newton's interpolatory polynomial. A numerical study is presented in the next section to confirm the theoretical results. Finally, we give concluding remarks.
2. Brief Literature Review and Improved with Memory
Schemes
Two-step (double) and three-step (triple) Newton's methods can be respectively written as
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{f'(y_k)}, \quad k = 0, 1, 2, \ldots \tag{2.1}$$
and
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad z_k = y_k - \frac{f(y_k)}{f'(y_k)}, \qquad x_{k+1} = z_k - \frac{f(z_k)}{f'(z_k)}, \quad k = 0, 1, 2, \ldots \tag{2.2}$$
The orders of convergence of schemes (2.1) and (2.2) are increased to four and eight, respectively, but neither shows any improvement in efficiency compared with the original Newton's method. One major drawback of these schemes is that they also involve derivatives. To obtain schemes that are efficient as well as derivative free, Lotfi et al. [10] approximated the derivatives $f'(x_k)$, $f'(y_k)$ and $f'(z_k)$ by derivatives of Lagrange's interpolatory polynomials of degree one, two and three, respectively, which are given by
$$f'(x_k) \approx P_1'(x_k) = \frac{f(x_k)}{x_k - w_k} + \frac{f(w_k)}{w_k - x_k} = f[x_k, w_k], \tag{2.3}$$
where $w_k = x_k + \gamma f(x_k)$ and $\gamma \in \mathbb{R} \setminus \{0\}$,
$$\begin{aligned} f'(y_k) \approx P_2'(y_k) = {} & \frac{2y_k - w_k - y_k}{(x_k - w_k)(x_k - y_k)} f(x_k) + \frac{2y_k - x_k - y_k}{(w_k - x_k)(w_k - y_k)} f(w_k) \\ & + \frac{2y_k - w_k - x_k}{(y_k - w_k)(y_k - x_k)} f(y_k) \end{aligned} \tag{2.4}$$
and
$$\begin{aligned} f'(z_k) \approx P_3'(z_k) = {} & \frac{w_k y_k + w_k z_k + y_k z_k - 2(w_k + y_k + z_k)z_k + 3z_k^2}{(x_k - w_k)(x_k - y_k)(x_k - z_k)} f(x_k) \\ & + \frac{x_k y_k + x_k z_k + y_k z_k - 2(x_k + y_k + z_k)z_k + 3z_k^2}{(w_k - x_k)(w_k - y_k)(w_k - z_k)} f(w_k) \\ & + \frac{x_k w_k + x_k z_k + w_k z_k - 2(x_k + w_k + z_k)z_k + 3z_k^2}{(y_k - w_k)(y_k - x_k)(y_k - z_k)} f(y_k) \\ & + \frac{x_k w_k + x_k y_k + w_k y_k - 2(x_k + w_k + y_k)z_k + 3z_k^2}{(z_k - w_k)(z_k - x_k)(z_k - y_k)} f(z_k). \end{aligned} \tag{2.5}$$
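The following Python sketch evaluates these three derivative estimates directly from (2.3)-(2.5); the helper names are illustrative assumptions, and the coefficients are kept in the same form as the formulas above.

```python
# Sketch of the derivative estimates (2.3)-(2.5): exact derivatives of the
# Lagrange interpolants through the available nodes, at the newest point.
def p1_prime(f, xk, wk):
    # (2.3): two-point slope f[x_k, w_k]
    return (f(xk) - f(wk)) / (xk - wk)

def p2_prime(f, xk, wk, yk):
    # (2.4): derivative at y_k of the parabola through x_k, w_k, y_k
    return ((2*yk - wk - yk) / ((xk - wk)*(xk - yk)) * f(xk)
            + (2*yk - xk - yk) / ((wk - xk)*(wk - yk)) * f(wk)
            + (2*yk - wk - xk) / ((yk - wk)*(yk - xk)) * f(yk))

def p3_prime(f, xk, wk, yk, zk):
    # (2.5): derivative at z_k of the cubic through x_k, w_k, y_k, z_k
    def num(a, b, c):
        # numerator pattern shared by all four terms of (2.5)
        return a*b + a*c + b*c - 2*(a + b + c)*zk + 3*zk**2
    return (num(wk, yk, zk) / ((xk - wk)*(xk - yk)*(xk - zk)) * f(xk)
            + num(xk, yk, zk) / ((wk - xk)*(wk - yk)*(wk - zk)) * f(wk)
            + num(xk, wk, zk) / ((yk - wk)*(yk - xk)*(yk - zk)) * f(yk)
            + num(xk, wk, yk) / ((zk - wk)*(zk - xk)*(zk - yk)) * f(zk))
```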
Thus the modified versions of schemes (2.1) and (2.2) respectively become
$$y_k = x_k - \frac{f(x_k)}{P_1'(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{P_2'(y_k)}, \quad k = 0, 1, 2, \ldots \tag{2.6}$$
and
$$y_k = x_k - \frac{f(x_k)}{P_1'(x_k)}, \qquad z_k = y_k - \frac{f(y_k)}{P_2'(y_k)}, \qquad x_{k+1} = z_k - \frac{f(z_k)}{P_3'(z_k)}, \quad k = 0, 1, 2, \ldots \tag{2.7}$$
The authors have shown that the without memory methods (2.6) and (2.7) preserve the convergence orders with a reduced number of function evaluations. Their corresponding error expressions are given by
$$e_{k+1} = \frac{1}{c_1^3}(1 + \gamma c_1)^2 c_2 (c_2^2 - c_1 c_3)\, e_k^4 + O(e_k^5) \tag{2.8}$$
and
$$e_{k+1} = \frac{1}{c_1^7}(1 + \gamma c_1)^4 c_2^2 (c_2^2 - c_1 c_3)(c_2^3 - c_1 c_2 c_3 + c_1^2 c_4)\, e_k^8 + O(e_k^9), \tag{2.9}$$
respectively, where $c_i = f^{(i)}(\xi)/i!$. The above two without memory schemes are optimal in the sense of Kung and Traub. Traub first showed that the convergence of without memory methods can be increased by using information from the current and previous iterations without adding any evaluation; such methods are known as with memory methods. To get an increased order of convergence, the authors in the same paper [10]
first replaced $\gamma$ by $\gamma_k$ and then set $\gamma_k = -1/\bar{f}'(\xi)$ for $k = 1, 2, \ldots$, where $\xi$ is the exact root and $\bar{f}'(\xi)$ is an approximation of $f'(\xi)$. In addition, they used the approximation
$$\gamma_k = -\frac{1}{\bar{f}'(\xi)} = -\frac{1}{N_3'(x_k)} \tag{2.10}$$
for method (2.6) and
$$\gamma_k = -\frac{1}{\bar{f}'(\xi)} = -\frac{1}{N_4'(x_k)} \tag{2.11}$$
for method (2.7), where
$$\begin{aligned} N_3(t) = {} & f(x_k) + f[x_k, x_{k-1}](t - x_k) + f[x_k, x_{k-1}, y_{k-1}](t - x_k)(t - x_{k-1}) \\ & + f[x_k, x_{k-1}, y_{k-1}, w_{k-1}](t - x_k)(t - x_{k-1})(t - y_{k-1}) \end{aligned} \tag{2.12}$$
and
$$\begin{aligned} N_4(t) = {} & f(x_k) + f[x_k, z_{k-1}](t - x_k) + f[x_k, z_{k-1}, y_{k-1}](t - x_k)(t - z_{k-1}) \\ & + f[x_k, z_{k-1}, y_{k-1}, x_{k-1}](t - x_k)(t - z_{k-1})(t - y_{k-1}) \\ & + f[x_k, z_{k-1}, y_{k-1}, x_{k-1}, w_{k-1}](t - x_k)(t - z_{k-1})(t - y_{k-1})(t - x_{k-1}) \end{aligned} \tag{2.13}$$
are Newton's interpolatory polynomials of degree three and four, respectively. Here a single prime ($'$) denotes the first derivative, and a double prime ($''$) will later denote the second derivative. The one-parametric versions of methods (2.6) and (2.7) can then be written as
$$w_k = x_k + \gamma_k f(x_k), \qquad \gamma_{k+1} = -\frac{1}{N_3'(x_k)}, \quad k = 0, 1, 2, \ldots$$
$$y_k = x_k - \frac{f(x_k)}{P_1'(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{P_2'(y_k)} \tag{2.14}$$
and
$$w_k = x_k + \gamma_k f(x_k), \qquad \gamma_{k+1} = -\frac{1}{N_4'(x_k)}, \quad k = 0, 1, 2, \ldots$$
$$y_k = x_k - \frac{f(x_k)}{P_1'(x_k)}, \qquad z_k = y_k - \frac{f(y_k)}{P_2'(y_k)}, \qquad x_{k+1} = z_k - \frac{f(z_k)}{P_3'(z_k)}. \tag{2.15}$$
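Computationally, the update $\gamma_{k+1} = -1/N_3'(x_k)$ only needs a divided-difference table over the stored nodes: since $x_k$ is the first node of $N_3$, every product containing the factor $(t - x_k)$ vanishes at $t = x_k$ upon differentiation. A sketch, with illustrative helper names:

```python
# Divided differences and the Newton-form derivative at the first node.
def divided_differences(nodes, fvals):
    # dd[j] = f[n_0, ..., n_j], computed by the standard triangular scheme
    n = len(nodes)
    col = list(fvals)
    dd = [col[0]]
    for j in range(1, n):
        for i in range(n - j):
            col[i] = (col[i + 1] - col[i]) / (nodes[i + j] - nodes[i])
        dd.append(col[0])
    return dd

def newton_prime_at_first_node(nodes, fvals):
    # N'(n_0): only the term where (t - n_0) is differentiated survives,
    # so N'(n_0) = sum_j dd[j] * prod_{1 <= i < j} (n_0 - n_i)
    dd = divided_differences(nodes, fvals)
    deriv, prod = 0.0, 1.0
    for j in range(1, len(nodes)):
        deriv += dd[j] * prod
        prod *= nodes[0] - nodes[j]
    return deriv

# gamma update for scheme (2.14), with the node order of (2.12):
# gamma_next = -1.0 / newton_prime_at_first_node(
#     [xk, xk_prev, yk_prev, wk_prev],
#     [f(xk), f(xk_prev), f(yk_prev), f(wk_prev)])
```

For scheme (2.15) the same helper applies with the nodes of (2.13).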
The authors showed that the convergence orders of methods (2.14) and (2.15) increase from 4 to 6 and from 8 to 12, respectively. The aim of this article is to find more efficient methods using the same number of evaluations. For this purpose we introduce one more iterative parameter into the above methods; the modified with memory methods are then given by
$$w_k = x_k + \gamma_k f(x_k), \qquad \gamma_{k+1} = -\frac{1}{N_3'(x_k)}, \quad k = 0, 1, 2, \ldots$$
$$y_k = x_k - \frac{f(x_k)}{P_1'(x_k) + \alpha_k f(w_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{P_2'(y_k) + \alpha_k f(y_k)}, \tag{2.16}$$
with its error expression
$$e_{k+1} = \frac{1}{c_1^3}(1 + \gamma_k c_1)^2 (\alpha_k c_1 + c_2)\left(\alpha_k^2 c_1 + c_2^2 + c_1(2\alpha_k c_2 - c_3)\right) e_k^4 + O(e_k^5), \tag{2.17}$$
and
$$w_k = x_k + \gamma_k f(x_k), \qquad \gamma_{k+1} = -\frac{1}{N_4'(x_k)}, \quad k = 0, 1, 2, \ldots$$
$$y_k = x_k - \frac{f(x_k)}{P_1'(x_k) + \alpha_k f(w_k)}, \qquad z_k = y_k - \frac{f(y_k)}{P_2'(y_k) + \alpha_k f(y_k)}, \qquad x_{k+1} = z_k - \frac{f(z_k)}{P_3'(z_k) + \alpha_k f(z_k)}, \tag{2.18}$$
with its error expression
$$\begin{aligned} e_{k+1} = \frac{1}{c_1^7} & (1 + \gamma_k c_1)^4 (\alpha_k c_1 + c_2)^2 \left(\alpha_k^2 c_1 + c_2^2 + c_1(2\alpha_k c_2 - c_3)\right) \\ & \times \left(\alpha_k c_1^3 + c_2^3 + c_1 c_2(3\alpha_k c_2 - c_3) + c_1^3(3\alpha_k^2 c_2 - \alpha_k c_3 + c_4)\right) e_k^8 + O(e_k^9). \end{aligned} \tag{2.19}$$
Since the above error equations contain both iterative parameters $\gamma_k$ and $\alpha_k$, we should approximate these parameters in such a way that they increase the convergence order. To this end, we approximate $\gamma_k$ and $\alpha_k$ as follows.
For method (2.16),
$$\gamma_k = -\frac{1}{c_1} \approx -\frac{1}{\tilde{c}_1} = -\frac{1}{N_3'(x_k)}, \qquad \alpha_k = -\frac{c_2}{c_1} \approx -\frac{\tilde{c}_2}{\tilde{c}_1} = -\frac{N_4''(w_k)}{2 N_4'(w_k)} \tag{2.20}$$
and for method (2.18),
$$\gamma_k = -\frac{1}{c_1} \approx -\frac{1}{\tilde{c}_1} = -\frac{1}{\tilde{N}_4'(x_k)}, \qquad \alpha_k = -\frac{c_2}{c_1} \approx -\frac{\tilde{c}_2}{\tilde{c}_1} = -\frac{\tilde{N}_5''(w_k)}{2 \tilde{N}_5'(w_k)}, \tag{2.21}$$
where
$$\begin{aligned} N_3(t) = {} & N_3(t; x_k, y_{k-1}, x_{k-1}, w_{k-1}) \\ = {} & f(x_k) + f[x_k, y_{k-1}](t - x_k) + f[x_k, y_{k-1}, x_{k-1}](t - x_k)(t - y_{k-1}) \\ & + f[x_k, y_{k-1}, x_{k-1}, w_{k-1}](t - x_k)(t - y_{k-1})(t - x_{k-1}), \end{aligned}$$
$$\begin{aligned} N_4(t) = {} & N_4(t; x_k, w_k, y_{k-1}, x_{k-1}, w_{k-1}) \\ = {} & f(x_k) + f[x_k, w_k](t - x_k) + f[x_k, w_k, y_{k-1}](t - x_k)(t - w_k) \\ & + f[x_k, w_k, y_{k-1}, x_{k-1}](t - x_k)(t - w_k)(t - y_{k-1}) \\ & + f[x_k, w_k, y_{k-1}, x_{k-1}, w_{k-1}](t - x_k)(t - w_k)(t - y_{k-1})(t - x_{k-1}) \end{aligned}$$
and
$$\begin{aligned} \tilde{N}_4(t) = {} & \tilde{N}_4(t; x_k, z_{k-1}, y_{k-1}, x_{k-1}, w_{k-1}) \\ = {} & f(x_k) + f[x_k, z_{k-1}](t - x_k) + f[x_k, z_{k-1}, y_{k-1}](t - x_k)(t - z_{k-1}) \\ & + f[x_k, z_{k-1}, y_{k-1}, x_{k-1}](t - x_k)(t - z_{k-1})(t - y_{k-1}) \\ & + f[x_k, z_{k-1}, y_{k-1}, x_{k-1}, w_{k-1}](t - x_k)(t - z_{k-1})(t - y_{k-1})(t - x_{k-1}), \end{aligned}$$
$$\begin{aligned} \tilde{N}_5(t) = {} & \tilde{N}_5(t; x_k, w_k, z_{k-1}, y_{k-1}, x_{k-1}, w_{k-1}) \\ = {} & f(x_k) + f[x_k, w_k](t - x_k) + f[x_k, w_k, z_{k-1}](t - x_k)(t - w_k) \\ & + f[x_k, w_k, z_{k-1}, y_{k-1}](t - x_k)(t - w_k)(t - z_{k-1}) \\ & + f[x_k, w_k, z_{k-1}, y_{k-1}, x_{k-1}](t - x_k)(t - w_k)(t - z_{k-1})(t - y_{k-1}) \\ & + f[x_k, w_k, z_{k-1}, y_{k-1}, x_{k-1}, w_{k-1}](t - x_k)(t - w_k)(t - z_{k-1})(t - y_{k-1})(t - x_{k-1}) \end{aligned}$$
are Newton's interpolatory polynomials of degree three, four, four and five, respectively.
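Putting the pieces together, the following is a sketch of the proposed scheme (2.16) with the bi-accelerator updates (2.20). It reuses divided_differences and p2_prime from the earlier sketches; all remaining names are illustrative assumptions. The Newton interpolants are converted to monomial coefficients so that $N_4'$ and $N_4''$ can be evaluated at $w_k$, which is the second node rather than the first.

```python
# Sketch of scheme (2.16) with the updates (2.20).
def newton_poly_coeffs(nodes, fvals):
    # monomial coefficients (low degree first) of the Newton interpolant
    dd = divided_differences(nodes, fvals)
    coeffs = [0.0] * len(nodes)
    basis = [1.0]                      # coefficients of prod_{i<j} (t - n_i)
    for j, d in enumerate(dd):
        for i, b in enumerate(basis):
            coeffs[i] += d * b
        new_basis = [0.0] + basis      # multiply the basis by (t - nodes[j])
        for i, b in enumerate(basis):
            new_basis[i] -= nodes[j] * b
        basis = new_basis
    return coeffs

def polyval(coeffs, t):
    return sum(c * t**i for i, c in enumerate(coeffs))

def polyder(coeffs):
    return [i * c for i, c in enumerate(coeffs)][1:]

def bi_accelerator_two_step(f, x0, gamma0, alpha0, n_iter=4):
    xk, gk, ak = x0, gamma0, alpha0
    prev = None                        # stores (x_{k-1}, w_{k-1}, y_{k-1})
    for _ in range(n_iter):
        if prev is not None:
            x_p, w_p, y_p = prev
            # gamma_k = -1 / N'_3(x_k), with the nodes of N_3(t; ...)
            n3 = newton_poly_coeffs([xk, y_p, x_p, w_p],
                                    [f(xk), f(y_p), f(x_p), f(w_p)])
            gk = -1.0 / polyval(polyder(n3), xk)
        wk = xk + gk * f(xk)
        if prev is not None:
            # alpha_k = -N''_4(w_k) / (2 N'_4(w_k)), with the nodes of N_4(t; ...)
            n4 = newton_poly_coeffs([xk, wk, y_p, x_p, w_p],
                                    [f(xk), f(wk), f(y_p), f(x_p), f(w_p)])
            ak = (-polyval(polyder(polyder(n4)), wk)
                  / (2.0 * polyval(polyder(n4), wk)))
        p1 = (f(wk) - f(xk)) / (wk - xk)        # P'_1(x_k) = f[x_k, w_k]
        yk = xk - f(xk) / (p1 + ak * f(wk))     # first step of (2.16)
        xk_next = yk - f(yk) / (p2_prime(f, xk, wk, yk) + ak * f(yk))
        prev = (xk, wk, yk)
        xk = xk_next
    return xk
```

In double precision the extreme accuracies of Table 1 are of course not observable; a multiple-precision library would be needed to reproduce them.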
Now we denote $e_k = x_k - \xi$, $e_{k,z} = z_k - \xi$, $e_{k,y} = y_k - \xi$ and $e_{k,w} = w_k - \xi$, where $\xi$ is the exact root. Before proving the main result, we state the following two lemmas, which can be obtained by using the error expression of Newton's interpolation, in the same manner as in [4].
Lemma 2.1. If $\gamma_k = -\frac{1}{N_3'(x_k)}$ and $\alpha_k = -\frac{N_4''(w_k)}{2N_4'(w_k)}$, then the estimates
(i) $1 + \gamma_k c_1 \sim \frac{c_4}{c_1}\, e_{k-1,y}\, e_{k-1,w}\, e_{k-1}$,
(ii) $\alpha_k c_1 + c_2 \sim -c_5\, e_{k-1,y}\, e_{k-1,w}\, e_{k-1}$
hold.

Lemma 2.2. If $\gamma_k = -\frac{1}{\tilde{N}_4'(x_k)}$ and $\alpha_k = -\frac{\tilde{N}_5''(w_k)}{2\tilde{N}_5'(w_k)}$, then the estimates
(i) $1 + \gamma_k c_1 \sim -\frac{c_5}{c_1}\, e_{k-1,z}\, e_{k-1,y}\, e_{k-1,w}\, e_{k-1}$,
(ii) $\alpha_k c_1 + c_2 \sim c_6\, e_{k-1,z}\, e_{k-1,y}\, e_{k-1,w}\, e_{k-1}$
hold.
The order of convergence of the proposed methods is established in the following theorem.

Theorem 2.1. If an initial approximation $x_0$ is sufficiently close to a simple zero $\xi$ of $f(x)$, and the parameters $\gamma_k$ and $\alpha_k$ in the iterative schemes (2.16) and (2.18) are recursively calculated by the forms given in (2.20) and (2.21), respectively, then the R-orders of convergence of the with memory schemes (2.16) and (2.18) are at least seven and fourteen, respectively.
Proof. First, we assume that the R-orders of convergence of the sequences $\{x_k\}$, $\{w_k\}$, $\{y_k\}$ and $\{z_k\}$ are at least $m$, $m_1$, $m_2$ and $m_3$, respectively. Hence
$$e_{k+1} \sim A_{k,m}\, e_k^m \sim A_{k,m}\left(A_{k-1,m}\, e_{k-1}^m\right)^m \sim A_{k,m} A_{k-1,m}^m\, e_{k-1}^{m^2} \tag{2.22}$$
and
$$e_{k,w} \sim A_{k,m_1}\, e_k^{m_1} \sim A_{k,m_1}\left(A_{k-1,m}\, e_{k-1}^m\right)^{m_1} \sim A_{k,m_1} A_{k-1,m}^{m_1}\, e_{k-1}^{m m_1}. \tag{2.23}$$
Similarly,
$$e_{k,y} \sim A_{k,m_2} A_{k-1,m}^{m_2}\, e_{k-1}^{m m_2}, \tag{2.24}$$
$$e_{k,z} \sim A_{k,m_3} A_{k-1,m}^{m_3}\, e_{k-1}^{m m_3}. \tag{2.25}$$
Now we prove the result in two parts: first for method (2.16), then for method (2.18).
Modified method I. For method (2.16), it can be derived that
$$e_{k,w} \sim (1 + \gamma_k c_1)\, e_k, \tag{2.26}$$
$$e_{k,y} \sim L_1 (1 + \gamma_k c_1)(\alpha_k c_1 + c_2)\, e_k^2, \quad \text{where } L_1 = \frac{1}{c_1}, \tag{2.27}$$
$$e_{k+1} \sim L_2 (1 + \gamma_k c_1)^2 (\alpha_k c_1 + c_2)\, e_k^4, \quad \text{where } L_2 = \frac{1}{c_1^3}\left(\alpha_k^2 c_1 + c_2^2 + c_1(2\alpha_k c_2 - c_3)\right). \tag{2.28}$$
Using the results of Lemma 2.1 in equations (2.26), (2.27) and (2.28), we have
$$e_{k,w} \sim \frac{c_4}{c_1}\, A_{k-1,m_2} A_{k-1,m_1} A_{k-1,m}\, e_{k-1}^{m_2 + m_1 + m + 1}, \tag{2.29}$$
$$e_{k,y} \sim -\frac{c_4 c_5}{c_1}\, L_1 A_{k-1,m_2}^2 A_{k-1,m_1}^2 A_{k-1,m}^2\, e_{k-1}^{2m_2 + 2m_1 + 2m + 2} \tag{2.30}$$
and
$$e_{k+1} \sim -\frac{c_4^2 c_5}{c_1^2}\, L_2 A_{k-1,m_2}^3 A_{k-1,m_1}^3 A_{k-1,m}^4\, e_{k-1}^{3m_2 + 3m_1 + 4m + 3}. \tag{2.31}$$
Now, comparing the equal powers of $e_{k-1}$ in the three pairs of equations (2.29)-(2.23), (2.30)-(2.24) and (2.31)-(2.22), we get the nonlinear system
$$m m_1 - m_2 - m_1 - m - 1 = 0,$$
$$m m_2 - 2m_2 - 2m_1 - 2m - 2 = 0,$$
$$m^2 - 3m_2 - 3m_1 - 4m - 3 = 0.$$
Solving these equations, we get $m = 7$, $m_1 = 2$, $m_2 = 4$, which confirms the convergence order of method (2.16). This proves the first part.
Modified method II. For method (2.18), it can be derived that
$$e_{k,w} \sim (1 + \gamma_k c_1)\, e_k, \tag{2.32}$$
$$e_{k,y} \sim L_1 (1 + \gamma_k c_1)(\alpha_k c_1 + c_2)\, e_k^2, \quad \text{where } L_1 = \frac{1}{c_1}, \tag{2.33}$$
$$e_{k,z} \sim L_2 (1 + \gamma_k c_1)^2 (\alpha_k c_1 + c_2)\, e_k^4, \quad \text{where } L_2 = \frac{1}{c_1^3}\left(\alpha_k^2 c_1 + c_2^2 + c_1(2\alpha_k c_2 - c_3)\right), \tag{2.34}$$
$$e_{k+1} \sim O_3 (1 + \gamma_k c_1)^4 (\alpha_k c_1 + c_2)^2\, e_k^8, \tag{2.35}$$
where $O_3 = \left(\alpha_k^2 c_1 + c_2^2 + c_1(2\alpha_k c_2 - c_3)\right) \times \left(\alpha_k c_1^3 + c_2^3 + c_1 c_2(3\alpha_k c_2 - c_3) + c_1^3(3\alpha_k^2 c_2 - \alpha_k c_3 + c_4)\right)$.
Now, using the results of Lemma 2.2 in equations (2.32), (2.33), (2.34) and (2.35), we have
$$e_{k,w} \sim -\frac{c_5}{c_1}\, A_{k-1,m_3} A_{k-1,m_2} A_{k-1,m_1} A_{k-1,m}\, e_{k-1}^{m_3 + m_2 + m_1 + m + 1}, \tag{2.36}$$
$$e_{k,y} \sim -\frac{c_5 c_6}{c_1}\, L_1 A_{k-1,m_3}^2 A_{k-1,m_2}^2 A_{k-1,m_1}^2 A_{k-1,m}^2\, e_{k-1}^{2m_3 + 2m_2 + 2m_1 + 2m + 2}, \tag{2.37}$$
$$e_{k,z} \sim \frac{c_5^2 c_6}{c_1^2}\, L_2 A_{k-1,m_3}^3 A_{k-1,m_2}^3 A_{k-1,m_1}^3 A_{k-1,m}^4\, e_{k-1}^{3m_3 + 3m_2 + 3m_1 + 4m + 3} \tag{2.38}$$
and
$$e_{k+1} \sim \frac{c_5^4 c_6^2}{c_1^4}\, O_3 A_{k-1,m_3}^6 A_{k-1,m_2}^6 A_{k-1,m_1}^6 A_{k-1,m}^8\, e_{k-1}^{6m_3 + 6m_2 + 6m_1 + 8m + 6}. \tag{2.39}$$
Comparing the equal powers of $e_{k-1}$ in the four pairs of equations (2.36)-(2.23), (2.37)-(2.24), (2.38)-(2.25) and (2.39)-(2.22), we get the nonlinear system
$$m m_1 - m_3 - m_2 - m_1 - m - 1 = 0,$$
$$m m_2 - 2m_3 - 2m_2 - 2m_1 - 2m - 2 = 0,$$
$$m m_3 - 3m_3 - 3m_2 - 3m_1 - 4m - 3 = 0,$$
$$m^2 - 6m_3 - 6m_2 - 6m_1 - 8m - 6 = 0.$$
Solving these equations, we get $m = 14$, $m_1 = 2$, $m_2 = 4$, $m_3 = 7$, and thus the proof is completed.
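Both exponent-matching systems are easy to verify symbolically. A quick sketch using sympy (the variable names mirror the proof):

```python
# Symbolic check of the two nonlinear systems obtained in the proof.
from sympy import symbols, solve

m, m1, m2, m3 = symbols('m m1 m2 m3', positive=True)

# Method (2.16): from matching (2.29)-(2.23), (2.30)-(2.24), (2.31)-(2.22)
sys1 = [m*m1 - m2 - m1 - m - 1,
        m*m2 - 2*m2 - 2*m1 - 2*m - 2,
        m**2 - 3*m2 - 3*m1 - 4*m - 3]
print(solve(sys1, [m, m1, m2]))      # -> m = 7, m1 = 2, m2 = 4

# Method (2.18): from matching (2.36)-(2.39) against (2.22)-(2.25)
sys2 = [m*m1 - m3 - m2 - m1 - m - 1,
        m*m2 - 2*m3 - 2*m2 - 2*m1 - 2*m - 2,
        m*m3 - 3*m3 - 3*m2 - 3*m1 - 4*m - 3,
        m**2 - 6*m3 - 6*m2 - 6*m1 - 8*m - 6]
print(solve(sys2, [m, m1, m2, m3]))  # -> m = 14, m1 = 2, m2 = 4, m3 = 7
```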
Note 1. The efficiency index of the proposed method (2.16) along with (2.20) is $7^{1/3} = 1.9129$, which is more than the $6^{1/3} = 1.8171$ of method (2.14).

Note 2. The efficiency index of the proposed method (2.18) along with (2.21) is $14^{1/4} = 1.9343$, which is more than the $12^{1/4} = 1.8612$ of method (2.15).
3. Numerical Examples and Conclusion
In this section the proposed derivative-free methods are applied to solve smooth as well as non-smooth nonlinear equations and are compared with the existing with memory methods. Nowadays, high-order methods are important because numerical applications use high precision in their computations; for this reason the numerical tests have been carried out using variable precision arithmetic in MATHEMATICA 8 with 700 significant digits. The computational order of convergence (COC) is defined by [13], [14] as
$$\mathrm{COC} = \frac{\ln\left(|f(x_k)/f(x_{k-1})|\right)}{\ln\left(|f(x_{k-1})/f(x_{k-2})|\right)}.$$
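As a sketch, the COC can be computed from three consecutive residuals; at the accuracies reported in Table 1 this must be done in multi-precision arithmetic, since the residuals underflow in double precision.

```python
import math

def coc(fx_km2, fx_km1, fx_k):
    # COC from |f(x_{k-2})|, |f(x_{k-1})|, |f(x_k)|; for Table-1 accuracy,
    # replace math.log with a multi-precision logarithm (e.g. mpmath.log).
    return math.log(abs(fx_k / fx_km1)) / math.log(abs(fx_km1 / fx_km2))
```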
To test the performance of the new methods, consider the following three nonlinear functions (taken from [10] and [6]):

1. $f_1(x) = \sin(\pi x)\, e^{x^2 + x\cos(x) - 1} + x \log(x \sin(x) + 1)$,
2. $f_2(x) = e^{x^3 - x} - \cos(x^2 - 1) + x^3 + 1$,
3. $f_3(x) = \begin{cases} 10(x^4 + x), & x < 0, \\ -10(x^3 + x), & x \geq 0. \end{cases}$
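A sketch of these test functions in Python, using mpmath so that the very small residuals in Table 1 remain representable (mirroring the paper's 700-digit setting):

```python
import mpmath as mp
mp.mp.dps = 700                 # 700 significant digits, as in the paper

def f1(x):
    return mp.sin(mp.pi*x) * mp.exp(x**2 + x*mp.cos(x) - 1) \
           + x * mp.log(x*mp.sin(x) + 1)

def f2(x):
    return mp.exp(x**3 - x) - mp.cos(x**2 - 1) + x**3 + 1

def f3(x):
    # non-smooth: different polynomial pieces on each side of x = 0
    return 10*(x**4 + x) if x < 0 else -10*(x**3 + x)
```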
The absolute errors for the first three iterations are given in Table 1; there, $a\mathrm{e}{\pm b}$ stands for $a \times 10^{\pm b}$. Note that a large number of three-step derivative-free (with and without memory) methods are available in the literature, but methods that have been tested on non-smooth functions are rare, which clearly shows the significance of this article.
Table 1. Comparison of the absolute error in the first, second and third iterations.

Function                 Method                         |x1 - ξ|      |x2 - ξ|      |x3 - ξ|       COC
f1, ξ = 0, x0 = 0.6      (2.14), γ0 = -1                0.67423e-2    0.72063e-12   0.23027e-72    6.0653
                         (2.15), γ0 = -1                0.23448e-3    0.56614e-45   0.86233e-543   11.962
                         (2.16), γ0 = -1, α0 = -0.1     0.29657e-2    0.28414e-14   0.99885e-101   7.1926
                         (2.18), γ0 = -1, α0 = -0.1     0.98901e-4    0.14013e-44   0.41159e-628   14.285
f2, ξ = -1, x0 = -1.65   (2.14), γ0 = 1                 0.10743e-0    0.40447e-5    0.24898e-34    6.6381
                         (2.15), γ0 = 1                 0.23223e-1    0.62952e-20   0.77469e-241   11.901
                         (2.16), γ0 = 1, α0 = 0.65      0.21296e-0    0.50014e-4    0.73645e-32    7.7919
                         (2.18), γ0 = 1, α0 = 0.65      0.67736e-1    0.22204e-15   0.55387e-214   13.725
f3, ξ = -1, x0 = -0.8    (2.14), γ0 = 1                 0.13702e-0    0.85922e-4    0.99097e-23    5.7102
                         (2.15), γ0 = 1                 0.35840e-1    0.25132e-14   0.61401e-172   11.954
                         (2.16), γ0 = 1, α0 = -0.3      0.19721e-0    0.19721e-4    0.33883e-29    6.6720
                         (2.18), γ0 = 1, α0 = -0.3      0.68741e-1    0.33307e-15   0.26137e-215   13.922
The effectiveness of the newly proposed derivative-free with memory methods is confirmed by comparing them with the existing with memory family. The numerical results shown in Table 1 are in concordance with the theory developed here. From the theoretical results we can conclude that the order of convergence of the with memory family can be made higher than that of the existing with memory family by imposing one more self-accelerating parameter without any additional calculations, and that the computational efficiency of the presented with memory methods is high. The R-orders of convergence are increased from 6 to 7 and from 12 to 14, in accordance with the quality of the accelerating technique proposed in this paper. We can see that the self-accelerating parameters play a key role in increasing the order of convergence of an iterative method.
4. Acknowledgment
The author is grateful to the editor and the reviewers for their significant suggestions for the improvement of the paper.
References
[1] S. Artidiello, F. Chicharro, A. Cordero and J. R. Torregrosa: Local convergence and dynamical analysis of a new family of optimal fourth-order iterative
methods, International Journal of Computer Mathematics, 90 (10) (2013),
2049-2060.
[2] A. Cordero and J. R. Torregrosa: A class of Steffensen type methods with optimal order of convergence, Applied Mathematics and Computation 217 (2011),
7653-7659.
[3] C. Chun and B. Neta: An analysis of a new family of eighth-order optimal
methods, Applied Mathematics and Computation 245 (2014), 86-107.
[4] J. Dzunic: On efficient two-parameter methods for solving nonlinear equations,
Numer. Algor. 63 (2013), 549-569.
[5] M. A. Hafiz: Solving Nonlinear Equations Using Steffensen-Type Methods
With Optimal Order of Convergence, Palestine Journal of Mathematics, Vol.
3(1) (2014) , 113-119.
[6] M. A. Hafiz and Mohamed S. M. Bahgat: Solving nonsmooth equations using
family of derivative-free optimal methods, Journal of the Egyptian Mathematical Society 21 (2013), 38-43.
[7] J. P. Jaiswal: Some class of third-and fourth-order iterative methods for solving
nonlinear equations, Journal of Applied Mathematics, Volume 2014, Article ID
817656, 17 pages.
[8] Y. I. Kim: A triparametric family of three-step optimal eighth-order methods
for solving nonlinear equations, International Journal of Computer Mathematics, 89 (8) (2012), 1051-1059.
[9] H. T. Kung and J. F. Traub: Optimal order of one-point and multipoint
iteration, J. ACM 21 (1974), 643-651.
[10] T. Lotfi, F. Soleymani, Z. Noori, A. Kilicman and F. K. Haghani: Efficient
iterative methods with and without memory possessing high efficiency indices,
Discrete Dynamics in Nature and Society, Volume 2014, Article ID 912796, 9
pages.
[11] J. M. Ortega and W. C. Rheinboldt: Iterative solution of nonlinear equations
in several variables, Academic Press, New York (1970).
[12] A. M. Ostrowski: Solution of equations and systems of equations, Academic Press, New York (1960).
[13] M. S. Petkovic: Remarks on "On a general class of multipoint root-finding methods of high computational efficiency", SIAM J. Numer. Anal. 49 (2011), 1317-1319.
[14] M. S. Petkovic, B. Neta, L. D. Petkovic and J. Dzunic: Multipoint methods
for solving nonlinear equations, Elsevier (2012).
[15] J. R. Sharma and H. Arora: An efficient family of weighted-Newton methods with optimal eighth order convergence, Applied Mathematics Letters 29
(2014), 1-6.
[16] J. R. Sharma and R. K. Guha: Second-derivative free methods of third and
fourth order for solving nonlinear equations, International Journal of Computer
Mathematics, 88 (1) (2011), 163-170.
[17] A. Singh and J. P. Jaiswal: An efficient family of optimal eighth-order iterative methods for solving nonlinear equations and its dynamics, Journal of
Mathematics, Volume 2014, Article ID 569719, 14 pages.
[18] F. Soleymani and S. Shateyi: Some iterative methods free from derivatives and their basins of attraction for nonlinear equations, Discrete Dynamics in Nature and Society, Volume 2013, Article ID 301718, 10 pages.
[19] J. F. Traub: Iterative methods for the solution of equations, Prentice-Hall,
Englewood Cliffs, New Jersey (1964).
Jai Prakash Jaiswal
Department of Mathematics,
Maulana Azad National Institute of Technology,
Bhopal, M.P., India-462051.
E-mail: [email protected].