
On new sixth and seventh order iterative methods for solving non-linear equations using homotopy perturbation technique

Abstract

Objectives

This paper proposes three iterative methods, of order three, six and seven respectively, for solving non-linear equations using the modified homotopy perturbation technique together with a coupled system of equations. The paper also analyzes the convergence of the proposed iterative methods.

Results

Several numerical examples are presented to illustrate and validate the proposed methods. An implementation of the proposed methods in Maple is discussed with sample computations.

Introduction

Non-linear equations of the type \(f(x)=0\) arise in various branches of pure and applied science, engineering and computing. In recent years, many scientists and engineers have focused on solving non-linear equations numerically as well as analytically. The literature offers several iterative methods/algorithms derived from different techniques such as homotopy, interpolation, Taylor’s series, quadrature formulas and decomposition, as well as various modifications and improvements of existing methods and different hybrid iterative methods, see, for example [1, 4,5,6,7, 9,10,11,12,13,14,15,16, 28,29,30,31,32, 36,37,38]. In general, the roots of non-linear or transcendental equations cannot be expressed in closed form or computed analytically. Root-finding algorithms allow us to compute approximations to the roots, expressed either as small isolating intervals or as floating-point numbers. In this paper, we use the modified homotopy perturbation technique (HPT) to construct iterative methods for solving a given non-linear equation with order of convergence greater than or equal to three. The given non-linear equation is expressed as an equivalent coupled system of equations with the help of Taylor’s series and the technique of He [4]. This enables us to express the given non-linear equation as a sum of a linear part and a non-linear part. The Maple implementation of the proposed algorithms is also discussed; various Maple implementations for differential and transcendental equations are available in the literature, see, for example [17,18,19,20,21,22,23,24,25,26,27].

The rest of the paper is organized as follows. The next section recalls the preliminary concepts related to the topic. We then present the methodology and the steps involved in the proposed algorithms, followed by the analysis of convergence showing that the orders of the proposed methods are greater than or equal to three. Several numerical examples are then presented to illustrate and validate the proposed methods/algorithms, and finally the Maple implementation of the proposed algorithms is presented with sample computations.

Preliminaries

In this paper, we consider the non-linear equation of the type

$$\begin{aligned} f(x) = 0. \end{aligned}$$
(1)

Iterative techniques are a common approach widely used in numerical algorithms/methods. The hope is that an iteration of the general form \(x_{n+1}=g(x_n)\) will eventually converge to the true solution \(\alpha\) of problem (1) as \(n\rightarrow \infty\). The concerns are whether this iteration converges and, if so, its rate of convergence. We use the following expression to describe how quickly the error \(e_n=\alpha -x_n\) converges to zero. Let \(e_n=\alpha -x_n\) and \(e_{n+1} = \alpha -x_{n+1}\), \(n \ge 0\), be the errors at the n-th and \((n+1)\)-th iterations respectively. If there exist two positive constants \(\mu\) and p such that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{\vert e_{n+1}\vert }{\vert e_n \vert ^p}=\lim \limits _{n\rightarrow \infty }\frac{\vert \alpha -x_{n+1}\vert }{\vert \alpha -x_n\vert ^p}=\mu , \end{aligned}$$
(2)

then the sequence is said to converge to \(\alpha\) with order p. Here \(p\ge 1\) is called the order of convergence and the constant \(\mu\) is the rate of convergence or asymptotic error constant. This expression may be better understood when interpreted as \(\vert e_{n+1}\vert \approx \mu \vert e_n\vert ^p\) for large n. Clearly, the larger p and the smaller \(\mu\), the more quickly the sequence converges.
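
For a quick numerical check of this definition, the order p can be estimated from three consecutive errors via \(p \approx \ln (\vert e_{n+1}\vert /\vert e_n\vert )/\ln (\vert e_n\vert /\vert e_{n-1}\vert )\). The following short Maple sketch is our own illustration (the helper name OrderEstimate and the Newton iteration used as test data are not part of the article):

# Estimate the order of convergence p from three consecutive errors e0, e1, e2
OrderEstimate := proc(e0, e1, e2)
    evalf( ln(abs(e2)/abs(e1)) / ln(abs(e1)/abs(e0)) );
end proc:

# Example: Newton's method for f(x) = 30 - x^2, starting at x = 3.5
alpha := sqrt(30.0):
x0 := 3.5:
x1 := x0 + (30 - x0^2)/(2*x0):
x2 := x1 + (30 - x1^2)/(2*x1):
x3 := x2 + (30 - x2^2)/(2*x2):
OrderEstimate(alpha - x1, alpha - x2, alpha - x3);   # approximately 2, as expected for Newton's method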

Theorem 1

[3] Suppose that \(\phi \in C^p[a,b]\) and that \(\phi (\alpha )=\alpha\). If \(\phi ^{(k)}(\alpha )=0\) for \(k=1, 2, \ldots , p-1\) and \(\phi ^{(p)}(\alpha ) \ne 0\), then the sequence \(\{x_n\}\) generated by \(x_{n+1}=\phi (x_n)\) converges to \(\alpha\) with order p.

This paper focuses on developing iterative methods/algorithms with orders of convergence three, six and seven respectively. The following section presents the proposed methods using Taylor’s series and the modified HPT.

Main text

In this section, we present the new iterative methods and their orders of convergence, together with numerical examples, the Maple implementation and sample computations using the Maple computer algebra system.

New iterative methods

We assume that \(\alpha\) is an exact root of equation (1) and let a be an initial approximation (sufficiently close) to \(\alpha\). Using Taylor’s series expansion, we can rewrite the non-linear equation (1) as the coupled system

$$\begin{aligned} f(x)= f(a) + (x-a)f'(a) + \frac{(x-a)^2}{2} f''(a) + G(x) = 0~~\text {or} \end{aligned}$$
(3)
$$\begin{aligned} G(x)= f(x) - f(a) - (x-a)f'(a) - \frac{(x-a)^2}{2} f''(a). \end{aligned}$$
(4)

We have, from Newton’s method, that

$$\begin{aligned} {\begin{matrix}x=a-\frac{f(a)}{f'(a)} \implies x-a = -\frac{f(a)}{f'(a)} \implies (x-a)^2 = \left( -\frac{f(a)}{f'(a)}\right) ^2. \end{matrix}} \end{aligned}$$
(5)

From (4) and (5), we have

$$\begin{aligned} G(x) = f(x) - f(a) - (x-a)f'(a) - \frac{1}{2} \left( -\frac{f(a)}{f'(a)}\right) ^2 f''(a). \end{aligned}$$
(6)

We can write (6) in the following form

$$\begin{aligned} x = a- \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3} - \frac{G(x)}{f'(a)}. \end{aligned}$$

It can be expressed in the form of

$$\begin{aligned} x=c+T(x), \end{aligned}$$
(7)

where

$$\begin{aligned}c = a- \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3}, \end{aligned}$$
(8)
$$\begin{aligned}T(x) = - \frac{G(x)}{f'(a)}. \end{aligned}$$
(9)

Here T(x) is a non-linear operator. It is clear, from relation (4), that

$$\begin{aligned} G(x_0) = f(x_0). \end{aligned}$$
(10)

Note that equation (10) will play an important role in the derivation of the iterative methods, see for example [2]. We use the homotopy perturbation technique to develop the proposed iterative algorithms for solving the given non-linear equation (1). Using the HPT, we can construct a homotopy \(H(\upsilon ,p,m) : \mathbb {R} \times [0,1] \times \mathbb {R} \rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} H(\upsilon ,p,m) = \upsilon -c-pT(\upsilon ) - p(1-p)m=0, \end{aligned}$$
(11)

where \(p\in [0,1]\) is the embedding parameter and \(m \in \mathbb {R}\) is an unknown number to be determined. Clearly, from (11), we have

$$\begin{aligned}H(\upsilon ,0,m) = \upsilon -c=0, ~~\text {and}\\H(\upsilon ,1,m) = \upsilon -c-T(\upsilon )=0. \end{aligned}$$

Hence, as the parameter p increases monotonically from 0 to 1, the trivial problem \(\upsilon -c=0\) deforms continuously into the original problem \(\upsilon -c-T(\upsilon )=0\). The solution of equation (11) can be expressed as a power series in p

$$\begin{aligned} \upsilon = \sum _{i=0}^{\infty } v_ip^i. \end{aligned}$$
(12)

Now, letting \(p \rightarrow 1\) and writing \(x_i\) for the coefficients \(v_i\), the approximate solution of (1) is

$$\begin{aligned} x=\lim \limits _{p \rightarrow 1} \upsilon = \sum _{i=0}^{\infty } x_i. \end{aligned}$$
(13)

One can express equation (11) as follows, by expanding \(T(\upsilon )\) in a Taylor’s series around \(x_0\),

$$\begin{aligned} \upsilon -c-p \left[ T(x_0) + (\upsilon -x_0)T'(x_0) + \frac{(\upsilon -x_0)^2}{2} T''(x_0) + \cdots \right] - p(1-p)m=0. \end{aligned}$$
(14)

Substituting (12) into (14), we get

$$\begin{aligned} {\begin{matrix} {} \sum _{i=0}^{\infty } v_ip^i-c - p(1-p)m \\ {}\quad - p \left[ T(x_0) + \left( \sum _{i=0}^{\infty } v_ip^i-x_0 \right) T'(x_0) + \left( \sum _{i=0}^{\infty } v_ip^i-x_0 \right) ^2 \frac{T''(x_0)}{2} + \cdots \right] = 0. \end{matrix}} \end{aligned}$$
(15)

By comparing the coefficients of powers of p, we get

$$\begin{aligned}p^0: x_0 - c=0 \end{aligned}$$
(16)
$$\begin{aligned}p^1: x_1 - T(x_0) - m =0 \end{aligned}$$
(17)
$$\begin{aligned}p^2: x_2 - x_1T'(x_0) + m = 0 \end{aligned}$$
(18)
$$\begin{aligned}p^3: x_3 - x_2T'(x_0) - \frac{1}{2} x_1^2T''(x_0)=0 \nonumber \\\quad \vdots \end{aligned}$$
(19)

From (17), we have \(x_1 = T(x_0) + m\). To obtain the value of m, assume \(x_2=0\). Then (18) gives \(m = x_1T'(x_0) = (T(x_0)+m)T'(x_0)\), and solving for m yields

$$\begin{aligned} m = \frac{T(x_0)T'(x_0)}{1-T'(x_0)}. \end{aligned}$$
(20)

Now, \(x_0,x_1,x_2,x_3,\ldots\) are obtained as follows. From (16), we have

$$\begin{aligned} x_0 = c \implies x_0 = a- \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3}. \end{aligned}$$
(21)

From (17) and (20), we have

$$\begin{aligned} x_1 = T(x_0) + m = \frac{T(x_0)}{1-T'(x_0)}. \end{aligned}$$
(22)

From the assumption \(x_2=0\) and from (19), we get

$$\begin{aligned} x_3 = x_2T'(x_0) + \frac{1}{2} x_1^2T''(x_0) = \frac{T^2(x_0)T''(x_0)}{2(1-T'(x_0))^2}. \end{aligned}$$
(23)

From (6), (10) and (9), we have

$$\begin{aligned}T(x_0) = - \frac{G(x_0)}{f'(a)} = -\frac{f(x_0)}{f'(a)}, \end{aligned}$$
(24)
$$\begin{aligned}T'(x_0) = -\frac{G'(x_0)}{f'(a)} = -\frac{f'(x_0)-f'(a)}{f'(a)} = 1-\frac{f'(x_0)}{f'(a)}, \end{aligned}$$
(25)
$$\begin{aligned}T''(x_0) = -\frac{G''(x_0)}{f'(a)} = -\frac{f''(x_0)}{f'(a)}. \end{aligned}$$
(26)

The approximate solution is obtained as

$$\begin{aligned} x = \sum _{i=0}^{\infty } x_i = x_0 + x_1 + x_2 + \cdots . \end{aligned}$$
(27)

This formulation allows us to form the following iterative methods.

Algorithm 1

For \(i=0\), we have

$$\begin{aligned} x \approx x_0 = a- \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3}. \end{aligned}$$

Hence, for a given \(x_0\), we have the following iterative formula to find the approximate solution \(x_{n+1}\).

$$\begin{aligned} x_{n+1} = x_n- \frac{f(x_n)}{f'(x_n)}-\frac{(f(x_n))^2f''(x_n)}{2(f'(x_n))^3}. \end{aligned}$$
(28)

Algorithm 2

For \(i=1\), we have

$$\begin{aligned} x \approx x_0 + x_1= a - \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3} + \frac{T(x_0)}{1-T'(x_0)} \\~ = a - \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3} - \frac{f(x_0)}{f'(x_0)} \end{aligned}$$

Hence, for a given \(x_0\), we have the following iterative schemes to find the approximate solution \(x_{n+1}\).

$$\begin{aligned} {\begin{matrix} y_n {}= x_n- \frac{f(x_n)}{f'(x_n)}-\frac{(f(x_n))^2f''(x_n)}{2(f'(x_n))^3}, \\ x_{n+1} {}= x_n - \frac{f(x_n)}{f'(x_n)}-\frac{(f(x_n))^2f''(x_n)}{2(f'(x_n))^3} - \frac{f(y_n)}{f'(y_n)}. \end{matrix}} \end{aligned}$$
(29)

Note: since \(x_2=0\), formula (29) also corresponds to \(i=2\); that is, \(x \approx x_0 + x_1 = x_0 + x_1 + x_2\).

Algorithm 3

For \(i=3\), we have

$$\begin{aligned} x \approx~ x_0 + x_1 + x_2 + x_3 \\~ = a - \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3} + \frac{T(x_0)}{1-T'(x_0)} + \frac{T^2(x_0)T''(x_0)}{2(1-T'(x_0))^2}\\~ = a - \frac{f(a)}{f'(a)}-\frac{(f(a))^2f''(a)}{2(f'(a))^3} - \frac{f(x_0)}{f'(x_0)} - \frac{(f(x_0))^2f''(x_0)}{2f'(a)(f'(x_0))^2} \end{aligned}$$

Hence, for a given \(x_0\), we have the following iterative formula to find the approximate solution \(x_{n+1}\).

$$\begin{aligned} {\begin{matrix} y_n {}= x_n- \frac{f(x_n)}{f'(x_n)}-\frac{(f(x_n))^2f''(x_n)}{2(f'(x_n))^3}, \\ x_{n+1} {}= x_n - \frac{f(x_n)}{f'(x_n)}-\frac{(f(x_n))^2f''(x_n)}{2(f'(x_n))^3} - \frac{f(y_n)}{f'(y_n)} - \frac{(f(y_n))^2f''(y_n)}{2f'(x_n)(f'(y_n))^2}. \end{matrix}} \end{aligned}$$
(30)

Order of convergence

In this section, we show, in the following theorems, that the orders of convergence of Algorithms 1, 2 and 3 are three, six and seven respectively. Let \(I \subset \mathbb {R}\) be an open interval. To prove this, we follow the proofs of [9, Theorem 5, Theorem 6].

Theorem 2

Let \(f:I \rightarrow \mathbb {R}\). Suppose \(\alpha \in I\) is a simple root of (1) and \(\theta\) is a sufficiently small neighborhood of \(\alpha\). Let \(f''(x)\) exist and \(f'(x) \ne 0\) in \(\theta\). Then the iterative formula (28) given in Algorithm 1 produces a sequence of iterations \(\{x_n:n=0,1,2,\ldots \}\) with order of convergence three.

Proof

Let

$$\begin{aligned} R(x) = x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3}. \end{aligned}$$

Since \(\alpha\) is a root of f(x), we have \(f(\alpha ) = 0\). One can compute that

$$\begin{aligned}R(\alpha ) = \alpha , \\R'(\alpha ) = 0, \\R''(\alpha ) = 0, \\R'''(\alpha ) \ne 0. \end{aligned}$$

Hence, by Theorem 1, Algorithm 1 has third-order convergence. \(\square\)

One can also verify the order of convergence of Algorithm 1, as in the following example.

Example 1

Consider the following equation, which has a root \(\alpha =\sqrt{30}\). We show, as discussed in the proof of Theorem 2, that Algorithm 1 has third-order convergence.

$$\begin{aligned} ~ f(x) = 30-x^2. \end{aligned}$$
(31)

Following Theorem 2, we have

$$\begin{aligned} R(x) =~ x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \\ =~ \frac{3x^4+180x^2-900}{8x^3}, \\ R'(x) =~ \frac{3 (x^2-30)^2}{8x^4}, \\ R''(x) =~ \frac{45 (x^2-30)}{x^5}, \\ R'''(x) =~ -\frac{135 (x^2-50)}{x^6}. \end{aligned}$$

Now

$$\begin{aligned} R(\alpha ) =~ \sqrt{30} = \alpha , \\ R'(\alpha ) =~ 0, \\ R''(\alpha ) =~ 0, \\ R'''(\alpha ) =~ \frac{1}{10} \ne 0. \end{aligned}$$

Hence, by Theorem 2, Algorithm 1 has third-order convergence.
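
The derivative computations above can also be reproduced symbolically. The following Maple lines are our own illustrative check (not part of the original article) for f(x) = 30 - x^2 and the iteration function R of Algorithm 1:

f := x -> 30 - x^2:
R := x -> x - f(x)/D(f)(x) - f(x)^2*(D@@2)(f)(x)/(2*D(f)(x)^3):   # iteration function of Algorithm 1, formula (28)
alpha := sqrt(30):
simplify(R(alpha) - alpha);                                       # 0, so R(alpha) = alpha
seq(simplify(eval(diff(R(x), x$k), x = alpha)), k = 1 .. 3);      # 0, 0, 1/10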

Theorem 3

Let \(f:I \rightarrow \mathbb {R}\). Suppose \(\alpha \in I\) is a simple root of (1) and \(\theta\) is a sufficiently small neighborhood of \(\alpha\). Let \(f''(x)\) exist and \(f'(x) \ne 0\) in \(\theta\). Then the iterative formula (29) given in Algorithm 2 produces a sequence of iterations \(\{x_n:n=0,1,2,\ldots \}\) with order of convergence six.

Proof

Let

$$\begin{aligned} R(x) = x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} - \frac{f\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) }{f'\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) }. \end{aligned}$$

Since \(\alpha\) is a root of f(x), we have \(f(\alpha ) = 0\). One can compute that

$$\begin{aligned}R(\alpha ) = \alpha , \\R'(\alpha ) = 0, \\R''(\alpha ) = 0, \\R^{(3)}(\alpha ) = 0, \\R^{(4)}(\alpha ) = 0, \\R^{(5)}(\alpha ) = 0, \\R^{(6)}(\alpha ) \ne 0. \end{aligned}$$

Hence, by Theorem 1, Algorithm 2 has sixth-order convergence. \(\square\)

We can also verify the order of convergence of Algorithm 2 as in the following example.

Example 2

Consider the equation (31). Using Theorem 3, similar to Example 1, we have

$$\begin{aligned} R(x) =~ x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} - \frac{f\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) }{f'\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) } \\ =~ \frac{3x^8+1000x^6+9000x^4-108000x^2+270000}{16x^3(x^4+60x^2-300)}, \\ R'(x) =~ \frac{(3x^8-280x^6+9000x^4-108000x^2+270000)(x^2-30)^2}{16x^4(x^4+60x^2-300)^2}, \\ R''(x) =~ \left( \frac{5(x^2-30)}{2x^5(x^4+60x^2-300)^3}\right) \\~~~~~~~(41x^{12}-3180x^{10}+60300x^8+684000x^6 \\~~~~~~-26730000x^4+145800000x^2-243000000), \\ R^{(3)}(x) =~ \left( -\frac{15}{2x^6(x^4+60x^2-300)^4}\right) \\~~~~~~(41x^{18}-9810x^{16}+488400x^{14}-3348000x^{12} \\~~~~~~-162900000x^{10}-955800000x^8+104868000000 x^6 \\~~~~~~-884520000000 x^4+2988900000000 x^2-3645000000000), \\ R^{(4)}(x) =~ \left( \frac{30}{x^7(x^4+60x^2-300)^5}\right) \\~~~~~~ (41x^{22}-17175x^{20}+1308000x^{18}-15727500x^{16} \\~~~~~~-296100000x^{14}-14625900000x^{12}+284580000000x^{10} \\~~~~~~+8460450000000x^8-117733500000000x^6 \\~~~~~~+650632500000000x^4-1662120000000000x^2 \\~~~~~~+1640250000000000), \\ R^{(5)}(x) =~\left( -\frac{150}{x^8(x^4+60x^2-300)^6}\right) \\~~~~~~(41x^{26}-26505x^{24}+3009600x^{22}-63693000x^{20} \\~~~~~~-95310000x^{18}-63030150000x^{16}-34344000000x^{14} \\~~~~~~+51666660000000x^{12}+470472300000000x^{10} \\~~~~~~-12040285500000000x^8+100252080000000000x^6 \\~~~~~~-407438100000000000x^4 +833247000000000000x^2 \\~~~~~~~-688905000000000000), \\ R^{(6)}(x) =~ \left( \frac{900}{x^9(x^4+60x^2-300)^7} \right) \\~~~~~~(41x^{30}-37800x^{28}+6113100x^{26}-208782000x^{24} \\~~~~~~+1884330000x^{22}-208202400000x^{20}-2671893000000x^{18} \\~~~~~~+144949500000000x^{16}+5848094700000000x^{14} \\~~~~~~+9219420000000000x^{12}-963361350000000000x^{10} \\~~~~~~+12133913400000000000x^8-71049069000000000000x^6 \\~~~~~~+227797920000000000000x^4-387755100000000000000x^2 \\~~~~~~+275562000000000000000). \end{aligned}$$

Now, we can check that

$$\begin{aligned}R(\alpha ) = \sqrt{30} = \alpha , \\R'(\alpha ) = 0, \\R''(\alpha ) = 0, \\R^{(3)}(\alpha ) = 0, \\R^{(4)}(\alpha ) = 0, \\R^{(5)}(\alpha ) = 0, \\R^{(6)}(\alpha ) = \frac{\sqrt{30}}{300} \ne 0. \end{aligned}$$

Hence, by Theorem 3, Algorithm 2 has sixth-order convergence.
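
A similar symbolic check can be made for Algorithm 2 by composing the Algorithm 1 map with the Newton correction used in (29); the following Maple lines are again our own illustration:

f := x -> 30 - x^2:
R1 := x -> x - f(x)/D(f)(x) - f(x)^2*(D@@2)(f)(x)/(2*D(f)(x)^3):   # Algorithm 1 map
R2 := x -> R1(x) - f(R1(x))/D(f)(R1(x)):                           # Algorithm 2 map, as in Theorem 3
seq(simplify(eval(diff(R2(x), x$k), x = sqrt(30))), k = 1 .. 6);   # expected: 0, 0, 0, 0, 0, sqrt(30)/300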

Theorem 4

Let \(f:I \rightarrow \mathbb {R}\). Suppose \(\alpha \in I\) is a simple root of (1) and \(\theta\) is a sufficiently small neighborhood of \(\alpha\). Let \(f''(x)\) exist and \(f'(x) \ne 0\) in \(\theta\). Then the iterative formula (30) given in Algorithm 3 produces a sequence of iterations \(\{x_n:n=0,1,2,\ldots \}\) with order of convergence seven.

Proof

Let

$$\begin{aligned} R(x) =~ x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} - \frac{f\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) }{f'\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) } \\~~~-\frac{\left( f\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) \right) ^2 f''\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) }{2f'(x)\left( f'\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) \right) ^2}. \end{aligned}$$

Since \(\alpha\) is a root of f(x), we have \(f(\alpha ) = 0\). One can compute that

$$\begin{aligned}R(\alpha ) = \alpha , \\R'(\alpha ) = 0, \\R''(\alpha ) = 0, \\R^{(3)}(\alpha ) = 0,\\R^{(4)}(\alpha ) = 0, \\R^{(5)}(\alpha ) = 0, \\R^{(6)}(\alpha ) = 0, \\R^{(7)}(\alpha ) \ne 0. \end{aligned}$$

Hence, by Theorem 1, Algorithm 3 has seventh-order convergence. \(\square\)

Again, one can verify the order of convergence of Algorithm 3 using the following example.

Example 3

Consider the equation (31). Following Theorem 4, similar to Example 1 and Example 2, we have

$$\begin{aligned} R(x) =~ x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} - \frac{f\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) }{f'\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) } \\~~~-\frac{\left( f\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) \right) ^2 f''\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) }{2f'(x)\left( f'\left( x - \frac{f(x)}{f'(x)}-\frac{(f(x))^2f''(x)}{2(f'(x))^3} \right) \right) ^2} \\ =~ \left( \frac{1}{512x^7(x^4+60x^2-300)^2} \right) (87x^{16}+39440x^{14}+2046800x^{12}\\~~~~~~+9912000x^{10} -428220000x^8+3650400000x^6-19116000000x^4 \\~~~~~~+58320000000x^2-72900000000). \end{aligned}$$

Now, we can check that

$$\begin{aligned}R(\alpha ) = \sqrt{30} = \alpha , \\R'(\alpha ) = 0, \\R''(\alpha ) = 0, \\R^{(3)}(\alpha ) = 0, \\R^{(4)}(\alpha ) = 0, \\R^{(5)}(\alpha ) = 0, \\R^{(6)}(\alpha ) = 0, \\R^{(7)}(\alpha ) = \frac{7}{300} \ne 0. \end{aligned}$$

Hence, by Theorem 4, Algorithm 3 has seventh-order convergence.

Numerical examples

This section presents several numerical examples to illustrate the proposed algorithms, and comparisons are made to show that the proposed algorithms reach the solution in fewer iterations than existing methods.

Example 4

Consider a non-linear equation

$$\begin{aligned} x^2-e^x-3x+2=0. \end{aligned}$$
(32)

Suppose the initial approximation is \(x_0 = 2\) with tolerance \(10^{-10}\), correct to ten decimal places. Following the proposed algorithms (equations (28), (29) and (30)), we have

$$\begin{aligned} x_0 =~2,\\ f(x) =~ x^2-e^x-3x+2~~\text {and}~~f(x_0) = -7.389056099, \\ f'(x) =~ 2x-e^x-3 ~~\text {and}~~f'(x_0) = -6.389056099,\\ f''(x) =2-e^x ~~\text {and}~~f''(x_0) =-5.389056099. \end{aligned}$$

Iteration-1 using Algorithm 1:

$$\begin{aligned} x_1 =~ x_0- \frac{f(x_0)}{f'(x_0)}-\frac{(f(x_0))^2f''(x_0)}{2(f'(x_0))^3} \\ =~ 0.2793895885. \end{aligned}$$

Iteration-2 using Algorithm 1:

$$\begin{aligned} x_1 =~0.2793895885,~~~~f(x_1) = -0.082432628, \\ f'(x_1) =~ -3.763543228, ~~f''(x_1) = 0.677677595. \end{aligned}$$

Now,

$$\begin{aligned} x_2 =~ x_1- \frac{f(x_1)}{f'(x_1)}-\frac{(f(x_1))^2f''(x_1)}{2(f'(x_1))^3} \\ =~ 0.2575298491. \end{aligned}$$

Iteration-3 using Algorithm 1:

$$\begin{aligned} x_2 =~0.2575298491,~~~~f(x_2) = 0.000001649, \\ f'(x_2) =~ -3.778670729, ~~~~f''(x_2) = 0.706269573. \end{aligned}$$

Now,

$$\begin{aligned} x_3 =~ x_2- \frac{f(x_2)}{f'(x_2)}-\frac{(f(x_2))^2f''(x_2)}{2(f'(x_2))^3} \\ =~ 0.2575302855. \end{aligned}$$

Similarly, Iteration-4 using Algorithm 1 gives \(x_4 = 0.2575302855\). One can observe that Iteration-3 and Iteration-4 agree up to ten decimal places, within the tolerance \(10^{-10}\). Hence the required approximate root of the given equation (32) is 0.2575302855.

Now, we compute the iterations using Algorithm 2 as follows.

Iteration-1 using Algorithm 2:

$$\begin{aligned} y_0= x_0- \frac{f(x_0)}{f'(x_0)}-\frac{(f(x_0))^2f''(x_0)}{2(f'(x_0))^3} = 0.2793895885, \\ x_{1}= x_0 - \frac{f(x_0)}{f'(x_0)}-\frac{(f(x_0))^2f''(x_0)}{2(f'(x_0))^3} - \frac{f(y_0)}{f'(y_0)} = 0.2574866574. \end{aligned}$$

Iteration-2 using Algorithm 2:

$$\begin{aligned} y_1= x_1- \frac{f(x_1)}{f'(x_1)}-\frac{(f(x_1))^2f''(x_1)}{2(f'(x_1))^3} = 0.2575302856, \\ x_{2}= x_1 - \frac{f(x_1)}{f'(x_1)}-\frac{(f(x_1))^2f''(x_1)}{2(f'(x_1))^3} - \frac{f(y_1)}{f'(y_1)} = 0.2575302853. \end{aligned}$$

Similarly, Iteration-3 using Algorithm 2 gives \(x_3 = 0.2575302853\). One can observe that Iteration-2 and Iteration-3 agree up to ten decimal places, within the tolerance \(10^{-10}\).

Now, the iterations using Algorithm 3 are as follows.

Iteration-1 using Algorithm 3:

$$\begin{aligned} y_0= x_0- \frac{f(x_0)}{f'(x_0)}-\frac{(f(x_0))^2f''(x_0)}{2(f'(x_0))^3} = 0.2793895885, \\ x_{1}= x_0 - \frac{f(x_0)}{f'(x_0)}-\frac{(f(x_0))^2f''(x_0)}{2(f'(x_0))^3} - \frac{f(y_0)}{f'(y_0)} - \frac{(f(y_0))^2f''(y_0)}{2f'(x_0)(f'(y_0))^2} \\= 0.2574612148. \end{aligned}$$

Iteration-2 using Algorithm 3:

$$\begin{aligned} y_1= x_1- \frac{f(x_1)}{f'(x_1)}-\frac{(f(x_1))^2f''(x_1)}{2(f'(x_1))^3} = 0.2575302854, \\ x_{2}= x_1 - \frac{f(x_1)}{f'(x_1)}-\frac{(f(x_1))^2f''(x_1)}{2(f'(x_1))^3} - \frac{f(y_1)}{f'(y_1)} - \frac{(f(y_1))^2f''(y_1)}{2f'(x_1)(f'(y_1))^2} \\= 0.2575302854. \end{aligned}$$

Example 5

Consider the following equations with corresponding initial approximations to compare the results of the proposed three methods with other existing methods. We take a tolerance of \(10^{-15}\), correct to 15 decimal places.

(a) \(\cos x - x =0\) with initial approximation \(x_0=1.7\),

(b) \(xe^{-x} - 0.1 =0\) with initial approximation \(x_0=0.1\),

(c) \(\sin ^2 x - x^2 + 1 =0\) with initial approximation \(x_0=-1\),

(d) \(x-e^{\sin x}+1 =0\) with initial approximation \(x_0=4\),

(e) \(x^3-10 =0\) with initial approximation \(x_0=1.5\).

Table 1 compares the number of iterations required by different methods. In the table, ER, NR, NM, A1, A2, A3 and DIV denote the exact root, the Newton-Raphson method, the Noor method [2], Algorithms 1, 2 and 3, and divergence, respectively.

Table 1 Comparing No. of iterations by different methods

From Table 1, it is clear that the proposed methods require fewer iterations, and are therefore more efficient, than the other existing methods considered.

Maple implementation

In this section, we present the implementation of the proposed Algorithms 1, 2 and 3 in Maple. Various Maple implementations for differential and transcendental equations are available, see, for example [17,18,19,20,21,22,23,24,25,26,27, 35]. One can also implement the proposed algorithms in Microsoft Excel, similar to the implementation of existing algorithms in [33, 34].

Pseudo code

Input: the function f(x); initial approximation x[0]; tolerance \(\epsilon\); accuracy \(\delta\) (correct to decimal places); maximum number of iterations n.

Output: Approximate solution

I. for i from 0 to n do

II.   Set \(x[i+1]\) = formula (28), (29) or (30).

III.  if \(|x[i+1]-x[i]| < \epsilon\) and \(|f(x[i+1])| < \delta\) then break; Output \(x[i+1]\).

Maple code

We present the Maple code of the proposed algorithms as follows; sample computations are presented in the next subsection.

Algorithm 1 in Maple

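The original Maple code for Algorithm 1 appears as a figure in the published article and is not reproduced here. As an illustration only, a minimal Maple sketch of formula (28), following the pseudo code above, could look as follows (the procedure name Algorithm1 and the output formatting are our own assumptions):

Algorithm1 := proc(f, x0, eps, delta, n)
    local fp, fpp, x, xnew, i;
    fp := D(f);  fpp := D(fp);        # first and second derivatives of f
    x := evalf(x0);
    for i from 1 to n do
        # formula (28)
        xnew := evalf(x - f(x)/fp(x) - f(x)^2*fpp(x)/(2*fp(x)^3));
        printf("> Iteration No: %d = %.10f   f(x) = %g\n", i, xnew, f(xnew));
        if abs(xnew - x) < eps and abs(f(xnew)) < delta then
            break;
        end if;
        x := xnew;
    end do;
    return xnew;
end proc: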

Algorithm 2 in Maple

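The article's Maple code for Algorithm 2 is likewise given as a figure. A minimal sketch of the two-step scheme (29), under the same assumptions as above, is:

Algorithm2 := proc(f, x0, eps, delta, n)
    local fp, fpp, x, y, xnew, i;
    fp := D(f);  fpp := D(fp);
    x := evalf(x0);
    for i from 1 to n do
        y := evalf(x - f(x)/fp(x) - f(x)^2*fpp(x)/(2*fp(x)^3));   # predictor, formula (28)
        xnew := evalf(y - f(y)/fp(y));                            # corrector step of (29)
        printf("> Iteration No: %d = %.10f   f(x) = %g\n", i, xnew, f(xnew));
        if abs(xnew - x) < eps and abs(f(xnew)) < delta then
            break;
        end if;
        x := xnew;
    end do;
    return xnew;
end proc: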

Algorithm 3 in Maple

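Again, the original code is shown as a figure; a minimal sketch of scheme (30), under the same assumptions, is:

Algorithm3 := proc(f, x0, eps, delta, n)
    local fp, fpp, x, y, xnew, i;
    fp := D(f);  fpp := D(fp);
    x := evalf(x0);
    for i from 1 to n do
        y := evalf(x - f(x)/fp(x) - f(x)^2*fpp(x)/(2*fp(x)^3));              # predictor, formula (28)
        xnew := evalf(y - f(y)/fp(y) - f(y)^2*fpp(y)/(2*fp(x)*fp(y)^2));     # corrector step of (30)
        printf("> Iteration No: %d = %.10f   f(x) = %g\n", i, xnew, f(xnew));
        if abs(xnew - x) < eps and abs(f(xnew)) < delta then
            break;
        end if;
        x := xnew;
    end do;
    return xnew;
end proc: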

Sample computations

Consider the following function for sample computations using the Maple implementation.

$$\begin{aligned} f(x) = \frac{1}{7} (30 - x^2), \end{aligned}$$

with initial approximation \(x[0]=3.5\), tolerance \(\epsilon = 10^{-5}\), correct to decimal places \(\delta =10^{-10}\) (i.e., up to 10 decimal places), and maximum number of iterations \(n=10\).
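
Assuming the hypothetical procedure names used in the sketches of the previous subsection, these sample computations can be invoked as

f := x -> (30 - x^2)/7:
Algorithm1(f, 3.5, 1e-5, 1e-10, 10);
Algorithm2(f, 3.5, 1e-5, 1e-10, 10);
Algorithm3(f, 3.5, 1e-5, 1e-10, 10);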

Algorithm 1 sample computations using Maple

$$\begin{aligned}1.10^{-10} \\1.10^{-10} \\x \rightarrow \frac{30}{7} - \frac{1}{7}x^2 \\x \rightarrow - \frac{2}{7}x \\x \rightarrow - \frac{2}{7} \\3.5 \\10 \\5.117164723 \\ \texttt {> Iteration No: 1 = }5.1171647230 \\ \texttt {f(x) = 2.53571} ~~~~~~~~\\5.476318565 \\ \texttt {> Iteration No: 2 = }5.4763185650 \\ \texttt {f(x) = 0.544946} ~~~~~~\\5.477225575 \\ \texttt {> Iteration No: 3 = }5.4772255750 \\ \texttt {f(x) = 0.00141928} ~~~\\5.477225575 \end{aligned}$$

Algorithm 2 sample computations using Maple

$$\begin{aligned}1.10^{-10} \\1.10^{-10} \\x \rightarrow \frac{30}{7} - \frac{1}{7}x^2 \\x \rightarrow - \frac{2}{7}x \\x \rightarrow - \frac{2}{7} \\3.5 \\10\\5.117164723 \\5.489893119 \\ \texttt {> Iteration No: 1 = }5.4898931190 \\ \texttt {f(x) = 2.53571} ~~~~~~~~\\5.477225609 \\5.477225575 \\ \texttt {> Iteration No: 2 = }5.4772255750 \\ \texttt {f(x) = -0.0198466} ~~~\\5.477225575 \\5.477225575 \end{aligned}$$

Similarly, one can apply Algorithm 3 using its Maple code.

Conclusion

In this paper, we presented three iterative methods, of order three, six and seven respectively, for solving non-linear equations. With the help of the modified homotopy perturbation technique, we obtained a coupled system of equations that yields solutions faster than existing methods. The convergence analysis of the proposed iterative methods is discussed, with an example for each proposed method. Maple implementations of the proposed methods are discussed with sample computations. Numerical examples are presented to illustrate and validate the proposed methods.

Limitations

The proposed algorithms are implemented in Maple only. However, they can also be implemented in Mathematica, Scilab, Matlab, Microsoft Excel, etc.

Availability of data and materials

The datasets generated and analyzed during the current study are presented in this manuscript.

Abbreviations

HPT:

Homotopy Perturbation Technique

References

  1. Naseem A, Rehman MA, Abdeljawad T. Real-world applications of a newly designed root-finding algorithm and its polynomiography. IEEE Access. 2021;9:160868–77.


  2. Chun C. Iterative methods improving Newton’s method by the decomposition method. Comput Math Appl. 2005;50:1559–68.


  3. Babolian E, Biazar J. On the order of convergence of adomain method. Appl Math Comput. 2002;130:383–7.


  4. He JH. A new iterative method for solving algebraic equations. Appl Math Comput. 2005;135:81–4.


  5. Noor MA, Noor KI, Khan WA, Ahmad F. On iterative methods for nonlinear equations. Appl Math Comput. 2006;183:128–33.


  6. Noor MA, Ahmad F. Numerical comparison of iterative methods for solving nonlinear equations. Appl Math Comput. 2006;180:167–72.


  7. Noor MA, Noor KI. Three-step iterative methods for nonlinear equations. Appl Math Comput. 2006;183:322–7.


  8. Noor MA. Iterative methods for nonlinear equations using homotopy perturbation technique. Appl Math Inf Sci. 2010;4(2):227–35.


  9. Saqib M, Iqbal M, Ali S, Ismaeel T. New fourth and fifth-order iterative methods for solving nonlinear equations. Appl Math. 2015;6:1220–7.


  10. Saied L. A new modification of False-Position method based on homotopy analysis method. Appl Math Mech. 2008;29(2):223–8.


  11. Sagraloff M. Computing real roots of real polynomials. J Symb Comput. 2013. https://doi.org/10.1016/j.jsc.2015.03.004.


  12. Hussain S, Srivastav VK, Thota S. Assessment of interpolation methods for solving the real life problem. Int J Math Sci Appl. 2015;5(1):91–5.


  13. Thota S, Gemechu T, Shanmugasundaram P. New algorithms for computing non-linear equations using exponential series. Palestine J Math. 2021;10(1):128–34.


  14. Thota S, Srivastav VK. Quadratically convergent algorithm for computing real root of non-linear transcendental equations. BMC Res Notes. 2018;11:909.


  15. Thota S, Srivastav VK. Interpolation based hybrid algorithm for computing real root of non-linear transcendental functions. IJMCR. 2014;2(11):729–35.


  16. Thota S. A new root-finding algorithm using exponential series. Ural Math J. 2019;5(1):83–90.


  17. Thota S, Kumar SD. Maple implementation of a reduction algorithm for differential-algebraic systems with variable coefficients, International Conference on Recent Trends in Engineering & Technology and Quest for Sustainable Development, 2019, 1–8.

  18. Thota S, Kumar SD. Maple implementation of symbolic methods for initial value problems. Res Resurgence. 2018;1(1):21–39 (ISBN: 978-93-83861-12-5).


  19. Thota S. Initial value problems for system of differential-algebraic equations in Maple. BMC Res Notes. 2018;11:651.


  20. Thota S. A symbolic algorithm for polynomial interpolation with stieltjes conditions in maple. Proc Inst Appl Math. 2019;8(2):112–20.


  21. Thota S. Maple implementation of a symbolic method for fully inhomogeneous boundary value problems. Int J Comput Sci Eng. 2021;3:1–5.


  22. Thota S, Kumar SD. A new reduction algorithm for differential-algebraic systems with power series coefficients. Inf Sci Lett. 2021;10(1):59–66.


  23. Thota S. Maple implementation for reducing differential-algebraic systems, Conference on Mathematics for the Advancement of Science, Technology and Education, March 5–6, 2021, at Ethiopian Mathematics Professionals Association (EMPA), Department of Mathematics, College of Natural and Computational Sciences, Addis Ababa University, Ethiopia.

  24. Thota S. Implementation of a symbolic method for fully inhomogeneous boundary value problems, 2021 International Conference on Mathematics and Computers in Science and Engineering (MACISE 2021), January 18–20, 2021, Madrid, Spain.

  25. Thota S. On a third order iterative algorithm for solving non-linear equations with maple implementation, National E-Conference on Interdisciplinary Research in Science and Technology, May 30–31, 2020, Amiruddaula Islmia Degree College, Locknow, India.

  26. Thota S, Kumar SD. Maple implementation of a reduction algorithm for differential-algebraic systems with variable coefficients, International Conference on Recent Trends in Engineering & Technology and Quest for Sustainable Development, May 13–14, 2019, Institute of Technology, Ambo University, Ethiopia.

  27. Thota S, Kumar SD. Maple package of initial value problem for system of differential-algebraic equations. National Seminar on Applications of Scientific and Statistical Software in Research, 30–31 March. The School of Sciences. Allahabad, India: Uttar Pradesh Rajarshi Tandon Open University; 2017.

  28. Gemechu T, Thota S. On new root finding algorithms for solving nonlinear transcendental equations. Int J Chem Math Phys. 2020;4(2):18–24.


  29. Thota S. Solution of generalized Abel's integral equations by homotopy perturbation method with adaptation in Laplace transformation. Sohag J Math. 2022;9(2):29–35.


  30. Thota S, Srivastav VK. An algorithm to compute real root of transcendental equations using hyperbolic tangent function. Int J Open Problems Compt Math. 2021;14(2):1–14.


  31. Thota S. A numerical algorithm to find a root of non-linear equations using householders method. Int J Adv Appl Sci. 2021;10(2):141–8.


  32. Thota S. Solution of generalized Abel's integral equations by homotopy perturbation method with adaptation in Laplace transformation. Sohag J Math. 2022;9(2):29–35.


  33. Thota S, Ayoade AA. On solving transcendental equations using various root finding algorithms with microsoft excel, 1st edition, Notion Press, 2022, ISBN-13: 979-8886844238.

  34. Thota S. Microsoft Excel Implementation of Numerical Algorithms for Nonlinear algebraic or Transcendental Equations, 5th International Conference on Statistics, Mathematical Modelling and Analysis (SMMA 2022), May 27-29, 2022 in Xi’an, China.

  35. Thota S. An introduction to maple, five days national faculty development program on innovative tools for solving research problems, April 25–29. India: Chitkara University; 2022.


  36. Thota S. A new hybrid halley-false position type root finding algorithm to solve transcendental equations, Istanbul International Modern Scientific Research Congress-III, 06-08. Istanbul Gedik University. Turkey: Istanbul; 2022.

  37. Parveen T, Singh S, Thota S, Srivastav VK. A new hydride root-finding algorithm for transcendental equations using bisection, regula-falsi and newton-raphson methods, National Conference on Sustainable & Recent Innovation in Science and Engineering (SUNRISE-19), 2019. (ISBN No. 978-93-5391-715-9).

  38. Srivastav VK, Thota S, Kumar M. A new trigonometrical algorithm for computing real root of non-linear transcendental equations. Int J Appl Compt Math. 2019;5:44.



Acknowledgements

The authors are thankful to the reviewers and the editor for providing valuable inputs to improve the quality and presentation of the manuscript.

Funding

Not applicable.

Author information


Contributions

ST was involved in the creation of the proposed algorithms for solving non-linear equations using the modified HPT, the convergence analysis, and the Maple implementation. PS was involved in suggesting and verifying the numerical examples in the present paper. Both authors read and approved the final manuscript.

Authors' information

Prof. Srinivasarao Thota completed his M.Sc. in Mathematics at the Indian Institute of Technology (IIT) Madras, India, and his Ph.D. in Mathematics at Motilal Nehru National Institute of Technology (NIT) Allahabad, India. Prof. Thota’s areas of research interest are computer algebra (symbolic methods for differential equations), numerical analysis (root-finding algorithms) and mathematical modelling (ecology). He has published more than 40 research papers in various international journals and presented his research work at several international conferences as an oral presenter and invited/keynote/guest speaker in different countries. He is presently working at the Department of Mathematics, SR University, Warangal, India.

Prof. P. Shanmugasundaram completed his M.Sc. in Mathematics at Sri Vasavi College, his M.Phil. in Mathematics at Madurai Kamraj University, Madurai, India, and his Ph.D. in Mathematics at Anna University, Chennai, India. He has published more than 25 research papers in various international journals. He is presently working at the Department of Mathematics, College of Natural & Computational Sciences, Mizan–Tepi University, Ethiopia.

Corresponding author

Correspondence to Srinivasarao Thota.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Thota, S., Shanmugasundaram, P. On new sixth and seventh order iterative methods for solving non-linear equations using homotopy perturbation technique. BMC Res Notes 15, 267 (2022). https://doi.org/10.1186/s13104-022-06154-5

