Steffensen’s Method For Finding Roots of Nonlinear Equations

by admin on June 15, 2019

Steffensen’s method is a root-finding technique similar to Newton’s method, named after Johan Frederik Steffensen. Steffensen’s method also achieves quadratic convergence, but without using derivatives as Newton’s method does.

Example of Using This Method

Input

f = @(x) x.^3 - 8;                  % Equation we want to solve
p = rand(1,1);                      % Initial values [p0]
N = 1e6;                            % Max. number of iterations
tol = 1e-6;                         % Tolerance
[X,Err,ii] = Steffensen(f,p,tol,N); % Calling Steffensen's Function

Output


Root X = 2 ,Fx = 1.5289e-11 ,# Iterations = 16
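
The MATLAB source for the Steffensen function called above is not listed in this post. As a rough sketch only (assuming the same (f, p, tol, N) interface and (X, Err, ii) outputs; the name and signature here are hypothetical), an equivalent routine could look like this in Python:

def steffensen(f, p, tol, N):
    '''Hypothetical stand-in for the MATLAB Steffensen routine above.

    f   : function whose root is sought
    p   : initial value p0
    tol : stop once |f(p)| < tol
    N   : maximum number of iterations
    Returns (X, Err, ii): root estimate, |f(X)|, and iterations used.
    '''
    for ii in range(1, int(N) + 1):
        fp = f(p)
        if abs(fp) < tol:
            break
        slope = f(p + fp) / fp - 1   # first-order divided difference g(p)
        p = p - fp / slope           # Steffensen update
    return p, abs(f(p)), ii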


Contents

  • Simple description
  • Advantages and drawbacks
  • Derivation using Aitken’s delta-squared process
  • Implementation in Python
  • Generalization
  • References

Simple Description

The simplest form of the formula for Steffensen’s method occurs when it is used to find the zeros, or roots, of a function f; that is, to find the value x* that satisfies f(x*) = 0. Near the solution x*, the function f is supposed to approximately satisfy -1 < f'(x*) < 0; this condition makes f adequate as a correction function for x when searching for its own solution, although it is not required for the method to work efficiently. For some functions, Steffensen’s method can work even if this condition is not met, but in such a case the starting value x_(0) must be very close to the actual solution x*, and convergence to the solution may be slow.

Given an adequate starting value x_(0), a sequence of values x_(0), x_(1), x_(2), … , x_(n), … can be generated using the formula below. When it works, each value in the sequence is much closer to the solution x* than the prior value. The value x_(n) from the current step generates the value x_(n+1) for the next step, via the formula

    x_(n+1) = x_(n) - f(x_(n)) / g(x_(n))

for n = 0, 1, 2, 3, … , where the slope function g(x_(n)) is a composite of the original function f given by the following formula:

    g(x_(n)) = f(x_(n) + f(x_(n))) / f(x_(n)) - 1

or equivalently

    g(x_(n)) = [f(x_(n) + h) - f(x_(n))] / h

where h = f(x_(n)).

The function g is the average value for the slope of the function between the last sequence point (x,y) = (x_(n), f(x_(n))) and the auxiliary point (x,y) = (x_(n) + h, f(x_(n) + h)), with the step h = f(x_(n)). It is also called the first-order divided difference of f between those two points.

It is only for the purpose of finding h for this auxiliary point that the value of the function f must be an adequate correction to get closer to its own solution, and for that reason fulfill the requirement that  -1 < f'(x*) < 0. For all other parts of the calculation, Steffensen’s method only requires the function f to be continuous and to actually have a nearby solution. Several modest modifications of the step h in the slope calculation g exist to accommodate functions f that do not quite meet the requirement.
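
To make the formulas concrete, here is a short sketch (ours, not from the original article) that runs a few Steffensen steps on the same test function used in the MATLAB example, f(x) = x^3 - 8, whose root is x* = 2:

f = lambda x: x**3 - 8   # root x* = 2

x = 2.1                  # starting value chosen close to the root
for n in range(4):
    h = f(x)                      # step h = f(x_n)
    g = (f(x + h) - f(x)) / h     # slope g(x_n): first-order divided difference
    x = x - f(x) / g              # x_(n+1) = x_(n) - f(x_(n)) / g(x_(n))
    print(n + 1, x)               # iterates approach 2; the error shrinks rapidly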

Advantages and Drawbacks

The main advantage of Steffensen’s method is that it has quadratic convergence[1] like Newton’s method – that is, both methods find roots of an equation f just as ‘quickly’. In this case quickly means that for both methods, the number of correct digits in the answer doubles with each step. But the formula for Newton’s method requires evaluation of the function’s derivative f’ as well as the function f, while Steffensen’s method only requires f itself. This is important when the derivative is not easily or efficiently available.

The price for the quick convergence is the double function evaluation: both f(x_(n)) and f(x_(n) + h) must be calculated, which might be time-consuming if f is a complicated function. For comparison, the secant method needs only one function evaluation per step, so it can carry out twice as many steps in the same time as Steffensen’s method. The secant method increases the number of correct digits by “only” a factor of roughly 1.6 per step, but since it takes twice as many steps, when both algorithms succeed the secant method actually converges faster in practice: it achieves a factor of about (1.6)^2 ≈ 2.6 times as many digits for every two steps (two function evaluations), compared to Steffensen’s factor of 2 for every one step (two function evaluations).

Similar to most other iterative root-finding algorithms, the crucial weakness in Steffensen’s method is the choice of the starting value x_(0) . If the value of x_(0) is not ‘close enough’ to the actual solution x* , the method may fail and the sequence of values x_(0),x_(1),x_(2),x_(3),… may either flip-flop between two extremes, or diverge to infinity (possibly both!).

Derivation Using Aitken’s Delta-Squared Process

The version of Steffensen’s method implemented in the Python code shown below can be derived using Aitken’s delta-squared process for accelerating the convergence of a sequence. To compare the following formulae to the formulae in the section above, notice that x_(n) = p - p_(n). This method assumes starting with a linearly convergent sequence and increases the rate of convergence of that sequence. If the signs of p_(n), p_(n+1), p_(n+2) agree and p_(n) is ‘sufficiently close’ to the desired limit of the sequence p, we can assume the following:

    (p_(n+1) - p) / (p_(n) - p) ≈ (p_(n+2) - p) / (p_(n+1) - p)

then

    (p_(n+1) - p)^2 ≈ (p_(n+2) - p) (p_(n) - p)

so

    p_(n+1)^2 - 2 p_(n+1) p + p^2 ≈ p_(n+2) p_(n) - (p_(n) + p_(n+2)) p + p^2

and hence

    (p_(n+2) - 2 p_(n+1) + p_(n)) p ≈ p_(n+2) p_(n) - p_(n+1)^2.

Solving for the desired limit of the sequence p gives

    p ≈ (p_(n+2) p_(n) - p_(n+1)^2) / (p_(n+2) - 2 p_(n+1) + p_(n))
      = p_(n) - (p_(n+1) - p_(n))^2 / (p_(n+2) - 2 p_(n+1) + p_(n)),

which results in the more rapidly convergent sequence

    p ≈ p_(n+3) = p_(n) - (p_(n+1) - p_(n))^2 / (p_(n+2) - 2 p_(n+1) + p_(n)).
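
As a quick numerical check (our own sketch, not part of the original article), the accelerated-sequence formula can be applied to the linearly convergent fixed-point iterates p_(n+1) = cos(p_(n)):

import math

# Linearly convergent iterates p_(n+1) = cos(p_(n)), starting from p0 = 1.0
p = [1.0]
for _ in range(6):
    p.append(math.cos(p[-1]))

# Aitken's delta-squared: p ~ p_n - (p_(n+1) - p_(n))^2 / (p_(n+2) - 2 p_(n+1) + p_(n))
accelerated = [
    p[n] - (p[n + 1] - p[n]) ** 2 / (p[n + 2] - 2 * p[n + 1] + p[n])
    for n in range(len(p) - 2)
]

print(p[-1])            # plain iteration after 6 steps: ~0.764
print(accelerated[-1])  # accelerated value: ~0.7387, near the true limit 0.739085...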

Implementation in Python

Here is the source for an implementation of Steffensen’s Method in Python.

def g(f, x):
    '''
    First-order divided difference of f between x and x + f(x)

    parameters:
    f (callable): function input to g
    x (float): point at which to evaluate g
    '''
    return f(x + f(x)) / f(x) - 1

def steff(f, x):
    '''
    Steffensen's algorithm for finding roots

    This generator yields the iterates x_1, x_2, ... one at a time;
    iterate it until successive values are as close as you require.

    parameters:
    f (callable): function whose root we are searching for
    x (float): starting value x_0
    '''
    while f(x) != 0 and g(f, x) != 0:   # stop on an exact root or a vanishing slope
        x = x - f(x) / g(f, x)          # Steffensen update: x_(n+1) = x_n - f(x_n) / g(x_n)
        yield x
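
For example (a usage sketch of ours), the generator can be driven until two successive iterates agree to a chosen tolerance, reusing f(x) = x**3 - 8 from the MATLAB example:

f = lambda x: x**3 - 8

prev = 2.5                     # starting value x_0
for x in steff(f, prev):
    if abs(x - prev) < 1e-12:  # stop when successive iterates agree
        break
    prev = x

print(x)                       # approximately 2.0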

Generalization

Steffensen’s method can also be used to find an input x = x* for a different kind of function F that produces output the same as its input: x* = F(x*) for the special value x* . Solutions like x* are called fixed points. Many such functions can be used to find their own solutions by repeatedly recycling the result back as input, but the rate of convergence can be slow, or the function can fail to converge at all, depending on the individual function. Steffensen’s method accelerates this convergence, to make it quadratic.
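
To make this concrete, here is a sketch of ours (not from the article) of the scalar fixed-point form: each step takes two plain fixed-point updates and combines them with the accelerated-sequence formula derived above, feeding the result back in as the next input:

import math

def steffensen_fixed_point(F, p, tol=1e-12, max_iter=100):
    '''Find x* with x* = F(x*), starting from p (illustrative helper of ours).'''
    for _ in range(max_iter):
        p1 = F(p)                          # one plain fixed-point update
        p2 = F(p1)                         # and a second one
        denom = p2 - 2 * p1 + p
        if denom == 0:                     # formula degenerate (or already converged)
            break
        p_new = p - (p1 - p) ** 2 / denom  # Aitken's delta-squared combination
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

print(steffensen_fixed_point(lambda x: math.exp(-x), 0.5))  # ~0.567143, where x = e^(-x)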

This method for finding fixed points of a real-valued function has been generalised for functions F : X → X on a Banach space X. The generalised method assumes that a family of bounded linear operators {L(u, v) : u, v in X} associated with u and v can be found to satisfy the condition[2]

    F(u) - F(v) = L(u, v) (u - v).

In the simple form given in the section above, the function f simply takes in and produces real numbers. There, the function g is a divided difference. In the generalised form here, the operator L is the analogue of a divided difference for use in the Banach space. The operator L is equivalent to a matrix whose entries are all functions of the vector arguments u and v.

Steffensen’s method is then very similar to Newton’s method, except that it uses the divided difference L(F(x_(n)), x_(n)) instead of the derivative F'(x_(n)). It is thus defined by

    x_(n+1) = x_(n) + [I - L(F(x_(n)), x_(n))]^(-1) (F(x_(n)) - x_(n))

for n = 1, 2, 3, … , where I is the identity operator.

If the operator L satisfies

    ||L(u, v) - L(x, y)|| ≤ K (||u - x|| + ||v - y||)

for some constant K, then the method converges quadratically to a fixed point of F if the initial approximation x_(0) is ‘sufficiently close’ to the desired solution x* that satisfies x* = F(x*).
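
As an illustrative sketch (ours, not from the article), consider a map F on R^2 whose components are decoupled, so that an exact divided-difference operator L(u, v) is simply a diagonal matrix; the update above then becomes a small linear solve per step:

import numpy as np

# Decoupled example map: each component of F depends only on its own input,
# so L(u, v) is diagonal with entries (F_i(u) - F_i(v)) / (u_i - v_i).
# (The choice of F is an illustrative assumption of ours.)
def F(x):
    return np.array([np.cos(x[0]), 0.5 * np.sin(x[1]) + 1.0])

def L(u, v):
    return np.diag((F(u) - F(v)) / (u - v))  # assumes u_i != v_i componentwise

def steffensen_banach(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx - x) < tol:    # x is (numerically) a fixed point
            break
        M = np.eye(len(x)) - L(Fx, x)       # I - L(F(x_n), x_n)
        x = x + np.linalg.solve(M, Fx - x)  # x_(n+1) = x_n + [I - L]^(-1) (F(x_n) - x_n)
    return x

print(steffensen_banach(np.array([1.0, 1.5])))  # ~ [0.739085, 1.498701]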

References

  1. Dahlquist, Germund; Björck, Åke (1974). Numerical Methods. Translated by Anderson, Ned. Englewood Cliffs, NJ: Prentice Hall. pp. 230–231.
  2. Johnson, L.W.; Scholz, D.R. (June 1968). “On Steffensen’s method”. SIAM Journal on Numerical Analysis. 5 (2): 296–302. doi:10.1137/0705026. JSTOR 2949443.
