7 Fast Iterative Methods for Calculating Square Roots

by admin on June 14, 2019

There are several square root algorithms, i.e., methods of computing the principal square root of a non-negative real number. Finding Sqrt(S) is the same as solving the equation f(x) = x^(2) – S = 0 for a positive x, so any general numerical root-finding algorithm can be used. Newton’s method, for example, reduces in this case to the so-called Babylonian method described below.

Example On Using This Code:

Input

m = 110;                              % Number whose square root you want
% [SQRT] = Babylonian_Newton_ISQRT(m) % Babylonian (Newton) method
% [SQRT] = Bakhshali_ISQRT(m)         % Bakhshali method
% [SQRT] = Exponential_ISQRT(m)       % Exponential identity
% [SQRT] = Two_Variable_ISQRT(m)      % Two-variable iterative method (valid for 0 < m < 3)
% [SQRT] = Halley_ISQRT(m)            % Halley's method
% [SQRT] = Goldschmidt1_ISQRT(m)      % Goldschmidt's algorithm, first form
[SQRT] = Goldschmidt2_ISQRT(m)        % Goldschmidt's algorithm, second form

Output

The output is the value of the square root of m.

Babylonian Method

[Figure: graph of the Babylonian method approximating a square root of 100 (±10) from starting values x_(0) = 50, x_(0) = 1, and x_(0) = –5. A positive starting value yields the positive root, a negative starting value the negative root.]

Perhaps the first algorithm used for approximating Sqrt(S) is known as the Babylonian method, despite there being no direct evidence, beyond informed conjecture, that the eponymous Babylonian mathematicians employed exactly this method.[1] The method is also known as Heron’s method, after the first-century Greek mathematician Hero of Alexandria, who gave the first explicit description of it in his AD 60 work Metrica.[2] The basic idea is that if x is an overestimate of the square root of a non-negative real number S, then S/x will be an underestimate, or vice versa, so the average of these two numbers may reasonably be expected to provide a better approximation. (The formal proof of that assertion rests on the inequality of arithmetic and geometric means, which shows this average is always an overestimate of the square root, thus assuring convergence.)

More precisely, if x is our initial guess of Sqrt(S) and e is the error in our estimate such that S = (x + e)^(2), then we can expand the binomial and solve for

e = (S – x^(2)) / (2x + e) =~ (S – x^(2)) / 2x

since e << x. Therefore, we can compensate for the error and update our old estimate as

x + e =~ x + (S – x^(2)) / 2x = (x + S/x) / 2
Since the computed error was not exact, this becomes our next best guess. The process of updating is iterated until desired accuracy is obtained. This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration. It proceeds as follows:

  1. Begin with an arbitrary positive starting value x_(0) (the closer to the actual square root of S, the better).
  2. Let x_(n+1) be the average of x_(n) and S/x_(n) (using the arithmetic mean to approximate the geometric mean).
  3. Repeat step 2 until the desired accuracy is achieved.

It can also be represented as:

x_(0) =~ Sqrt(S)
x_(n+1) = (x_(n) + S/x_(n)) / 2
Sqrt(S) = lim_(n–>inf) x_(n)

This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics.

It can be derived from Newton’s method (which it predates by 16 centuries). Applied to f(x) = x^(2) – S, Newton’s iteration

x_(n+1) = x_(n) – f(x_(n)) / f'(x_(n)) = x_(n) – (x_(n)^(2) – S) / 2x_(n) = (x_(n) + S/x_(n)) / 2

recovers exactly the Babylonian update.

Example

To calculate Sqrt(S), where S = 125348, to six significant figures, start from a rough estimate of the right order of magnitude, such as x_(0) = 600. Then

x_(1) = (600 + 125348/600) / 2 = 404.45667
x_(2) = (404.45667 + 125348/404.45667) / 2 = 357.18684
x_(3) = (357.18684 + 125348/357.18684) / 2 = 354.05901
x_(4) = (354.05901 + 125348/354.05901) / 2 = 354.04520

Therefore, Sqrt(125348) =~ 354.045.
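
For reference, here is a minimal MATLAB sketch of this iteration; the stopping tolerance and iteration cap are illustrative assumptions, not the internals of the packaged Babylonian_Newton_ISQRT function.

S = 125348;                      % number whose square root is sought
x = 600;                         % rough initial estimate x_(0)
for n = 1:20
    xNew = (x + S/x) / 2;        % average of x and S/x
    if abs(xNew - x) < 1e-10*x   % stop once the relative change is tiny
        x = xNew;
        break
    end
    x = xNew;
end
fprintf('sqrt(%d) ~ %.6f\n', S, x)   % prints about 354.045251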

Convergence

Suppose that x_(0) > 0 and S > 0. Then for any natural number n, x_(n) > 0. Let the relative error in x_(n) be defined by

e_(n) = x_(n)/Sqrt(S) – 1 > –1

and thus

x_(n) = Sqrt(S) . (1 + e_(n)).

Then it can be shown that

e_(n+1) = e_(n)^(2) / (2(1 + e_(n))) >= 0

and thus that, for n >= 1,

e_(n+1) <= min{ e_(n)^(2)/2, e_(n)/2 }

and consequently that convergence is assured, and quadratic.
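
The quadratic convergence can be checked numerically. The short MATLAB loop below (an illustrative sketch, not part of the package) prints the relative error e_(n) after each Babylonian step; the number of correct digits roughly doubles each time:

S = 125348;
x = 600;                         % x_(0)
for n = 1:5
    x = (x + S/x) / 2;           % one Babylonian step
    fprintf('n = %d, e_n = %.3e\n', n, x/sqrt(S) - 1)   % relative error
end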

Bakhshali Method

This method for finding an approximation to a square root was described in an ancient Indian mathematical manuscript called the Bakhshali manuscript. It is equivalent to two iterations of the Babylonian method beginning with x_(0). Thus, the algorithm is quartically convergent, which means that the number of correct digits of the approximation roughly quadruples with each iteration.[3] The original presentation, using modern notation, is as follows: To calculate Sqrt(S), let x_(0)^(2) be the initial approximation to S. Then, successively iterate as:

a_(n) = (S – x_(n)^(2)) / 2x_(n)
b_(n) = x_(n) + a_(n)
x_(n+1) = b_(n) – a_(n)^(2) / 2b_(n)

Written explicitly, it becomes

x_(n+1) = (x_(n)^(4) + 6 S x_(n)^(2) + S^(2)) / (4 x_(n)^(3) + 4 S x_(n))

Let x_(0) = N be an integer such that N^(2) is the nearest perfect square to S. Also, let the difference d = S – N^(2); then the first iteration can be written as:

Sqrt(S) =~ N + d/2N – d^(2) / (8N^(3) + 4Nd)
This gives a rational approximation to the square root.

Example

Using the same example as given in the Babylonian method, let S = 125348 and x_(0) = 600. Then the first iteration gives

a_(0) = (125348 – 600^(2)) / (2 . 600) = –195.543
b_(0) = 600 + (–195.543) = 404.456
x_(1) = 404.456 – (–195.543)^(2) / (2 . 404.456) = 357.187

Likewise the second iteration gives

a_(1) = (125348 – 357.187^(2)) / (2 . 357.187) = –3.128
b_(1) = 357.187 + (–3.128) = 354.059
x_(2) = 354.059 – (–3.128)^(2) / (2 . 354.059) = 354.0452
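
A minimal MATLAB sketch of the Bakhshali step follows; it is an illustration of the three-line iteration above, not necessarily how the packaged Bakhshali_ISQRT is coded.

S = 125348;
x = 600;                       % initial approximation x_(0)
for n = 1:4
    a = (S - x^2) / (2*x);     % a_(n): first-order correction
    b = x + a;                 % b_(n): one Babylonian step
    x = b - a^2 / (2*b);       % x_(n+1): second-order correction
end
fprintf('%.7f\n', x)           % prints about 354.0451948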

Exponential identity

Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity found from the properties of logarithms, ln(x^(n)) = n . ln(x), and exponentials, e^(ln(x)) = x:

Sqrt(S) = e^(ln(S)/2)

The denominator in the fraction corresponds to the nth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots with logarithm tables or slide rules.
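
In MATLAB the identity is a one-liner; this sketch assumes Exponential_ISQRT works along these lines.

S = 110;
root = exp(0.5 * log(S));      % Sqrt(S) = e^((1/2) ln S)
fprintf('%.6f\n', root)        % prints about 10.488088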

A Two-Variable Iterative Method

This method is applicable for finding the square root of 0 < S < 3 and converges best for S =~ 1. This, however, is no real limitation for a computer-based calculation, since in base-2 floating-point and fixed-point representations it is trivial to multiply S by an integer power of 4, and therefore Sqrt(S) by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, S can be moved to the range 0.5 <= S < 2. Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single-variable iterative methods such as the Babylonian one. The initialization step of this method is

a_(0) = S
c_(0) = S – 1

while the iterative steps read

a_(n+1) = a_(n) – a_(n) . c_(n) / 2
c_(n+1) = c_(n)^(2) . (c_(n) – 3) / 4

Then a_(n) –> Sqrt(S) (while c_(n) –> 0).

Note that the convergence of c_(n), and therefore also of a_(n), is quadratic.

The proof of the method is rather easy. First, rewrite the iterative definition of c_(n) as

1 + c_(n+1) = (1 + c_(n)) . (1 – c_(n)/2)^(2).

Then it is straightforward to prove by induction that

a_(n)^(2) = S . (1 + c_(n))

and therefore the convergence of a_(n) to the desired result Sqrt(S) is ensured by the convergence of c_(n) to 0, which in turn follows from –1 < c_(0) < 2.
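
A minimal MATLAB sketch of the two-variable iteration follows (valid for 0 < S < 3; the argument reduction to this range by powers of 4 is omitted, and Two_Variable_ISQRT's internals are assumed):

S = 1.5;                   % must satisfy 0 < S < 3
a = S;                     % a_(0)
c = S - 1;                 % c_(0)
for n = 1:6
    a = a - a*c/2;         % a_(n+1) = a_(n)(1 - c_(n)/2): no general division
    c = c^2 * (c - 3)/4;   % c_(n+1) = c_(n)^2 (c_(n) - 3)/4
end
fprintf('%.12f (exact: %.12f)\n', a, sqrt(S))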

This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill[7] for use on EDSAC, one of the first electronic computers.[8] The method was later generalized, allowing the computation of roots other than square roots.[9]

Iterative Methods For Reciprocal Square Roots

The following are iterative methods for finding the reciprocal square root of S which is 1 / Sqrt(S). Once it has been found, find Sqrt(S) by simple multiplication: Sqrt(S) = S . (1 / Sqrt(S)). These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods.

  • Applying Newton’s method to the equation (1 / x^(2)) – S = 0 produces a method that converges quadratically using three multiplications per step:

    x_(n+1) = x_(n)/2 . (3 – S . x_(n)^(2))

  • Another iteration is obtained by Halley’s method, which is Householder’s method of order two. This converges cubically, but involves four multiplications per iteration:

    y_(n) = S . x_(n)^(2), and
    x_(n+1) = x_(n)/8 . (15 – y_(n) . (10 – 3 y_(n))).
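
The following MATLAB sketch shows the Newton (quadratic) variant; the crude starting guess is an assumption for illustration (a production routine would use a table lookup or a bit-level trick), and it must be close enough to 1/Sqrt(S) for the iteration to converge.

S = 110;
x = 0.1;                     % crude guess for 1/sqrt(110), about 0.0953
for n = 1:6
    x = x * (3 - S*x*x) / 2; % three multiplications per step
end
fprintf('sqrt(S) ~ %.6f\n', S*x)   % Sqrt(S) = S * (1/Sqrt(S))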

Goldschmidt’s Algorithm

Some computers use Goldschmidt’s algorithm to simultaneously calculate Sqrt(S) and 1 / Sqrt(S). Goldschmidt’s algorithm finds Sqrt(S) faster than Newton-Raphson iteration on a computer with a fused multiply–add instruction and either a pipelined floating point unit or two independent floating-point units.[10]

The first way of writing Goldschmidt’s algorithm begins

b_(0) = S
Y_(0) =~ 1/Sqrt(S)                (typically using a table lookup)
y_(0) = Y_(0)
x_(0) = S . Y_(0)

and iterates

b_(n+1) = b_(n) . Y_(n)^(2)
Y_(n+1) = (3 – b_(n+1)) / 2
x_(n+1) = x_(n) . Y_(n+1)
y_(n+1) = y_(n) . Y_(n+1)

until b_(n) is sufficiently close to 1, or a fixed number of iterations has been performed. The iterations converge to

lim_(n–>inf) x_(n) = Sqrt(S), and
lim_(n–>inf) y_(n) = 1/Sqrt(S).

Note that it is possible to omit either x_(n) or y_(n) from the computation; if both are desired, then x_(n) = S . y_(n) may be used at the end rather than computing x_(n) in each iteration.

A second form, using fused multiply–add operations, begins

y_(0) =~ 1/Sqrt(S)                (typically using a table lookup)
x_(0) = S . y_(0)
h_(0) = y_(0) / 2

and iterates

r_(n) = 1/2 – x_(n) . h_(n)
x_(n+1) = x_(n) + x_(n) . r_(n)
h_(n+1) = h_(n) + h_(n) . r_(n)

until r_(n) is sufficiently close to 0, or a fixed number of iterations has been performed. This converges to

lim_(n–>inf) x_(n) = Sqrt(S), and
lim_(n–>inf) h_(n) = 1/(2 Sqrt(S)).
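
As an illustration, the second form can be sketched in MATLAB as below, with the fused multiply–adds written as ordinary expressions and the table lookup replaced by a fixed guess; the packaged Goldschmidt2_ISQRT may differ in detail.

S = 110;
y = 0.1;                 % y_(0) ~ 1/sqrt(S), stand-in for a table lookup
x = S * y;               % x_(n) -> Sqrt(S)
h = y / 2;               % h_(n) -> 1/(2 Sqrt(S))
for n = 1:6
    r = 0.5 - x*h;       % residual r_(n); zero at the true root
    x = x + x*r;         % on FMA hardware: fma(x, r, x)
    h = h + h*r;         % on FMA hardware: fma(h, r, h)
end
fprintf('sqrt(S) ~ %.6f, 1/sqrt(S) ~ %.6f\n', x, 2*h)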

References

  1. Fowler, David; Robson, Eleanor (1998). “Square Root Approximations in Old Babylonian Mathematics: YBC 7289 in Context”. Historia Mathematica 25: 376. doi:10.1006/hmat.1998.2209. Retrieved 14 October 2018.
  2. Heath, Thomas (1921). A History of Greek Mathematics, Vol. 2. Oxford: Clarendon Press. pp. 323–324.
  3. Bailey, David; Borwein, Jonathan (2012). “Ancient Indian Square Roots: An Exercise in Forensic Paleo-Mathematics” (PDF). American Mathematical Monthly 119 (8): 646–657. Retrieved 2017-09-14.
  4. Fast integer square root by Mr. Woo’s abacus algorithm (archived)
  5. Integer Square Root function
  6. Sri Bharati Krisna Tirthaji (2008) [1965]. Vedic Mathematics or Sixteen Simple Mathematical Formulae from the Vedas. Motilal Banarsidass. ISBN 978-8120801639.
  7. Wilkes, M. V.; Wheeler, D. J.; Gill, S. (1951). The Preparation of Programs for an Electronic Digital Computer. Addison-Wesley.
  8. Campbell-Kelly, M. (September 2009). “Origin of Computing”. Scientific American.
  9. Gower, J. C. (1958). “A Note on an Iterative Method for Root Extraction”. The Computer Journal 1 (3): 142–143.
  10. Markstein, Peter (November 2004). “Software Division and Square Root Using Goldschmidt’s Algorithms” (PDF). 6th Conference on Real Numbers and Computers. Dagstuhl, Germany. CiteSeerX 10.1.1.85.9648.
  11. Meher, Pramod Kumar; Valls, Javier; Juang, Tso-Bing; Sridharan, K.; Maharatna, Koushik (2009). “50 Years of CORDIC: Algorithms, Architectures and Applications” (PDF). IEEE Transactions on Circuits & Systems-I: Regular Papers 56 (9): 1893–1907. doi:10.1109/TCSI.2009.2025803. Retrieved 2016-01-03.
  12. Beceanu, Marius. “Period of the Continued Fraction of sqrt(n)” (PDF). Theorem 2.3. Archived from the original (PDF) on 21 December 2015. Retrieved 21 December 2015.
  13. Gliga, Alexandra Ioana (March 17, 2006). On Continued Fractions of the Square Root of Prime Numbers (PDF). Corollary 3.3.
  14. Fast Inverse Square Root by Chris Lomont
  15. Piñeiro, José-Alejandro; Bruguera, Javier Díaz (2002). “High-Speed Double-Precision Computation of Reciprocal, Division, Square Root and Inverse Square Root” (abstract).
  16. Abramowitz, Milton; Stegun, Irene A. (1964). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Courier Dover Publications. Section 3.7.26, p. 17. ISBN 978-0-486-61272-0.
  17. Cooke, Roger (2008). Classical Algebra: Its Nature, Origins, and Uses. John Wiley and Sons. p. 59. ISBN 978-0-470-25952-8.


Release Information

  • Price: $6.99
  • Released: June 14, 2019
  • Last Updated: June 14, 2019
