
Illinois Algorithm, Anderson–Björk Algorithm and False Position Method
by admin in Math, Statistics, and Optimization, MATLAB Family, Roots of Equation on June 14, 2019

The Illinois algorithm and the Anderson–Björk algorithm are modifications of the false position method that give faster and more accurate answers for linear and nonlinear equations. This code lets you choose among six methods and computes the root of a given equation, as described in the following example:
Example On Using This Code
Input
f = @(c) (68.1*9.81./c).*(1-exp(-c.*10./68.1))-40; % The Eq we want to solve
N = 1000;        % Number of Iterations
err = 1e-9;      % Result Accuracy
c = 0.1:0.1:50;  % The interval of the equation parameter c
xl = c(1);       % First interval value a
xu = c(end);     % Final interval value b
Method = 6;      % 1: False Position,
                 % 2: Illinois Algorithm (Type 1),
                 % 3: Illinois Algorithm (Type 2),
                 % 4: Anderson - Björk Based False Position,
                 % 5: Anderson - Björk Based Illinois (Type 1),
                 % 6: Anderson - Björk Based Illinois (Type 2).
Output
The Root is: 14.8011 ,with accuracy: 7.7325e-12 ,No Iterations: 7
The Illinois Algorithm
The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, f (ck)) has the same sign as the previous one (f (ck − 1)), meaning that the end point of the previous step will be retained. Hence:

ck = (½ f (bk) ak − f (ak) bk) / (½ f (bk) − f (ak))

or

ck = (f (bk) ak − ½ f (ak) bk) / (f (bk) − ½ f (ak)),
down-weighting one of the endpoint values to force the next ck to occur on that side of the function.[10] The factor ½ used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates.[11]
The above adjustment to regula falsi is called the Illinois algorithm by some scholars.[10][12] Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.[11]
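The complete C program in the Example Code section below implements exactly this rule. As a compact illustration (with illustrative names a, b, fa, fb that are not taken from the MATLAB code above, and assuming a is the endpoint that has just been retained twice in a row), the modified step is simply the false-position formula with the halved value substituted:

/* Modified (Illinois) step: the retained endpoint's function value
   f(a) enters the secant formula at half weight, pulling the next
   estimate toward the stagnant side. */
double c = (fb * a - 0.5 * fa * b) / (fb - 0.5 * fa);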
Anderson–Björk Algorithm
Suppose that in the k-th iteration the bracketing interval is [ak, bk] and that the functional value of the new calculated estimate ck has the same sign as f (bk). In this case, the new bracketing interval [ak + 1, bk + 1] = [ak, ck] and the left-hand endpoint has been retained. (So far, that’s the same as ordinary Regula Falsi and the Illinois algorithm.)
But, whereas the Illinois algorithm would multiply f (ak) by ½, the Anderson–Björk algorithm multiplies it by m, where m takes one of the two following values:

m = 1 − f (ck) / f (bk)

if that value of m is positive; otherwise, let

m = ½.
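The C example at the end of this article implements only the Illinois variant, so here is a minimal sketch of how the Anderson–Björk scaling could look in C. The variable names (a, b, c, fa, fb, fc) are illustrative assumptions, not taken from the MATLAB code above:

/* One Anderson - Björk update. Assumes the bracket is [a, b] with
   f(a) and f(b) of opposite sign, c is the new false-position
   estimate and fc = f(c). */
if (fc * fb > 0)
{
    /* f(c) and f(b) share a sign: a is retained, b is replaced */
    double m = 1.0 - fc / fb;   /* Anderson - Björk scaling factor  */
    if (m <= 0.0) m = 0.5;      /* fall back to the Illinois factor */
    fa *= m;                    /* down-weight the retained value   */
    b = c; fb = fc;
}
else
{
    /* symmetric case: b is retained, so f(b) is down-weighted */
    double m = 1.0 - fc / fa;
    if (m <= 0.0) m = 0.5;
    fb *= m;
    a = c; fa = fc;
}

Compared with the Illinois code further below, the fixed factor ½ is replaced by the data-dependent factor m.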
For simple roots, Anderson-Björk was the clear winner in Galdino’s numerical tests.[13][14] For multiple roots, no method was much faster than bisection. In fact, the only methods that were as fast as bisection were three new methods introduced by Galdino. But even they were only a little faster than bisection.
Practical Considerations
When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection is not as fast as the other methods (when they are at their best and not running into trouble), it is nevertheless guaranteed to converge at a useful rate, roughly halving the error with each iteration and gaining roughly one decimal digit of accuracy every three iterations.
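As a point of comparison (not part of the article's MATLAB code), a minimal bisection routine in C might look like the following sketch; the names Bisection and eps are illustrative:

#include <math.h>

/* Minimal bisection on [a, b], assuming f(a) and f(b) have opposite
   signs. Each iteration halves the bracket, so roughly three
   iterations gain one decimal digit of accuracy. */
double Bisection(double (*f)(double), double a, double b,
                 double eps, int m)
{
    double fa = f(a);
    double c = a;
    for (int n = 0; n < m; n++)
    {
        c = 0.5 * (a + b);
        double fc = f(c);
        if (fc == 0.0 || fabs(b - a) < eps)
            break;
        if (fa * fc < 0.0)      /* root lies in [a, c] */
        {
            b = c;
        }
        else                    /* root lies in [c, b] */
        {
            a = c;
            fa = fc;
        }
    }
    return c;
}

With the f defined in the Example Code section below, Bisection(f, 0, 1, 5E-15, 100) should converge to the same root as the false-position program, only with more iterations.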
For manual calculation by calculator, one tends to want faster methods, and they usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation to the desired accuracy so rapidly that there is no need to try to save time by using a less reliable method, and every method is less reliable than bisection.
An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant.
Then a program could start with Newton's method and, if Newton's method is not converging, switch to regula falsi, perhaps in one of its improved versions such as the Illinois or Anderson–Björk variants. If even that is not converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate.
When the change in y has become very small, and x is also changing very little, then Newton’s method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton’s method if one wanted the error to be very small and wanted very fast convergence.
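As a rough sketch of that fallback strategy (not from the article's MATLAB code; the name safeguarded_step and its parameters are hypothetical), one step of such a combined method could look like this in C:

/* One step of a combined method on a bracket [a, b] where f(a) and
   f(b) have opposite signs: try Newton from the current point x,
   fall back to regula falsi if the Newton estimate leaves the
   bracket, and fall back to bisection as a last resort. */
double safeguarded_step(double (*f)(double), double (*df)(double),
                        double a, double b, double fa, double fb,
                        double x)
{
    double d = df(x);
    double xn = a;                          /* forces a fallback if Newton fails */
    if (d != 0.0)
        xn = x - f(x) / d;                  /* Newton estimate              */
    if (xn <= a || xn >= b)                 /* Newton step left the bracket */
        xn = (fa * b - fb * a) / (fa - fb); /* regula falsi estimate        */
    if (xn <= a || xn >= b)                 /* still unusable (rounding)    */
        xn = 0.5 * (a + b);                 /* bisection                    */
    return xn;
}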
Example Code
This example program, written in the C programming language, includes the Illinois algorithm. To find the positive number x where cos(x) = x³, the equation is transformed into the root-finding form f (x) = cos(x) − x³ = 0.
#include <stdio.h>
#include <math.h>

double f(double x)
{
    return cos(x) - x*x*x;
}

/* s, t: endpoints of an interval where we search
   e:    half of upper bound for relative error
   m:    maximal number of iterations */
double FalsiMethod(double s, double t, double e, int m)
{
    double r, fr;
    int n, side = 0;

    /* starting values at endpoints of interval */
    double fs = f(s);
    double ft = f(t);

    for (n = 0; n < m; n++)
    {
        r = (fs*t - ft*s) / (fs - ft);
        if (fabs(t - s) < e*fabs(t + s))
            break;
        fr = f(r);

        if (fr * ft > 0)
        {
            /* fr and ft have the same sign: copy r to t */
            t = r; ft = fr;
            if (side == -1)
                fs /= 2;   /* s kept twice in a row: halve f(s) (Illinois) */
            side = -1;
        }
        else if (fs * fr > 0)
        {
            /* fr and fs have the same sign: copy r to s */
            s = r; fs = fr;
            if (side == +1)
                ft /= 2;   /* t kept twice in a row: halve f(t) (Illinois) */
            side = +1;
        }
        else
        {
            /* neither product is positive: f(r) looks like zero, accept r */
            break;
        }
    }
    return r;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(0, 1, 5E-15, 100));
    return 0;
}
After running this code, the final answer is approximately 0.865474033101614.
Further Reading
- Richard L. Burden, J. Douglas Faires (2000). Numerical Analysis, 7th ed. Brooks/Cole. ISBN 0-534-38216-9.
- L.E. Sigler (2002). Fibonacci’s Liber Abaci, Leonardo Pisano’s Book of Calculation. Springer-Verlag, New York. ISBN 0-387-40737-5.