Golden Section Search Optimization (Finding Min. and Max.)

by admin on April 26, 2019

This code uses the Golden Section Search method to find the optimal value of a given function within a predefined interval.

Code Outputs:

  • Optimal Value Location
  • Optimal Value itself
  • The Accuracy of Result
  • Number of Iterations
  • Chart for the Position of the Optimal Value on the Given Function
  • Error Chart
  • Chart of the Optimal Value Convergence
  • Printed Values for the Optimal Value and Position, Accuracy, and Number of Iterations

Sample output: The Optimal Value is at: 1.4277, is: 1.7757, No. Iterations: 19, accuracy: 0.00081713

Required Inputs:

  • The function to be optimized
  • The interval within which you expect the optimal value to lie (make it large if you don’t know it)
  • Required accuracy of the result
  • Maximum number of iterations (the code stops itself early once the required accuracy is reached)

About the Method:

The golden-section search is a technique for finding the extremum (minimum or maximum) of a strictly unimodal function by successively narrowing the range of values inside which the extremum is known to exist. The technique derives its name from the fact that the algorithm maintains the function values for triples of points whose distances form a golden ratio. The algorithm is the limit of Fibonacci search for a large number of function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)).

Basic Idea:

The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum, three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated.

The diagram above illustrates a single step in the technique for finding a minimum. The functional values of f(x) are on the vertical axis, and the horizontal axis is the x parameter. The value of f(x) has already been evaluated at the three points x1, x2, and x3. Since f2 is smaller than either f1 or f3, it is clear that a minimum lies inside the interval from x1 to x3.

The next step in the minimization process is to ‘probe’ the function by evaluating it at a new value of x, namely x4. It is most efficient to choose x4 somewhere inside the largest interval, i.e. between x2 and x3. From the diagram, it is clear that if the function yields f4a, then a minimum lies between x1 and x4, and the new triplet of points will be x1, x2, and x4. However, if the function yields the value f4b, then a minimum lies between x2 and x3, and the new triplet of points will be x2, x4, and x3. Thus, in either case, we can construct a new narrower search interval that is guaranteed to contain the function’s minimum.
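
To make the update rule concrete, here is a single probe step in Python (an illustrative sketch, not the product code; the example function and the starting triplet are assumptions, and the probe choice x4 = x1 + (x3 – x2) anticipates the next section):

    def f(x):
        return (x - 2.0) ** 2      # example unimodal function, minimum at x = 2

    x1, x2, x3 = 0.0, 1.0, 3.5     # bracket: f(x2) is below f(x1) and f(x3)
    x4 = x1 + (x3 - x2)            # probe the larger sub-interval: x4 = 2.5

    if f(x4) > f(x2):              # the "f4a" case: a minimum lies in [x1, x4]
        x1, x2, x3 = x1, x2, x4
    else:                          # the "f4b" case: a minimum lies in [x2, x3]
        x1, x2, x3 = x2, x4, x3

    print(x1, x2, x3)              # narrower bracket: 1.0 2.5 3.5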

Probe Point Selection:

From the diagram above, it is seen that the new search interval will be either between x1 and x4 with a length of a + c, or between x2 and x3 with a length of b. The golden-section search requires that these intervals be equal. If they are not, a run of ‘bad luck’ could lead to the wider interval being used many times, thus slowing down the rate of convergence. To ensure that b = a + c, the algorithm should choose x4 = x1 + (x3 – x2).

However, there still remains the question of where x2 should be placed in relation to x1 and x3. The golden-section search chooses the spacing between these points in such a way that these points have the same proportion of spacing as the subsequent triple x1, x2, x4 or x2, x4, x3. By maintaining the same proportion of spacing throughout the algorithm, we avoid a situation in which x2 is very close to x1 or x3 and guarantee that the interval width shrinks by the same constant proportion in each step.

Mathematically, to ensure that the spacing after evaluating f(x4) is proportional to the spacing prior to that evaluation, if f(x4) is f4a and our new triplet of points is x1, x2, and x4, then we want

c/a = a/b

However, if f(x4) is f4b and our new triplet of points is x2, x4, and x3, then we want

c/(b – c) = a/b

Eliminating c from these two simultaneous equations yields

(b/a)^2 = b/a + 1

or

b/a = φ

where φ is the golden ratio:

φ = (1 + sqrt(5))/2 = 1.6180339887…

The appearance of the golden ratio in the proportional spacing of the evaluation points is how this search algorithm gets its name.
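
As a quick numeric check of the derivation (plain Python, using only the formulas above):

    import math

    phi = (1 + math.sqrt(5)) / 2   # the golden ratio, ~1.6180339887
    print(phi ** 2, phi + 1)       # both ~2.618, so b/a = phi solves (b/a)^2 = b/a + 1
    print(1 / phi)                 # ~0.618: the factor by which the bracket
                                   # width shrinks on every iteration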

Termination Condition:

Because smooth functions are flat (their first derivative is close to zero) near a minimum, attention must be paid not to expect too great an accuracy in locating the minimum. The termination condition provided in the book Numerical Recipes in C is based on testing the gaps among x1, x2, x3 and x4, terminating when within the relative accuracy bounds

|x3 – x1| < τ (|x2| + |x4|)

where τ is a tolerance parameter of the algorithm, and |x| is the absolute value of x. The check is based on the bracket size relative to its central value, because that relative error in x is approximately proportional to the squared absolute error in f(x) in typical cases. For that same reason, the Numerical Recipes text recommends τ = sqrt(ε), where ε is the required absolute precision of f(x).
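
Expressed as code, the stopping test might look like the following Python sketch; the function name and the default value of ε are assumptions for illustration:

    import math

    def should_stop(x1, x2, x3, x4, eps=1e-12):
        # Stop when the bracket |x3 - x1| is small relative to the
        # magnitudes of the interior points, with tau = sqrt(eps).
        tau = math.sqrt(eps)
        return abs(x3 - x1) < tau * (abs(x2) + abs(x4))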

Algorithm

Iterative algorithm

  • Let [a, b] be the interval of the current bracket; f(a) and f(b) will already have been computed earlier. φ = (1 + sqrt(5))/2.
  • Let c = b – (b – a)/φ and d = a + (b – a)/φ. If f(c) and f(d) are not available, compute them.
  • If f(c) < f(d) (this finds a minimum; to find a maximum, reverse the inequality), then move the data: (b, f(b)) ← (d, f(d)) and (d, f(d)) ← (c, f(c)), and update c = b – (b – a)/φ and f(c);
  • otherwise, move the data: (a, f(a)) ← (c, f(c)) and (c, f(c)) ← (d, f(d)), and update d = a + (b – a)/φ and f(d).
  • At the end of the iteration, the points [a, c, d, b] bracket the minimum point; a runnable sketch follows this list.
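
Putting the steps together, here is a minimal runnable sketch in Python. It follows the iteration above, but the function name, the tolerance handling, and the find_max flag are choices of this sketch rather than the interface of the product code sold on this page:

    import math

    def golden_section_search(f, a, b, tol=1e-5, max_iter=100, find_max=False):
        # Search [a, b] for the extremum of a strictly unimodal f.
        # Returns (location, value, final bracket width, iterations used).
        phi = (1 + math.sqrt(5)) / 2
        sign = -1.0 if find_max else 1.0       # maximizing f = minimizing -f
        c = b - (b - a) / phi
        d = a + (b - a) / phi
        fc, fd = sign * f(c), sign * f(d)

        iterations = 0
        for iterations in range(1, max_iter + 1):
            if fc < fd:
                # Minimum lies in [a, d]: reuse (c, fc) as the new (d, fd).
                b, d, fd = d, c, fc
                c = b - (b - a) / phi
                fc = sign * f(c)
            else:
                # Minimum lies in [c, b]: reuse (d, fd) as the new (c, fc).
                a, c, fc = c, d, fd
                d = a + (b - a) / phi
                fd = sign * f(d)
            if abs(b - a) < tol:               # self-stopping once accurate enough
                break

        x_opt = (a + b) / 2
        return x_opt, f(x_opt), abs(b - a), iterations

    # Example: minimize (x - 2)^2 on [0, 5].
    x_opt, f_opt, width, n = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
    print(x_opt, f_opt, width, n)              # x_opt is close to 2.0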

References:

[1] Kiefer, J. (1953), ‘Sequential minimax search for a maximum’, Proceedings of the American Mathematical Society 4 (3): 502–506, doi:10.2307/2032161, JSTOR 2032161, MR 0055639

[2] Avriel, Mordecai; Wilde, Douglass J. (1966), ‘Optimality proof for the symmetric Fibonacci search technique’, Fibonacci Quarterly 4: 265–269, MR 0208812

[3] Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), ‘Section 10.2. Golden Section Search in One Dimension’, Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8


Release Information

  • Price: $4.99
  • Released: April 26, 2019
  • Last Updated: May 29, 2019
