# Broyden’s Method (Quasi-Newton Method) for Solving a System of Nonlinear Equations

by admin on June 13, 2019

Broyden’s method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965. Newton’s method for solving f(x) = 0 uses the Jacobian matrix, J, at every iteration. However, computing this Jacobian is a difficult and expensive operation. The idea behind Broyden’s method is to compute the whole Jacobian only at the first iteration and to do rank-one updates at other iterations.

In 1979, Gay proved that when Broyden’s method is applied to a linear system of size n × n, it terminates in at most 2n steps, although, like all quasi-Newton methods, it may fail to converge for nonlinear systems.

A strength of this implementation is that you can place bounds on the variables of the equations.

### Example on Using This Code:

Input

```matlab
f = @(x) [x(1)^2 + x(2)^2 - 4; exp(x(1)) + x(2) - 1]; % Equations we want to solve
x0 = [1;1];                          % Initial conditions
opt = [];                            % Options struct. Fields {tolfun or tolx}
bounds = [0 2.5; -2.5 0];            % Bounds on x
[X, ithist] = broyden(f, x0, opt, bounds)
% The root has x(1) > 0, x(2) < 0. It may be found by selecting another
% initial x, or by using constraints.
```

Output

```matlab
Roots X =
   1.004168738474574
  -1.729637287025508
ithist =
  struct with fields:
        x: [23×2 double]
        f: [23×2 double]
    normf: [23×1 double]
```

### Contents

• Description of the method
• Solving single-variable equation
• Solving a system of nonlinear equations
• Other members of the Broyden class
• References

### Description of the Method

#### Solving Single-Variable Equation

In the secant method, we replace the first derivative f′ at x_n with the finite-difference approximation

f′(x_n) ≈ (f(x_n) − f(x_(n−1))) / (x_n − x_(n−1))

and proceed as in Newton’s method:

x_(n+1) = x_n − f(x_n) / f′(x_n) ≈ x_n − f(x_n) (x_n − x_(n−1)) / (f(x_n) − f(x_(n−1)))

where n is the iteration index.
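To make the one-variable case concrete, here is a minimal secant iteration in Python (the test function x² − 2 and the starting points are illustrative, not taken from the original article):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: replace f'(x_n) with the finite-difference slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # flat secant; cannot divide
            break
        # x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**2 - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```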

#### Solving a System of Nonlinear Equations

Consider a system of k nonlinear equations f(x) = 0, where f is a vector-valued function of the vector x:

f(x) = (f_1(x_1, …, x_k), …, f_k(x_1, …, x_k))

For such problems, Broyden gives a generalization of the one-dimensional Newton’s method, replacing the derivative with the Jacobian J. The Jacobian matrix is determined iteratively, based on the secant equation in the finite-difference approximation:

J_n (x_n − x_(n−1)) ≈ f(x_n) − f(x_(n−1))

where n is the iteration index. For clarity, let us define

Δf_n = f(x_n) − f(x_(n−1)),   Δx_n = x_n − x_(n−1),

so the above may be rewritten as J_n Δx_n ≈ Δf_n. This equation is underdetermined when k is greater than one. Broyden suggests using the current estimate of the Jacobian matrix J_(n−1) and improving upon it by taking the solution to the secant equation that is a minimal modification of J_(n−1):

J_n = J_(n−1) + ((Δf_n − J_(n−1) Δx_n) / ‖Δx_n‖²) Δx_nᵀ

This minimizes the Frobenius norm ‖J_n − J_(n−1)‖_F. We may then proceed in the Newton direction:

x_(n+1) = x_n − J_n⁻¹ f(x_n)

Broyden also suggested using the Sherman–Morrison formula to update the inverse of the Jacobian matrix directly:

J_n⁻¹ = J_(n−1)⁻¹ + ((Δx_n − J_(n−1)⁻¹ Δf_n) / (Δx_nᵀ J_(n−1)⁻¹ Δf_n)) Δx_nᵀ J_(n−1)⁻¹

This first method is commonly known as the “good Broyden’s method”.
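As an illustrative sketch (not the MATLAB code sold here), the “good” Broyden iteration with a one-time finite-difference Jacobian can be written in Python/NumPy. The helper names, tolerances, and starting point below are assumptions chosen for the example:

```python
import numpy as np

def fd_jacobian(f, x, h=1e-7):
    """Forward-difference Jacobian, computed only once at the start."""
    fx = f(x)
    J = np.empty((len(fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

def broyden_good(f, x0, tol=1e-10, max_iter=100):
    """'Good' Broyden: keep B ~= J^{-1} and update it with the
    Sherman-Morrison formula instead of recomputing the Jacobian."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    B = np.linalg.inv(fd_jacobian(f, x))   # Jacobian only at iteration 0
    for _ in range(max_iter):
        dx = -B @ fx                        # Newton-like step
        x = x + dx
        fx_new = f(x)
        if np.linalg.norm(fx_new) < tol:
            return x
        df = fx_new - fx
        Bdf = B @ df
        # Rank-one update: B <- B + (dx - B df) dx^T B / (dx^T B df)
        B += np.outer(dx - Bdf, dx @ B) / (dx @ Bdf)
        fx = fx_new
    return x

# The example system from above, started near the desired root
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0,
                        np.exp(x[0]) + x[1] - 1.0])
root = broyden_good(f, [1.0, -1.0])
```

Started from [1, −1] this converges to the root quoted in the example output; the full product additionally handles options and bound constraints, which this sketch omits.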

A similar technique can be derived by using a slightly different modification of J_(n−1). This yields the second, so-called “bad Broyden’s method”:

J_n⁻¹ = J_(n−1)⁻¹ + ((Δx_n − J_(n−1)⁻¹ Δf_n) / ‖Δf_n‖²) Δf_nᵀ

This minimizes a different Frobenius norm, ‖J_n⁻¹ − J_(n−1)⁻¹‖_F. Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian; it is symmetric, which adds further constraints to its update.
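The only change from the “good” method is which rank-one correction is applied to the inverse. A small NumPy sketch (function name and sample data are hypothetical) shows the “bad” update and that it still satisfies the inverse secant condition B_n Δf_n = Δx_n:

```python
import numpy as np

def bad_broyden_update(B, dx, df):
    """'Bad' Broyden: rank-one update of the inverse Jacobian that
    minimizes ||B_n - B_{n-1}||_F subject to B_n @ df == dx."""
    return B + np.outer(dx - B @ df, df) / (df @ df)

rng = np.random.default_rng(0)
B = np.eye(3)                  # some current inverse-Jacobian estimate
dx = rng.standard_normal(3)    # step dx_n
df = rng.standard_normal(3)    # change in residual df_n
B_new = bad_broyden_update(B, dx, df)
# B_new @ df recovers dx exactly (the secant condition)
```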

### Other Members of the Broyden Class

Broyden has defined not only two methods, but a whole class of methods. Other members of this class have been added by other authors.

• The Davidon–Fletcher–Powell update is the only member of this class that was published before the two methods defined by Broyden.
• Schubert’s or sparse Broyden algorithm – a modification for sparse Jacobian matrices.
• Klement (2014) – uses fewer iterations to solve many equation systems.


#### Release Information

• Price: \$6.99
• Released: June 13, 2019
• Last Updated: June 13, 2019