Power Iteration Method For Finding Eigenvalues and Eigenvectors

by admin on June 18, 2019

Power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix $A$, the algorithm produces a number $\lambda$, the greatest (in absolute value) eigenvalue of $A$, and a nonzero vector $v$, a corresponding eigenvector of $\lambda$, such that $Av = \lambda v$. The algorithm is also known as the von Mises iteration.[1]

Power iteration is a very simple algorithm, but it may converge slowly. It does not compute a matrix decomposition, and hence it can be used when $A$ is a very large sparse matrix.

Example of Using This Code

Input

tol = 1e-6;        % Tolerance
m = 1e6;           % Maximum number of iterations

A = [1  2  0; 
    -2  1  2; 
     1  3  1];     % Matrix whose dominant eigenvalue and eigenvector we want
[n,n2] = size(A);  % Dimensions of the matrix
x = [1 1 1];       % Initial guess for the eigenvector
y = zeros(1,n);    % Preallocated work vector

Output


The eigenvalue after 16 iterations is:  3.00000139 
The corresponding eigenvector is: 
                                  0.50000006 
                                  0.50000006 
                                  1.00000000
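
The listing above shows only the inputs; the iteration loop itself belongs to the accompanying code and is not reproduced here. As an illustration only, here is a minimal Python sketch of one way such a loop could work, assuming scaling by the component of largest magnitude (so the eigenvector is reported with its largest entry equal to 1, as in the output above) and a stopping test on successive eigenvalue estimates. The function name and stopping rule are assumptions, not the product's actual implementation.

import numpy as np

def power_iteration_scaled(A, tol=1e-6, max_iter=10**6):
    # Start from the same initial guess as the MATLAB driver above.
    x = np.ones(A.shape[0])
    lam_old = 0.0
    for k in range(1, max_iter + 1):
        y = A @ x
        lam = y[np.argmax(np.abs(y))]  # scaling factor: component of largest magnitude
        x = y / lam                    # rescale so the largest entry is 1
        if abs(lam - lam_old) < tol:   # assumed stopping rule
            return lam, x, k
        lam_old = lam
    return lam, x, max_iter

A = np.array([[1, 2, 0], [-2, 1, 2], [1, 3, 1]], dtype=float)
lam, v, k = power_iteration_scaled(A)
print(f"The eigenvalue after {k} iterations is: {lam:.8f}")
print("The corresponding eigenvector is:", v)   # approximately [0.5, 0.5, 1.0]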

Contents

  • The Method
  • Analysis
  • Applications
  • References

The Method

The power iteration algorithm starts with a vector $b_0$, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation

$$ b_{k+1} = \frac{A b_k}{\| A b_k \|}. $$
So, at every iteration, the vector $b_k$ is multiplied by the matrix $A$ and normalized. If we assume that $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues, and that the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $(b_k)$ converges to an eigenvector associated with the dominant eigenvalue. Without these two assumptions, the sequence $(b_k)$ does not necessarily converge. In this sequence,

$$ b_k = e^{i \phi_k} \frac{c_1}{|c_1|} v_1 + r_k, $$

where $v_1$ is an eigenvector associated with the dominant eigenvalue, and $\| r_k \| \to 0$. The presence of the term $e^{i \phi_k}$ implies that $(b_k)$ does not converge unless $e^{i \phi_k} = 1$. Under the two assumptions listed above, the sequence $(\mu_k)$ defined by

$$ \mu_k = \frac{b_k^{*} A b_k}{b_k^{*} b_k} $$
converges to the dominant eigenvalue. One may compute this with the following algorithm (shown in Python with NumPy):

#!/usr/bin/python
import numpy as np

def power_iteration(A, num_simulations):
    # Ideally choose a random vector to decrease the chance
    # that our vector is orthogonal to the dominant eigenvector.
    b_k = np.random.rand(A.shape[1])

    for _ in range(num_simulations):
        # Calculate the matrix-by-vector product A b_k.
        b_k1 = np.dot(A, b_k)

        # Calculate the norm.
        b_k1_norm = np.linalg.norm(b_k1)

        # Re-normalize the vector.
        b_k = b_k1 / b_k1_norm

    return b_k

power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 10)

The vector $b_k$ converges to (a multiple of) an associated eigenvector. Ideally, one should use the Rayleigh quotient in order to get the associated eigenvalue. This algorithm is the one used to calculate such things as the Google PageRank. The method can also be used to calculate the spectral radius (the largest eigenvalue of a matrix in absolute value) by computing the Rayleigh quotient

$$ \frac{b_k^{\top} A b_k}{b_k^{\top} b_k}. $$
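For example (an illustrative sketch, not part of the original listing), the eigenvalue estimate can be recovered by applying the Rayleigh quotient to the vector returned by power_iteration above:

import numpy as np

def rayleigh_quotient(A, b):
    # b* A b / (b* b): the eigenvalue estimate for the approximate eigenvector b
    return (b.conj() @ A @ b) / (b.conj() @ b)

A = np.array([[0.5, 0.5], [0.2, 0.8]])
b = power_iteration(A, 100)      # eigenvector estimate from the listing above
print(rayleigh_quotient(A, b))   # close to 1.0, the dominant eigenvalue (A is row-stochastic)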
Analysis

Let $A$ be decomposed into its Jordan canonical form: $A = V J V^{-1}$, where the first column of $V$ is an eigenvector of $A$ corresponding to the dominant eigenvalue $\lambda_1$. Since the dominant eigenvalue of $A$ is unique, the first Jordan block of $J$ is the $1 \times 1$ matrix $[\lambda_1]$, where $\lambda_1$ is the largest eigenvalue of $A$ in magnitude. The starting vector $b_0$ can be written as a linear combination of the columns of $V$:

$$ b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n. $$
By assumption, $b_0$ has a nonzero component in the direction of the dominant eigenvalue, so $c_1 \neq 0$. The computationally useful recurrence relation for $b_{k+1}$ can be rewritten as

$$ b_{k+1} = \frac{A b_k}{\| A b_k \|} = \frac{A^{k+1} b_0}{\| A^{k+1} b_0 \|}, $$
where the expression $\frac{A^{k+1} b_0}{\| A^{k+1} b_0 \|}$ is more amenable to the following analysis. Writing $e_i$ for the $i$-th standard basis vector (so that $V^{-1} v_i = e_i$),

$$ b_k = \frac{A^k b_0}{\| A^k b_0 \|} = \frac{V J^k V^{-1} b_0}{\| V J^k V^{-1} b_0 \|} = \left( \frac{\lambda_1}{|\lambda_1|} \right)^k \frac{c_1}{|c_1|} \cdot \frac{v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right)}{\left\| v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right) \right\|}. $$

The expression above simplifies as $k \to \infty$:

$$ \left( \frac{1}{\lambda_1} J \right)^k \to \begin{bmatrix} 1 & & & \\ & 0 & & \\ & & \ddots & \\ & & & 0 \end{bmatrix}. $$
The limit follows from the fact that every eigenvalue of $\frac{1}{\lambda_1} J_i$ (the $i$-th Jordan block of $\frac{1}{\lambda_1} J$, for $i \geq 2$) is less than 1 in magnitude, so

$$ \left( \frac{1}{\lambda_1} J_i \right)^k \to 0 \quad \text{as } k \to \infty. $$
It follows that

$$ \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right) \to 0 \quad \text{as } k \to \infty. $$
Using this fact, $b_k$ can be written in a form that emphasizes its relationship with $v_1$ when $k$ is large:

$$ b_k = e^{i \phi_k} \frac{c_1}{|c_1|} v_1 + r_k, $$

where $e^{i \phi_k} = \left( \lambda_1 / |\lambda_1| \right)^k$ and $\| r_k \| \to 0$ as $k \to \infty$.
The sequence $(b_k)$ is bounded, so it contains a convergent subsequence. Note that the eigenvector corresponding to the dominant eigenvalue is only unique up to a scalar, so although the sequence $(b_k)$ may not converge, $b_k$ is nearly an eigenvector of $A$ for large $k$. Alternatively, if $A$ is diagonalizable, then the following proof yields the same result.

Let $\lambda_1, \lambda_2, \ldots, \lambda_m$ be the $m$ eigenvalues (counted with multiplicity) of $A$ and let $v_1, v_2, \ldots, v_m$ be the corresponding eigenvectors. Suppose that $\lambda_1$ is the dominant eigenvalue, so that $|\lambda_1| > |\lambda_j|$ for $j > 1$.

The initial vector $b_0$ can be written

$$ b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_m v_m. $$
If $b_0$ is chosen randomly (with uniform probability), then $c_1 \neq 0$ with probability 1. Now,

$$ A^k b_0 = c_1 \lambda_1^k v_1 + c_2 \lambda_2^k v_2 + \cdots + c_m \lambda_m^k v_m = c_1 \lambda_1^k \left( v_1 + \frac{c_2}{c_1} \left( \frac{\lambda_2}{\lambda_1} \right)^k v_2 + \cdots + \frac{c_m}{c_1} \left( \frac{\lambda_m}{\lambda_1} \right)^k v_m \right). $$

The expression within parentheses converges to $v_1$ because $| \lambda_j / \lambda_1 | < 1$ for $j > 1$.
On the other hand,

$$ b_k = \frac{A^k b_0}{\| A^k b_0 \|}. $$
Therefore, $b_k$ converges to (a multiple of) the eigenvector $v_1$. The convergence is geometric, with ratio

$$ \left| \frac{\lambda_2}{\lambda_1} \right|, $$

where $\lambda_2$ denotes the second dominant eigenvalue. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
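
As a quick numerical illustration (a sketch added for this write-up, not part of the original text), the 3-by-3 matrix from the example above has eigenvalues $3$, $i$, and $-i$, so the predicted ratio is $|i| / |3| = 1/3$; averaging the per-step error reduction over several iterations reproduces it:

import numpy as np

# Empirical check of the geometric convergence ratio |lambda_2 / lambda_1| = 1/3.
A = np.array([[1, 2, 0], [-2, 1, 2], [1, 3, 1]], dtype=float)
v1 = np.array([0.5, 0.5, 1.0])
v1 /= np.linalg.norm(v1)                 # dominant eigenvector, unit length

b = np.ones(3) / np.sqrt(3.0)
errs = []
for _ in range(20):
    b = A @ b
    b /= np.linalg.norm(b)
    errs.append(np.linalg.norm(b - v1))  # distance to the dominant eigendirection

# The complex pair i, -i makes single-step ratios oscillate, so average over 10 steps.
print((errs[-1] / errs[-11]) ** 0.1)     # close to 1/3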

Applications

Although the power iteration method approximates only one eigenvalue of a matrix, it remains useful for certain computational problems. For instance, Google uses it to calculate the PageRank of documents in its search engine,[2] and Twitter uses it to show users recommendations of whom to follow.[3] The power iteration method is especially suitable for sparse matrices, such as the web matrix, or as a matrix-free method that does not require storing the coefficient matrix $A$ explicitly, but can instead access a function evaluating matrix-vector products $Ax$; a sketch of this is shown below. For non-symmetric matrices that are well-conditioned, the power iteration method can outperform the more complex Arnoldi iteration. For symmetric matrices, the power iteration method is rarely used, since its convergence speed can be easily increased without sacrificing the small cost per iteration; see, e.g., Lanczos iteration and LOBPCG.
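
As a sketch of the matrix-free setting (the operator, names, and parameters below are illustrative assumptions, not from the article), power iteration only needs a routine that applies $A$ to a vector:

import numpy as np

def power_iteration_matfree(matvec, n, num_iters=50):
    # Power iteration driven by a function matvec(x) = A @ x; the matrix A
    # itself is never stored.
    b = np.random.rand(n)
    for _ in range(num_iters):
        b = matvec(b)
        b /= np.linalg.norm(b)
    return b

# Example operator: A = I + 2 u u^T for a unit vector u, applied without ever
# forming the n-by-n matrix. Its eigenvalues are 3 (eigenvector u) and 1.
n = 10_000
u = np.ones(n) / np.sqrt(n)
matvec = lambda x: x + 2.0 * u * (u @ x)
b = power_iteration_matfree(matvec, n)
print(b @ matvec(b))   # Rayleigh quotient of the unit vector b: close to 3.0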

Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix $A^{-1}$, which makes it converge to the eigenvector of the eigenvalue of $A$ that is smallest in magnitude (see the sketch after this paragraph). Other algorithms look at the whole subspace generated by the vectors $b_k$. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration.
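
A minimal sketch of inverse iteration, under the assumption that each application of $A^{-1}$ is done with a linear solve rather than by forming the inverse (the test matrix is an illustrative choice):

import numpy as np

def inverse_iteration(A, num_iters=50):
    # Power iteration applied to A^{-1}: converges to the eigenvector of the
    # eigenvalue of A that is smallest in magnitude.
    b = np.random.rand(A.shape[0])
    for _ in range(num_iters):
        b = np.linalg.solve(A, b)   # apply A^{-1} via a solve, not an explicit inverse
        b /= np.linalg.norm(b)
    return b

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = inverse_iteration(A)
print(b @ A @ b)   # Rayleigh quotient: (5 - sqrt(5))/2 ≈ 1.382, the smallest eigenvalue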

References

  1. Richard von Mises and H. Pollaczek-Geiringer, Praktische Verfahren der Gleichungsauflösung, ZAMM – Zeitschrift für Angewandte Mathematik und Mechanik 9, 152-164 (1929).
  2. Ilse Ipsen and Rebecca M. Wills, 7th IMACS International Symposium on Iterative Methods in Scientific Computing, Fields Institute, Toronto, Canada, 5–8 May 2005.
  3. Pankaj Gupta, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang, and Reza Bosagh Zadeh, "WTF: The Who-to-Follow System at Twitter," Proceedings of the 22nd International Conference on World Wide Web (2013).
