
Eigenvalue sensitivity example

In this example, we construct a matrix whose eigenvalues are moderately sensitive to perturbations and then analyze that sensitivity. We begin with the statement

   B = <3 0 7; 0 2 0; 0 0 1>
which produces
   B     =

       3.    0.    7.
       0.    2.    0.
       0.    0.    1.

Obviously, the eigenvalues of B are 1, 2, and 3. Moreover, since B is not symmetric, these eigenvalues are slightly sensitive to perturbation. (The value B(1,3) = 7 was chosen so that the elements of the matrix A below are less than 1000.)
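For readers following along in a modern environment, the claim is easy to check; the following NumPy sketch (not part of the original MATLAB session) confirms that the eigenvalues of a triangular matrix are its diagonal entries.

```python
import numpy as np

# B is upper triangular, so its eigenvalues are its diagonal entries.
B = np.array([[3.0, 0.0, 7.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

eigs = np.sort(np.linalg.eigvals(B).real)
print(eigs)   # the diagonal entries 1, 2, 3
```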

We now generate a similarity transformation to disguise the eigenvalues and make them more sensitive.

   L = <1 0 0; 2 1 0; -3 4 1>, M = L\L'

   L     =

       1.    0.    0.
       2.    1.    0.
      -3.    4.    1.

   M     =

       1.0000    2.0000   -3.0000
      -2.0000   -3.0000   10.0000
      11.0000   18.0000  -48.0000
Because L is unit lower triangular, det(L) = det(L') = 1, so the matrix M = L\L' has determinant equal to 1; M is also moderately badly conditioned. The similarity transformation is
   A = M*B/M

   A     =

     -64.0000   82.0000   21.0000
     144.0000 -178.0000  -46.0000
    -771.0000  962.0000  248.0000
Because det(M) = 1, the inverse of M also has integer elements, and hence the elements of A would be exact integers if there were no roundoff. So,
   A = round(A)

   A     =

     -64.   82.   21.
     144. -178.  -46.
    -771.  962.  248.
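The whole construction can be reproduced in a few lines of modern NumPy (a sketch, not the original session; `np.linalg.solve(L, L.T)` plays the role of `L\L'` and `M @ B @ np.linalg.inv(M)` the role of `M*B/M`):

```python
import numpy as np

L = np.array([[ 1.0, 0.0, 0.0],
              [ 2.0, 1.0, 0.0],
              [-3.0, 4.0, 1.0]])
B = np.array([[ 3.0, 0.0, 7.0],
              [ 0.0, 2.0, 0.0],
              [ 0.0, 0.0, 1.0]])

M = np.linalg.solve(L, L.T)      # L\L': integer entries, det(M) = 1
A = M @ B @ np.linalg.inv(M)     # M*B/M: similar to B
A = np.round(A)                  # entries are exact integers up to roundoff
print(A)
```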

This, then, is our test matrix. We can now forget how it was generated and analyze its eigenvalues.

   <X,D> = eig(A)

   D     =

       3.0000    0.0000    0.0000
       0.0000    1.0000    0.0000
       0.0000    0.0000    2.0000

   X     =

       -.0891    3.4903   41.8091
        .1782   -9.1284  -62.7136
       -.9800   46.4473  376.2818
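(For comparison, the eigenvalue computation can be repeated with modern LAPACK-based tools; the sketch below is not part of the original session, and LAPACK may order and scale the eigenvectors differently from the EISPACK routines used here.)

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

d, X = np.linalg.eig(A)   # d: eigenvalues, X: eigenvectors as columns
print(np.sort(d.real))    # close to 1, 2, 3, but contaminated by roundoff
```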
Since A is similar to B, its eigenvalues are also 1, 2 and 3. They happen to be computed in another order by the EISPACK subroutines. The fact that the columns of X, which are the eigenvectors, are so far from being orthonormal is our first indication that the eigenvalues are sensitive. To see this sensitivity, we display more figures of the computed eigenvalues.
   long, diag(D)

   ANS   =

      2.999999999973599
      1.000000000015625
      2.000000000011505
We see that, on this computer, the last five significant figures are contaminated by roundoff error. A somewhat superficial explanation of this is provided by
   short,  cond(X)

   ANS   =

      3.2216e+05
The condition number of X gives an upper bound for the relative error in the computed eigenvalues. However, this condition number is affected by scaling.
   X = X/diag(X(3,:)),  cond(X)

   X     =

        .0909     .0751     .1111
       -.1818    -.1965    -.1667
       1.0000    1.0000    1.0000

   ANS   =

      1.7692e+03

Rescaling the eigenvectors so that their last components are all equal to one has two consequences. The condition of X is decreased by over two orders of magnitude. (This is about the minimum condition that can be obtained by such diagonal scaling.) Moreover, it is now apparent that the three eigenvectors are nearly parallel.
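The effect of the diagonal rescaling is easy to reproduce. In the NumPy sketch below (not the original session), LAPACK returns unit-norm eigenvector columns, so the first condition number need not match the EISPACK figure; but once each column is rescaled so that its last component is one, the result no longer depends on the original scaling and should come out near the session's 1.7692e+03.

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

d, X = np.linalg.eig(A)
print(np.linalg.cond(X))   # depends on LAPACK's unit-norm column scaling

X = X / X[2, :]            # rescale each column so its last component is 1
print(np.linalg.cond(X))   # about 1.77e3, independent of the original scaling
```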

More detailed information on the sensitivity of the individual eigenvalues involves the left eigenvectors.

   Y = inv(X'),  Y'*A*X

   Y     =

    -511.5000  259.5000  252.0000
     616.0000 -346.0000 -270.0000
     159.5000  -86.5000  -72.0000

   ANS   =

       3.0000     .0000     .0000
        .0000    1.0000     .0000
        .0000     .0000    2.0000
We are now in a position to compute the sensitivities of the individual eigenvalues.
   for j = 1:3, c(j) = norm(Y(:,j))*norm(X(:,j)); end,  C

   C     =

     833.1092
     450.7228
     383.7564
These three numbers are the reciprocals of the cosines of the angles between the left and right eigenvectors. It can be shown that perturbation of the elements of A can result in a perturbation of the j-th eigenvalue which is c(j) times as large. In this example, the first eigenvalue has the largest sensitivity.
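The sensitivities can be reproduced as follows (a NumPy sketch, not the original session; the eigenvalues may come out in a different order than in EISPACK, so the sensitivities are sorted before printing):

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

d, X = np.linalg.eig(A)
X = X / X[2, :]            # rescale as in the session
Y = np.linalg.inv(X.T)     # left eigenvectors: Y.T @ X = I

# c(j) = ||y_j|| * ||x_j|| = 1/cos(angle between left and right eigenvectors)
c = np.linalg.norm(Y, axis=0) * np.linalg.norm(X, axis=0)
print(np.sort(c))          # about 383.8, 450.7, 833.1, as in the session
```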

We now proceed to show that A is close to a matrix with a double eigenvalue. The direction of the required perturbation is given by

   E = -1.e-6*Y(:,1)*X(:,1)'

   E     =

      1.0e-03 *

        .0465    -.0930     .5115
       -.0560     .1120    -.6160
       -.0145     .0290    -.1595
With some trial and error, which we do not show, we bracket the point where two eigenvalues of a perturbed A coalesce and then become complex.
   eig(A + .4*E),  eig(A + .5*E)

   ANS   =

       1.1500
       2.5996
       2.2504

   ANS   =

      2.4067 +  .1753*i
      2.4067 -  .1753*i
      1.1866 + 0.0000*i
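The bracketing step can be sketched in NumPy as well (not the original session; the column belonging to the eigenvalue 3 is located explicitly, since LAPACK's ordering may differ from EISPACK's):

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

d, X = np.linalg.eig(A)
X = X / X[2, :]
Y = np.linalg.inv(X.T)

j = np.argmin(np.abs(d - 3.0))           # column of the most sensitive eigenvalue
E = -1e-6 * np.outer(Y[:, j], X[:, j])   # perturbation direction

d4 = np.linalg.eigvals(A + 0.4 * E)      # still three real eigenvalues
d5 = np.linalg.eigvals(A + 0.5 * E)      # a complex conjugate pair has appeared
print(np.max(np.abs(d4.imag)), np.max(np.abs(d5.imag)))
```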
Now, a bisecting search, driven by the imaginary part of one of the eigenvalues, finds the point where two eigenvalues are nearly equal.
   r = .4;  s = .5;

   while s-r > 1.e-14, t = (r+s)/2; d = eig(A+t*E); ...
     if imag(d(1))=0, r = t; else, s = t;

   long,  T

   T     =

        .450380734134507
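The search translates directly into modern syntax (a NumPy sketch, not the original session; the classic single-line `while` and relational `=` become an ordinary loop with `==`):

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

d, X = np.linalg.eig(A)
X = X / X[2, :]
Y = np.linalg.inv(X.T)
j = np.argmin(np.abs(d - 3.0))
E = -1e-6 * np.outer(Y[:, j], X[:, j])

r, s = 0.4, 0.5
while s - r > 1e-14:
    t = (r + s) / 2
    dt = np.linalg.eigvals(A + t * E)
    if np.max(np.abs(dt.imag)) == 0:   # still real: move the left endpoint up
        r = t
    else:                              # complex pair: move the right endpoint down
        s = t
print(t)                               # about 0.45038, as in the session
```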

Finally, we display the perturbed matrix, which is obviously close to the original, and its pair of nearly equal eigenvalues. (We have dropped a few digits from the long output.)

   A+t*E,  eig(A+t*E)

   ANS   =

    -63.999979057   81.999958114   21.000230369
    143.999974778 -177.999949557  -46.000277434
   -771.000006530  962.000013061  247.999928164

   ANS   =

      2.415741150
      2.415740621
      1.168517777

The first two eigenvectors of A + t*E are almost indistinguishable, indicating that the perturbed matrix is almost defective.

   <X,D> = eig(A+t*E);  X = X/diag(X(3,:))

   X     =

       .096019578     .096019586    .071608466
      -.178329614    -.178329608   -.199190520
      1.000000000    1.000000000   1.000000000

   short,  cond(X)

   ANS   =

      3.3997e+09
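The near-defectiveness is reproducible too. This sketch (not the original session) reuses the transcript's value of t and assumes the NumPy-computed E is close enough to the session's that this t still lies very near the coalescence point:

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

d, X = np.linalg.eig(A)
X = X / X[2, :]
Y = np.linalg.inv(X.T)
j = np.argmin(np.abs(d - 3.0))
E = -1e-6 * np.outer(Y[:, j], X[:, j])

t = 0.450380734134507            # value found by the bisecting search
dp, Xp = np.linalg.eig(A + t * E)
Xp = Xp / Xp[2, :]               # rescale: last component of each column = 1
print(np.linalg.cond(Xp))        # huge: the eigenvector matrix is nearly singular
```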

