Eigenvalue and eigenvector computation on extremely ill-conditioned matrices

42 views (last 30 days)
Hello everyone, I have a question about the eig function for computing eigenvalues and eigenvectors. I have a pair of matrices A and P for which I want to solve the generalized eigenproblem A v = lambda P v. These matrices are 160x160, singular, not symmetric and extremely ill-conditioned (cond(A) = 1e18, cond(P) = 1e28, rcond(A) = rcond(P) = 0), and rank-deficient (rank(A) = 150, rank(P) = 120). Since P is singular, I cannot invert it to reduce the problem to the standard eigenproblem inv(P)*A v = lambda v, so I have to call eig with the generalized problem:
eig(A,P), which automatically chooses the QZ algorithm to compute the eigenvalues.
Unfortunately, roughly 40 of these eigenvalues come out as Inf + 0i, and in some cases A v is not equal to lambda P v, so the problem is not solved accurately. The eigenvectors corresponding to the Inf eigenvalues suggest that a finite eigenvalue could exist in those cases, of the order of 10^15: so why does eig not compute it?
I have tried balancing and normalizing each row of A and P to reduce the condition numbers, but the result does not change.
I have also tried using the pseudoinverse of P to solve a standard problem, eig(pinv(P)*A), and in this case I get no Inf eigenvalues. However, I am not sure whether the eigenvalues computed this way are solutions of my original problem: can I solve it like this? To be consistent I should solve eig(pinv(P)*A, pinv(P)*P), but the second matrix remains singular, so Inf eigenvalues are computed in this case as well.
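Roughly, this is what I am calling (a simplified sketch; A and P here stand for my 160x160 matrices):
% Generalized problem: with two inputs, eig uses the QZ algorithm
[V, D] = eig(A, P);
lambda = diag(D);
% Column-wise residuals of A*v = lambda*P*v (meaningful for the finite eigenvalues only)
res = vecnorm(A*V - P*V*D);
% Pseudoinverse workaround: standard problem on pinv(P)*A
[Vp, Dp] = eig(pinv(P)*A);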
Does anyone have an idea of how to solve this ill-conditioned problem?
Thanks in advance

Answers (2)

Sam Chak on 5 Oct 2023
You may consider employing the balance() function to perform diagonal scaling on the ill-conditioned matrix, thereby improving the conditioning of the eigenvalue problem. If this approach proves ineffective, it may be necessary to resort to specialized algorithms such as the Implicitly Restarted Arnoldi Method (IRAM), which are better suited to ill-conditioned matrices.
Radke's Master's thesis, which presents a MATLAB implementation of IRAM for large-scale eigenvalue problems, is accessible via the following link:
% Ill-conditioned matrix
A = [-1 1e-2 1e-4;
1e+2 -1 1e-2;
1e+4 1e+2 -1];
[Va, Ea] = eig(A)
Va = 3×3
   -0.0002    0.0001   -0.0000
    0.0100    0.0100   -0.0068
    0.9999    0.9999    1.0000
Ea = 3×3
   -2.0000         0         0
         0    1.0000         0
         0         0   -2.0000
cond(Va)
ans = 8.1651e+03
[T, B] = balance(A)
T = 3×3
    0.0020         0         0
         0    0.1250         0
         0         0   16.0000
B = 3×3
   -1.0000    0.6400    0.8192
    1.5625   -1.0000    1.2800
    1.2207    0.7812   -1.0000
[Vb, Eb] = eig(B)
Vb = 3×3
   -0.7102    0.4503   -0.1918
    0.5548    0.7036   -0.6460
    0.4334    0.5497    0.7388
Eb = 3×3
   -2.0000         0         0
         0    1.0000         0
         0         0   -2.0000
cond(Vb)
ans = 1.4547
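If you prefer to stay within MATLAB rather than implement IRAM yourself, a rough sketch of the Arnoldi route uses the built-in eigs function, which runs a Krylov (Arnoldi-type) iteration and accepts the generalized problem directly. The values of k and sigma below are only placeholders, and sigma must be chosen so that A - sigma*P is nonsingular:
% Sketch: request a few generalized eigenvalues of (A, P) near the shift sigma
k     = 20;    % number of eigenvalues to compute (placeholder)
sigma = 1;     % target shift; must make A - sigma*P nonsingular (placeholder)
[V, D] = eigs(sparse(A), sparse(P), k, sigma);
lambda = diag(D)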
7 comments
Torsten on 6 Oct 2023
It's hard to say, but I think you must accept: garbage in, garbage out.
Sam Chak on 6 Oct 2023
Previously, I mentioned the Arnoldi method. The Lanczos algorithm is quite well-known for the computation of eigenvalues. However, it appears that your interest lies not in developing the algorithm from scratch in MATLAB but in utilizing eigenvalue-finding tools from algorithm libraries capable of efficiently handling sparse matrices.
My colleague recommends using SuiteSparse, as it should prove effective for relatively small to moderately sized sparse matrices such as the 160×160 ones you have. You can find more information here:



Christine Tobler on 9 Oct 2023
If the matrix P has rank 120 and its size is 160, you should expect 40 eigenvalues to be Inf: this is how singularities in the second input matrix are represented by the generalized eig.
For the simple case where A and P are both diagonal, each eigenvalue would be just A(i, i) ./ P(i, i). So if a diagonal element of P is zero, the corresponding eigenvalue is going to be Inf. This is usually fine; in many practical problems the Inf eigenvalues can simply be ignored.
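A tiny illustration of this with made-up diagonal matrices (Ad and Pd are hypothetical, just for demonstration):
% The third diagonal entry of Pd is zero, so one generalized eigenvalue is Inf
Ad = diag([1 2 3]);
Pd = diag([2 1 0]);
lambda = eig(Ad, Pd)   % returns 0.5, 2 and Inf (ordering may differ)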
So my question would be, is it really a problem that some of the computed eigenvalues are Inf? That probably depends on what your next steps are going to be with those computed results.
