SVD & EVD of a Symmetric Matrix

I am using the svd() command to compute the decomposition of my symmetric matrix.
The following page
mentions that for a symmetric matrix, the eigenvalue decomposition (EVD) and the singular value decomposition (SVD) are essentially the same. This implies that all vectors in either decomposition should be real.
I have two problems when I use eig(A) and svd(A):
1. At times, a few of the eigenvectors are complex. Why do I get these complex eigenvectors, when theoretically that shouldn't be the case? 2. When using svd(), U'*V is not really an identity matrix. All diagonal entries except the last few are 1, and the sign of a few comes out negative.
Since I am repeating this over randomly generated data, the location of the negative diagonal entries of U'*V varies. The same holds for (1).
Is this something related to numerical instability?
Thanks

Answers (2)

John D'Errico on 26 Nov 2014
Edited: John D'Errico on 27 Nov 2014
Welcome to the wacky world of floating point arithmetic, which is only occasionally related to mathematics, despite the fact that the two often look so similar. The fact is, what is true in mathematics is often not true on your computer when done in floating point arithmetic.
It is difficult to know exactly what problems you have encountered since we are not shown what you tried.
Is your matrix truly symmetric? Or does it just look that way to the few decimal places you have displayed?
Is the matrix numerically singular?
Are there replicate eigenvalues?
Eigenvectors are unique only up to a sign change. This is also true for the SVD, so when you form U'*V, there is no reason to expect ones on the diagonal; -1 is entirely possible. In fact, I would expect it with some frequency. For example...
A = rand(100);
A = A + A';
[U,S,V] = svd(A);
hist(diag(U'*V))
When I try that, I see half the diagonal elements are -1. I am not even remotely surprised.
That U'*V is not EXACTLY diagonal is also no surprise. Floating point arithmetic happens. Note that the off-diagonal entries are zero to within any meaningful tolerance.
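Continuing the example above, you can check both claims at once: the off-diagonal part of U'*V is tiny, and the diagonal entries are +/-1 to within roundoff. (A sketch; it assumes the singular values of this random symmetric A are distinct, which holds with probability 1.)

```matlab
A = rand(100);
A = A + A';                    % exactly symmetric in floating point
[U,S,V] = svd(A);
D = U'*V;
norm(D - diag(diag(D)), inf)   % off-diagonal part: tiny, on the order of 1e-14
max(abs(abs(diag(D)) - 1))     % diagonal entries are +/-1 up to roundoff
```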

9 comments

Nassi on 27 Nov 2014
Thank you for your reply. If U is NxN, then singular values R+1 through N are almost zero (R<N). So yes, we can say that we have repeated singular/eigenvalues.
I understand that U'*V is not exactly diagonal, but I do not understand why I am getting complex eigenvectors. That should not be caused by floating point arithmetic.
John D'Errico on 27 Nov 2014
Edited: John D'Errico on 27 Nov 2014
You have not convinced me that your matrix is symmetric. What do you get from:
norm(A - A',inf)
My gut tells me that you merely THINK it is symmetric, because you created it with a symmetric form. Sigh. Perhaps you still think that theory means something here. For example, let's build a supposedly symmetric matrix:
Q = orth(rand(50));
A = Q'*diag([rand(30,1);eps*ones(20,1)])*Q;
rank(A)
ans =
30
As one would expect, the rank is 30.
[V,D] = eig(A);
isreal(V)
ans =
0
isreal(D)
ans =
0
As you can see, the eigenvalues and eigenvectors are not real.
norm(A - A',inf)
ans =
4.52762827229947e-16
But in fact, A was not symmetric.
So simply using a form to create A that LOOKED like it would create a symmetric matrix did not truly create one! Again, mathematics and floating point arithmetic need not always coincide. You need to know when not to trust a result.
But if we force symmetry on A...
B = (A + A')/2;
[V,D] = eig(B);
isreal(D)
ans =
1
isreal(V)
ans =
1
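To see why the symmetrization works, note that floating point addition is commutative, so B(i,j) and B(j,i) are computed from the exact same operations and come out bit-identical. A quick sketch rebuilding the example above:

```matlab
Q = orth(rand(50));
A = Q'*diag([rand(30,1); eps*ones(20,1)])*Q;  % symmetric only on paper
B = (A + A')/2;                               % force exact symmetry
norm(A - A', inf)   % small but nonzero, on the order of 1e-16
norm(B - B', inf)   % exactly 0
```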
Nassi on 27 Nov 2014
Edited: Nassi on 27 Nov 2014
For one realization it is
norm(A - A',inf) = 4.7523e-14
and it is of a similar order for the others.
Nassi on 27 Nov 2014
Also, although rank(A) = N (A is NxN), implying that it is full rank, the last N-R singular values are almost zero, i.e. on the order of 1e-14 or 1e-15. That makes me think that A is not actually full rank and has only R independent columns. This is supported by the R zero columns of U-V and the unit diagonal entries of U'*V.
Also, in my reply to Youssef Khmou, I explained why identifying the true smallest singular value is critical to the optimization problem I am trying to solve here.
John D'Errico on 27 Nov 2014
Ok, so it is not truly symmetric. You should know that a non-symmetric matrix will, in general, have complex eigenvalues/eigenvectors. So why are you surprised at the result?
As for your question about "numerical instability", call it whatever you want. When you are working with singular matrices and playing around with the smallest parts of these matrices, expect random trash. I don't call that instability. I call it a foolish task to do these computations as you are doing and to expect something other than random trash.
Nassi on 27 Nov 2014
So you mean to say that a matrix is symmetric only if norm(A - A',inf) = 0. Is that true both on paper and in MATLAB?
That would mean we can never expect to see symmetric matrices generated using rand().
John D'Errico on 27 Nov 2014
How can you expect to find the "true" smallest singular value for a numerically singular matrix? ZERO is as close as you can come to that estimate.
If your goal is to find the singular vector that corresponds to the smallest non-zero singular value, then discard those whose singular values are below some reasonable tolerance and pick the one before that. WTP? You seem to be making a big fuss over whether you can do that.
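A minimal sketch of that truncation (the tolerance below is just a reasonable default in the same spirit as MATLAB's rank(); adjust it to your problem's scaling):

```matlab
A = rand(100); A = A + A';          % stand-in for your matrix
[U,S,V] = svd(A);
s = diag(S);
tol = max(size(A)) * eps(max(s));   % drop singular values below this
r = find(s > tol, 1, 'last');       % index of smallest singular value above tol
x = V(:, r);                        % its right singular vector
s(r)                                % the smallest non-negligible singular value
```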
Nassi on 27 Nov 2014
I am doing truncation. Thank you for your input.
John D'Errico on 27 Nov 2014
Edited: John D'Errico on 27 Nov 2014
If you generate a matrix using floating point operations and linear algebra, there is a very good chance that indeed, it will NOT be truly symmetric. And in that case, eig need not yield entirely real eigenvalues and eigenvectors. Sorry, but you need to learn this fact. You CAN symmetrize a matrix, as I showed you.
Regardless, given a rank deficient matrix A, you are looking for the vector x such that x'*A*x is minimized.
Again, A is rank deficient!!!!!!!!!
ANY linear combination of the vectors in the nullspace of A will produce a minimum, ZERO! At least, it will be zero to within a tolerance of floating point trash.
Ok, so maybe you really want to find the unique x such that x'*A*x is greater than zero, yet still minimized. But A is singular! So you can add any linear combination of the nullspace vectors to x. I.e., there is no unique minimum.
You are worrying about the difference between what svd and eig return for a singular matrix here. Of course, since there are replicate zero singular values/eigenvalues at zero, those corresponding vectors are not unique.
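To see that non-uniqueness concretely, here is a small sketch with a deliberately rank-deficient symmetric matrix: any unit vector built from the nullspace basis minimizes x'*A*x, giving essentially zero.

```matlab
Q = orth(rand(10));
A = Q*diag([rand(6,1); zeros(4,1)])*Q';   % rank 6, symmetric on paper
A = (A + A')/2;                           % force exact symmetry
N = null(A);                              % basis for the (numerical) nullspace
x = N * randn(size(N,2), 1);              % any combination of nullspace vectors
x = x / norm(x);
x' * A * x                                % ~ 0, only floating point trash
```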
Perhaps it is time to revisit a basic course in linear algebra. Maybe take a basic course in numerical linear algebra.


Youssef Khmou on 27 Nov 2014
Edited: Youssef Khmou on 27 Nov 2014
Floating point problems occur, as mentioned by @John. You can verify whether your matrix M is symmetric or not:
norm(M-M') % must be 0
Another way to see the floating point effect is via a unitary matrix U. Theory says U' = inv(U), but if you try
norm(U'-inv(U))
you get a value around 1e-14 or 1e-15.

3 comments

Nassi on 27 Nov 2014
When I try norm(U'-inv(U)), I get a value around 1e-15. The same is true for norm(M-M').
Since I am trying to find the vector a that minimizes trace(a'*A*a), where A = U*S*V', I was interested in picking the vector that corresponds to the smallest singular value. But if I do that, I often end up with a negative trace, which is not correct.
When I compute U-V, where each is NxN, the first R columns are zero (R<N); columns R+1 through N of the difference have significant non-zero entries. So I was thinking of picking a = U_R.
But I needed to know whether I can use the argument of 'numerical instability' to justify such a selection.
John D'Errico on 27 Nov 2014
If your matrix is numerically singular, and you use the singular vector with minimum singular value, then you should expect to get numerical trash, which will randomly be negative or positive, near zero.
Theory is not relevant when you are using floating point arithmetic in this case.
Nassi on 27 Nov 2014
Edited: Nassi on 27 Nov 2014
Yes, I agree. That is why I am using U-V to find the location of the smallest non-zero singular value, something similar to SVD truncation. But what puzzles me more are the complex eigenvectors I am getting for my symmetric matrix A. I can't figure out the reason for that.

