I made a function that calculates autocorrelation using partial matrices. When I tried to verify it by comparing against the autocorrelation computed from the whole matrix, I first thought I had made a mistake, but then I found out that MATLAB gives different results depending on how the matrix is split into parts. However, there should be no difference between the calculations!
Now, do not give me mere floating-point arithmetic as the explanation: it does NOT explain this difference.
Here is a very simple test that shows the problem:
a1 = randn(100,1); a2 = randn(100,1);
A = [a1 a2];
R0 = A'*A;                             % whole matrix
R1 = [a1 a2]'*[a1 a2];                 % same product, matrix built inline
R2 = [a1'*a1 a1'*a2; a2'*a1 a2'*a2];   % same product, assembled blockwise
[R0-R1 R2-R1]
ans =
   1.0e-13 *
    0.1421   -0.0089    0.2842   -0.0089
   -0.0089   -0.2842   -0.0089         0
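For reference, here is the same experiment sketched in NumPy (my translation of the MATLAB above, not the original; the seed, and the use of `np.block` to assemble the blockwise version, are my choices). All three results agree to roughly machine precision, but that does not mean they are bit-for-bit identical:

```python
import numpy as np

rng = np.random.default_rng(0)
a1 = rng.standard_normal((100, 1))
a2 = rng.standard_normal((100, 1))
A = np.hstack([a1, a2])

R0 = A.T @ A                                   # whole matrix
R1 = np.hstack([a1, a2]).T @ np.hstack([a1, a2])  # matrix built inline
R2 = np.block([[a1.T @ a1, a1.T @ a2],         # assembled blockwise
               [a2.T @ a1, a2.T @ a2]])

# Maximum elementwise discrepancies: on the order of machine epsilon
# times the magnitude of the entries, just like the MATLAB output above.
print(np.abs(R0 - R1).max(), np.abs(R2 - R1).max())
```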
If you know anything about matrix algebra, you should know that R0 is internally calculated in EXACTLY the same way as R1 and R2 above. So floating-point arithmetic as such does not explain the difference.
So obviously MATLAB must be rounding the intermediate results differently in these cases, but I cannot understand WHY they are rounded differently. There is no sensible reason to do it differently in these cases. MathWorks, why are you doing this? It just makes all verification more difficult when the results are not what they are supposed to be.
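To be concrete about what "rounded differently" can mean: even when the mathematical formula is identical, the *order* in which the partial products of a dot product are accumulated changes the last bits of the result, because floating-point addition is not associative. A minimal sketch (plain Python, with `dot_seq` and `dot_blocked` being my own illustrative names):

```python
# The same dot product, accumulated in two different orders.
x = [0.1, 0.2, 0.3]
y = [1.0, 1.0, 1.0]

def dot_seq(x, y):
    """Accumulate left to right: ((x0*y0 + x1*y1) + x2*y2)."""
    s = 0.0
    for xi, yi in zip(x, y):
        s += xi * yi
    return s

def dot_blocked(x, y, split=1):
    """Accumulate two blocks separately, then combine -- the kind of
    reordering a blocked or multithreaded BLAS routine may perform."""
    return dot_seq(x[:split], y[:split]) + dot_seq(x[split:], y[split:])

print(dot_seq(x, y))      # 0.6000000000000001
print(dot_blocked(x, y))  # 0.6
```

Both calls evaluate the same mathematical expression, yet the results differ in the last bit, just as R0, R1, and R2 differ at the 1e-13 level above.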