Efficiently calculating the trace of a matrix product
Henry Shackleton
on 9 May 2019
Commented: James Tursa
on 10 May 2019
I have two NxN square matrices, A and B, and I would like to calculate the trace of AB. Since the trace of AB depends only on the diagonal elements of the product, it should, in principle, not be necessary to compute all of AB, which would reduce the number of operations from N^3 to N^2. My question is twofold:
- Does calling trace(A*B) in MATLAB automatically exploit this fact?
- If not, is there an efficient way of doing this that doesn't involve for loops?
Thanks!
0 comments
Accepted Answer
Matt J
on 9 May 2019
Edited: Matt J
on 9 May 2019
Bt = B.';                     % Bt(i,j) = B(j,i)
traceProduct = A(:).'*Bt(:);  % sum of A(i,j)*B(j,i), which equals trace(A*B)
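The identity behind this is that trace(A*B) equals the sum over all i,j of A(i,j)*B(j,i), so only N^2 multiplications are needed. A minimal numerical check (a sketch, not part of the original answer):
% Sketch: verify the O(N^2) identity against the direct O(N^3) computation
N = 5;
A = rand(N); B = rand(N);
Bt = B.';
fast   = A(:).'*Bt(:);   % sum over i,j of A(i,j)*B(j,i)
direct = trace(A*B);     % forms the full product first
abs(fast - direct)       % difference should be on the order of eps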
4 comments
Matt J
on 10 May 2019
Another way to test whether the trace product is JIT-optimized is to compare both implementations on a large matrix:
A = rand(3000); B = A;
tic;
version1 = trace(A*B);                 % direct: forms the full 3000x3000 product
toc;
%Elapsed time is 0.956904 seconds.
tic;
version2 = A(:).'*reshape(B.',[],1);   % vectorized: elementwise products only
toc;
%Elapsed time is 0.068032 seconds.
It is pretty clear that the direct implementation is not optimized.
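As a side note (not from the thread), timeit gives more stable measurements than a single tic/toc pair; a sketch of the same comparison:
% Sketch: repeatable timing of both implementations with timeit
A = rand(3000); B = A;
t_direct = timeit(@() trace(A*B));               % O(N^3): forms the full product
t_fast   = timeit(@() A(:).'*reshape(B.',[],1)); % O(N^2): vectorized identity
fprintf('direct: %.3f s, vectorized: %.4f s\n', t_direct, t_fast);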
James Tursa
on 10 May 2019
I get the same results as Matt on various versions. And even if some (maybe future) version of MATLAB does perform this optimization, the only part where it could beat the explicit implementation above is the physical transpose ... i.e., compiled code that avoids physically forming the transpose.
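For illustration only (a sketch, not from the thread), a plain loop computes the same quantity without ever materializing B.', by accumulating the diagonal entries of A*B as row-times-column dot products; in interpreted MATLAB the loop overhead typically makes this slower than the vectorized one-liner, which is why compiled code would be needed to gain anything from skipping the transpose:
% Sketch: trace(A*B) without explicitly forming the transpose of B
A = rand(500); B = rand(500);
t = 0;
for i = 1:size(A,1)
    t = t + A(i,:)*B(:,i);   % i-th diagonal entry of A*B
end
% t equals trace(A*B) up to round-off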
More Answers (0)