
access time of data in cell array vs matrix

24 views (last 30 days)
Matt
Matt on 19 Oct 2023
Edited: Matt J on 23 Oct 2023
Hello
I am doing operations (denoising/detection, etc.) on images, and it seems that it is faster to access them when they are stored in a cell array rather than in a matrix. This comes as a surprise to me. I give an example below using convolutions.
What is the reason for this?
% create data
N_img = 1000;
img = rand(512,512,N_img);
for ii=size(img,3):-1:1
    img_cell{ii}=img(:,:,ii);
end
% comparison on convolutions
tic
out_cell = img_cell;
for ii=1:N_img
    out_cell{ii} = imgaussfilt(img_cell{ii});
end
toc
% I get 1.6s-1.8s
clear out_cell
tic
out_mat = img; % allocation inside or outside the timer does not change the trend
for ii=1:N_img
    out_mat(:,:,ii) = imgaussfilt(img(:,:,ii));
end
toc
clear out_mat
% I get ~3.2 s
Sincerely,
1 comment
Matt J
Matt J on 19 Oct 2023
Edited: Matt J on 19 Oct 2023
I would take the imgaussfilt out of there to see the difference in access times more transparently:
% create data
N_img = 1000;
img = rand(512,512,N_img);
img_cell = num2cell(img,[1,2]);
tic
out_cell = img_cell;
for ii=1:N_img
    out_cell{ii} = img_cell{ii};
end
toc
Elapsed time is 0.015706 seconds.
tic
out_mat = img;
for ii=1:N_img
    out_mat(:,:,ii) = img(:,:,ii);
end
toc
Elapsed time is 1.547411 seconds.


Accepted Answer

Walter Roberson
Walter Roberson on 19 Oct 2023
Edited: Walter Roberson on 19 Oct 2023
There is a useful command, "format debug". Unfortunately, it does not work in a Live Script, so to run it here I have to use the hack of evalc:
cmd = "format debug, A = rand(2,3), B = rand(2,3), C{1} = A, C{2} = B, T1 = C{1}, T2 = C{2}"
cmd = "format debug, A = rand(2,3), B = rand(2,3), C{1} = A, C{2} = B, T1 = C{1}, T2 = C{2}"
evalc(cmd)
ans =
    'A =
     Structure address = 7febc99a6260
     m = 2
     n = 3
     pr = 7febccd84400
         0.8732    0.3611    0.2854
         0.4290    0.5620    0.9112

     B =
     Structure address = 7febc98cc8e0
     m = 2
     n = 3
     pr = 7febce7bf6a0
         0.9777    0.4152    0.7182
         0.0407    0.0746    0.2219

     C =
       1×1 cell array
         {2×3 double}

     C =
       1×2 cell array
         {2×3 double}    {2×3 double}

     T1 =
     Structure address = 7febc9047dc0
     m = 2
     n = 3
     pr = 7febccd84400
         0.8732    0.3611    0.2854
         0.4290    0.5620    0.9112

     T2 =
     Structure address = 7febc9044700
     m = 2
     n = 3
     pr = 7febce7bf6a0
         0.9777    0.4152    0.7182
         0.0407    0.0746    0.2219'
Notice that the pr (data pointer) of A is the same as the pr of T1 -- so storing an array into a cell array keeps the same data pointer (no data copying), and retrieving it from the cell array keeps the same data pointer.
Therefore if you use a per-slice cell array, then retrieving each slice involves only creation of a temporary variable header without copying the data. But if you use (:,:,i) indexing of a 3D matrix then MATLAB needs to create a new data block of the appropriate size and copy the slice of the array into it.
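As a quick sketch of that difference (using the same format debug trick; the exact header fields printed may vary by release), extracting a slice from the 3D array produces a new pr, unlike extracting from a cell:
cmd2 = "format debug, A = rand(2,3,2), S = A(:,:,1)";
evalc(cmd2)
% In the printed output the pr of S differs from the pr of A: the slice
% was copied into a newly allocated data block.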
Now, when you create a cell array, each cell that has never had anything written into it uses an 8-byte pointer set to binary 0 and no further storage, and MATLAB knows to interpret that as "slot holds an empty double". If, however, you have ever assigned anything to the slot, then it needs the 8-byte pointer plus another 96 bytes of header information describing the size and class of the data, and then the actual block of memory. So for each used slot of a cell array, there are 104 bytes of overhead beyond the data storage.
Therefore, storing a 3D array as a single block takes 104 bytes plus storage for the total number of elements in the array, whereas storing the same array as a cell array with one slice per cell takes 104 bytes for the cell array itself, plus 104 bytes times the number of slices, plus storage for the total number of elements in the array.
(Although, in some cases if the amount of memory per slice is small enough, each slice might end up stored inside a fixed-sized block... wasting the memory between the end of the used area and the end of the fixed-sized block.)
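As a rough sketch of that overhead (the exact per-cell byte count reported by whos can differ between releases):
img      = rand(64,64,100);       % 64*64*100*8 = 3276800 bytes of raw data
img_cell = num2cell(img,[1,2]);   % same data split into 100 per-slice cells
w_mat  = whos('img');
w_cell = whos('img_cell');
fprintf('3D block:   %d bytes\n', w_mat.bytes)
fprintf('cell array: %d bytes (~%d extra bytes per slice)\n', ...
    w_cell.bytes, round((w_cell.bytes - w_mat.bytes)/numel(img_cell)))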
Remember too that if you do store the data in a per-slice cell array, then there is overhead time in setting up all of those slices. Your measurement code is only timing retrieval, not creation.
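For example, a rough timing of just the splitting step (the numbers will of course depend on the machine):
img = rand(512,512,1000);
tic
img_cell = num2cell(img,[1,2]);   % one-time cost: 1000 headers plus a copy of each slice
toc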
I have seen some hints that in the last couple of releases, MATLAB has had a way to create references to sections inside an array instead of having to copy the data block. However, the hints have not been clear enough for me to get a grasp of how such references are created or maintained, or what their restrictions are... for example, maybe they can only exist for data slices that happen to use all memory within particular offsets, such as a single 2D slice of a 3D array.
9 comments
Matt J
Matt J on 23 Oct 2023
Edited: Matt J on 23 Oct 2023
"So either I use cells, and lose the benefit of the fast matrix calculations"
It can be hard to tell when this will really matter, e.g.,
img=rand(512,512,1e3);
tic;
img=img./mean(img,[1,2]);
toc
Elapsed time is 0.242101 seconds.
img=num2cell(img,[1,2]);
tic;
for i=1:numel(img)
    img{i}=img{i}./mean(img{i}(:));
end
toc
Elapsed time is 0.213300 seconds.
Matt
Matt on 23 Oct 2023
Edited: Matt on 23 Oct 2023
Regarding your example, it looks like I need to upgrade my MATLAB. On MATLAB R2020a I go from 0.1 s with the matrix to 2.5 s with cells, running exactly your code.
And operating along the third dimension with 1e4 images, on MATLAB R2020a I go from 1 s to 22 s using the following code (reduced to 1e3 images to run here).
img=rand(512,512,1e3);
tic;
img=img-mean(img,3);
toc
Elapsed time is 0.460275 seconds.
img=num2cell(img,[1,2]);
tic;
mean_im = zeros(size(img{1}));
for i=1:numel(img)
    mean_im=mean_im+img{i};
end
mean_im = mean_im/length(img);
for i=1:numel(img)
    img{i}=img{i}-mean_im;
end
toc
Elapsed time is 0.326576 seconds.
Edit: I upgraded to R2023b and those 2 examples are still 20x faster vectorized than with cells on my PC, unlike when executing here in the browser.


More Answers (1)

Matt J
Matt J on 19 Oct 2023
Edited: Matt J on 19 Oct 2023
Because extracting from cell arrays involves no new memory allocation. And because N_img=1000 is still very small.
8 comments
James Tursa
James Tursa on 20 Oct 2023
Edited: James Tursa on 20 Oct 2023
@Dyuman Joshi "Matt, could you provide a reference for this?"
I doubt there is an official reference for this, but that is how it has always worked for cell, struct, and property extraction. You get shallow copies (shared data or reference), not deep copies.
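For instance, the same format debug check as in the accepted answer can be repeated for a struct field (a minimal sketch; the field and variable names are arbitrary):
cmd = "format debug, A = rand(2,3), S.f = A, T = S.f";
evalc(cmd)
% A and T report the same pr (data pointer): extracting the field is a
% shallow copy, no element data is duplicated.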
Walter Roberson
Walter Roberson on 23 Oct 2023
Edited: Matt J on 23 Oct 2023
This is an application of copy-on-write, which is a fundamental MATLAB memory mechanism.
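A minimal sketch of what copy-on-write means in practice (timings are illustrative only):
A = rand(5000);        % ~200 MB of data
tic, B = A; toc        % essentially free: B shares A's data block
tic, B(1) = 0; toc     % the first write to B triggers the actual copy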

