Accelerating an image processing algorithm on the GPU
Hi, I want to accelerate my algorithm because I need to run it on hundreds of images. I first tried unvectorized GPU code, i.e. running the same loop-based code on the GPU (an NVIDIA GeForce GT 650M with 2 GB on my PC), but it was much slower than the CPU version. After some searching I am convinced I should move to vectorized GPU code with batch processing (pagefun, bsxfun), but I have not managed to get it working. Can someone help me with this code?
Q = 100;
% Accumulate -B.*log(B) over the third dimension, skipping zero entries,
% for the interior pixels (a 2-pixel border is left untouched).
for i = 3:n-2
    for j = 3:m-2
        A(i,j) = 0;
        for c = 1:Q
            if B(i,j,c) ~= 0
                A(i,j) = A(i,j) - B(i,j,c)*log(B(i,j,c));
            end
        end
    end
end
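Here is my rough attempt at a vectorized version of the same computation. I am not sure it is right; it assumes B is an n-by-m-by-Q numeric array and uses elementwise operations plus a sum over the third dimension rather than pagefun/bsxfun:
Bg = gpuArray(B);                          % move the data to the GPU
T  = -Bg .* log(Bg);                       % elementwise -p*log(p) term
T(Bg == 0) = 0;                            % zero bins contribute nothing (avoids 0*(-Inf) = NaN)
E  = sum(T, 3);                            % sum over the third (bin) dimension
A  = zeros(size(Bg,1), size(Bg,2), 'like', Bg);
A(3:end-2, 3:end-2) = E(3:end-2, 3:end-2); % keep the same 2-pixel border as the loops
A = gather(A);                             % copy the result back to host memory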
Another question: why does MATLAB use only about 20% of my CPU, and how can I take advantage of all my CPU cores to accelerate the processing?
Is MATLAB a single-threaded application?
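I also wondered whether spreading the per-image work over CPU cores with parfor would help. This is only a sketch of what I had in mind, assuming the Parallel Computing Toolbox is available; 'images' is just a placeholder name for a cell array holding my per-image B arrays:
results = cell(numel(images), 1);
parfor k = 1:numel(images)
    B = images{k};                 % one n-by-m-by-Q array per image
    T = -B .* log(B);              % elementwise -p*log(p) term
    T(B == 0) = 0;                 % avoid NaN from 0*log(0)
    E = sum(T, 3);
    A = zeros(size(B,1), size(B,2));
    A(3:end-2, 3:end-2) = E(3:end-2, 3:end-2);
    results{k} = A;
end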
Thanks in advance