Using GPU with built-in machine learning functions
I wanted to check the speed-up of my classification task when using the GPU instead of the CPU. More specifically, I tried something like:
tic
NB=fitcnb(gpuArray(train_values), train_labels,'KFold',5,'CrossVal','on');
kfoldLoss(NB)
time_upNB=toc;
but I am getting a "Conversion to double from gpuArray is not possible." error. Is my syntax wrong or is this not possible?
3 Comments
Astarag Chattopadhyay
on 4 Jun 2018
Hi Tasos,
It is hard to tell which part of your code is producing this error. In general, it occurs when you try to store a gpuArray calculation result in a variable that lives on the CPU (an ordinary double array). As an example:
A = magic(5);
Agpu = gpuArray(A);              % copy of A stored on the GPU
B = zeros(5);                    % B is an ordinary (CPU) double array
for i = 1:5
    B(i,i) = Agpu(i,i) * Agpu(i,i);   % assigning a gpuArray result into B triggers the error
end
This code snippet throws the same error you are getting. You need to preallocate B as a gpuArray to work around this:
A = magic(5);
Agpu = gpuArray(A);
Bgpu = gpuArray(zeros(5));       % preallocate B on the GPU as well
for i = 1:5
    Bgpu(i,i) = Agpu(i,i) * Agpu(i,i);   % both sides live on the GPU, so no conversion is needed
end
I suggest you set breakpoints in your code to see which line throws the error; most probably you need to do a gpuArray conversion at that line of code.
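I am not certain that fitcnb accepts gpuArray training data at all, so if the breakpoints show the error being raised inside the fitcnb call itself, a minimal sketch of a CPU fallback (reusing the variable names from your question) would be to gather the data back from the GPU before fitting:
Xgpu = gpuArray(train_values);   % data that ended up on the GPU
X = gather(Xgpu);                % gather() copies it back to the CPU as a regular double array
tic
NB = fitcnb(X, train_labels, 'KFold', 5, 'CrossVal', 'on');
kfoldLoss(NB)
time_upNB = toc;
This of course runs the fit on the CPU, so it only helps confirm where the conversion error comes from rather than giving you the GPU speed-up you are after.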
Answers (0)