Mask R-CNN error in "convolveBackwardDataND" when training on the GPU

When I follow the instructions in "Instance Segmentation Using Mask R-CNN Deep Learning", training fails in the "Train the network" section. Here is my code:
initialLearnRate = 0.0012;
momentum = 0.9;
decay = 0.01;
velocity = [];
maxEpochs = 1; % the default value is 10
miniBatchSize = 1;
%% Batch Training Data
% Create a |minibatchqueue| object that manages the mini-batching of
% observations in a custom training loop. The |minibatchqueue| object also
% casts data to a |dlarray| object that enables automatic differentiation
% in deep learning applications.
%
% Define a custom batching function named |miniBatchFcn|. The images are
% concatenated along the fourth dimension to get an
% _H_-by-_W_-by-_C_-by-_miniBatchSize_ shaped batch. The other ground truth
% data is configured as a cell array of length equal to the mini-batch size.
miniBatchFcn = @(img,boxes,labels,masks) deal(cat(4,img{:}),boxes,labels,masks);
%%
% Specify the mini-batch data extraction format for the image data as "SSCB"
% (spatial, spatial, channel, batch). If a supported GPU is available for computation,
% then the |minibatchqueue| object preprocesses mini-batches in the background
% in a parallel pool during training.
mbqTrain = minibatchqueue(dsTrain,4, ...
    "MiniBatchFormat",["SSCB","","",""], ...
    "MiniBatchSize",miniBatchSize, ...
    "OutputCast",["single","","",""], ...
    "OutputAsDlArray",[true,false,false,false], ...
    "MiniBatchFcn",miniBatchFcn, ...
    "OutputEnvironment",["auto","cpu","cpu","cpu"]);
doTraining = true;
if doTraining
    iteration = 1;
    start = tic;
    % Create subplots for the learning rate and mini-batch loss
    fig = figure;
    [lossPlotter, learningratePlotter] = helper.configureTrainingProgressPlotter(fig);
    % Initialize verbose output
    helper.initializeVerboseOutput([]);
    % Custom training loop
    for epoch = 1:maxEpochs
        reset(mbqTrain)
        shuffle(mbqTrain)
        while hasdata(mbqTrain)
            % Get next batch from minibatchqueue
            [X,gtBox,gtClass,gtMask] = next(mbqTrain);
            % Evaluate the model gradients and loss using dlfeval
            [gradients,loss,state,learnables] = dlfeval(@networkGradients,X,gtBox,gtClass,gtMask,net,params);
            %dlnet.State = state;
            % Compute the learning rate for the current iteration
            learnRate = initialLearnRate/(1 + decay*(epoch-1));
            if(~isempty(gradients) && ~isempty(loss))
                [net.AllLearnables,velocity] = sgdmupdate(learnables,gradients,velocity,learnRate,momentum);
            else
                continue;
            end
            % Plot loss/accuracy metric every 10 iterations
            if(mod(iteration,10)==0)
                helper.displayVerboseOutputEveryEpoch(start,learnRate,epoch,iteration,loss);
                D = duration(0,0,toc(start),'Format','hh:mm:ss');
                addpoints(learningratePlotter,iteration,learnRate)
                addpoints(lossPlotter,iteration,double(gather(extractdata(loss))))
                subplot(2,1,2)
                title(strcat("Epoch: ",num2str(epoch),", Elapsed: "+string(D)))
                drawnow
            end
            iteration = iteration + 1;
        end
    end
    % Save the trained network
    modelDateTime = string(datetime('now','Format',"yyyy-MM-dd-HH-mm-ss"));
    save(strcat("trainedMaskRCNN-",modelDateTime,"-Epoch-",num2str(epoch),".mat"),'net');
end
Then I get this error:
Error using nnet.internal.cnngpu.convolveBackwardDataND
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.
Error in gpuArray/internal_dlconvBackward (line 75)
dX = nnet.internal.cnngpu.convolveBackwardDataND( ...
Error in deep.internal.recording.operations.DlconvOp/backward (line 45)
[dX,dweights,dbias] = internal_dlconvBackward(dZ,X,weights,op.Args{:},inputActivityFlags);
Error in deep.internal.recording.RecordingArray/backwardPass (line 89)
grad = backwardTape(tm,{y},{initialAdjoint},x,retainData,false,0);
Error in dlarray/dlgradient (line 132)
[grad,isTracedGrad] = backwardPass(y,xc,pvpairs{:});
Error in networkGradients (line 150)
gradients = dlgradient(totalLoss, learnables);
Error in deep.internal.dlfeval (line 17)
[varargout{1:nargout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nargout}] = deep.internal.dlfeval(fun,varargin{:});
Error in MaskRCNNDeepLearningExample (line 259)
[gradients,loss,state,learnables] = dlfeval(@networkGradients,X,gtBox,gtClass,gtMask,net,params);
My GPU:
CUDADevice with properties:
Name: 'NVIDIA GeForce RTX 3080'
Index: 1
ComputeCapability: '8.6'
SupportsDouble: 1
DriverVersion: 11.4000
ToolkitVersion: 11
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.0737e+10
AvailableMemory: 4.1705e+09
MultiprocessorCount: 68
ClockRateKHz: 1785000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceAvailable: 1
DeviceSelected: 1
It should not be a memory issue, since I set miniBatchSize = 1. If I set "OutputEnvironment" to ["cpu","cpu","cpu","cpu"], it works, but very slowly.
Thanks very much for any suggestions to fix the problem.

4 comments

Joss Knight on 16 Feb 2022
Edited: Joss Knight on 16 Feb 2022
Can you reformat your code using a MATLAB code block so we can read it more easily?
maskRCNN can be very memory intensive, especially in older versions of MATLAB. If you don't have the latest version do try to upgrade. Also, 640x480 is a very large input size for an object detection problem, you should downsample your input.
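Downsampling the training data could look like the following minimal sketch. It assumes, as in the Mask R-CNN example, that `dsTrain` returns samples of the form {image, boxes, labels, masks}; the function name `resizeSample` and the scale factor are illustrative, not part of the example.

```matlab
% Halve the spatial size of every training sample: resize the image,
% scale the bounding boxes to match, and resize the logical masks with
% nearest-neighbor interpolation so they stay binary.
targetScale = 0.5;
dsTrainSmall = transform(dsTrain, @(data) resizeSample(data, targetScale));

function data = resizeSample(data, scale)
    % data{1}: H-by-W-by-3 image, data{2}: M-by-4 boxes,
    % data{3}: labels (unchanged), data{4}: H-by-W-by-M masks
    data{1} = imresize(data{1}, scale);
    data{2} = bboxresize(data{2}, scale);
    data{4} = imresize(data{4}, scale, "nearest");
end
```

You would then pass `dsTrainSmall` to `minibatchqueue` in place of `dsTrain`.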
It seems like you have 10GB of memory, but 6GB of it is already reserved. That's unusual. Can you see when that happens? You could step through your code line by line in the debugger and monitor GPU memory in the Task Manager or on the command line, or insert something in the code like
gpu = gpuDevice;
...
disp("Current GPU memory = "+gpu.AvailableMemory+" bytes");
Thanks Joss.
I have reformatted the code; I hope you can read it easily now.
I use the latest version of MATLAB (R2021b). Since I use the images downloaded from COCO as introduced in the example "Instance Segmentation Using Mask R-CNN Deep Learning" (https://www.mathworks.com/help/deeplearning/ug/instance-segmentation-using-mask-rcnn.html), the workflow should already have been tested by other people, so I do not think the 640x480 image size is the issue.
It seems that Mask R-CNN cannot use my GPU. But when I tried another deep learning model, Semantic Segmentation Using Deep Learning (https://www.mathworks.com/help/vision/ug/semantic-segmentation-using-deep-learning.html), the GPU worked well, which means the GPU driver is fine. Maybe the RTX 3080 is not compatible with Mask R-CNN?
Thanks again.
I'm afraid I don't see any change in the formatting of the opening post. You need to edit your post, select all of your code, and hit the code icon.
I don't believe there's any known problem with the RTX 3080, and it has plenty of memory. Have you worked out where the missing 6GB are? It seems you need more than 4GB but 4GB is all you have at the moment.
Hello Joss,
Thanks a lot. This time I am sure I have formatted the code.
I ran the following code first:
reset(gpuDevice)
Then I found that when the following line of the main script runs, GPU memory usage increases a lot (to about 5 GB):
[masks,labels,scores,boxes] = segmentObjects(net,imTest);
Continuing to the following line (in deep.internal.dlfeval), GPU memory usage increases to 6.5 GB:
[varargout{1:nargout}] = fun(x{:});
Continuing to the following line (in networkGradients.m), GPU memory usage increases to 8.9 GB:
[netOut, state] = forward(dlnet, X);
Continuing to the end, GPU memory usage increases to 9.4 GB, and then I get the error shown above.
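Based on those measurements, much of the memory appears to be held by the `segmentObjects` test call rather than by training itself. A sketch of freeing it before entering the training loop (variable names assumed from the example script):

```matlab
% Drop the gpuArray outputs of the earlier segmentObjects test call
% so their device memory can be reclaimed before training starts.
clear masks labels scores boxes imTest

% Query the device without resetting it and report what is free now.
gpu = gpuDevice;
disp("Available GPU memory = " + gpu.AvailableMemory + " bytes");

% If memory is still low, reset the device. Note this clears ALL
% gpuArray data on the device, so do it before loading the network
% onto the GPU, not after.
% reset(gpu);
```

Running the inference demo in a separate MATLAB session, or only after training, would avoid the problem entirely.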


Answers (1)

yanqi liu on 16 Feb 2022
Yes, sir. Maybe set MiniBatchSize = 1 or reduce the image data size.

1 comment

Thanks, I have already tried miniBatchSize = 1, and it does not work. And the images downloaded from COCO are about 640x480, so the image size should not be the cause of the GPU issue.
