
How can I solve this error? Our original images are 512*512, but my GPU is not powerful enough and training is too slow, so I want to train on 256*256 images to speed it up.
% Datastores of input (PDRR) and response (DRR) images
imds1 = imageDatastore('E:\Picture\PDRR');
imds2 = imageDatastore('E:\Picture\DRR');
% Augmentation: random 0- or 90-degree rotation plus horizontal reflection
augmenter = imageDataAugmenter( ...
    'RandRotation',@()randi([0,1],1)*90, ...
    'RandXReflection',true);
miniBatchSize = 4;
patchSize = [256 256];
% Extract random 256x256 patches from each image pair
patchds = randomPatchExtractionDatastore(imds1,imds2,patchSize, ...
    'PatchesPerImage',8, ...
    'DataAugmentation',augmenter);
patchds.MiniBatchSize = miniBatchSize;
lgraph = layerGraph();
% Encoder stage 1
tempLayers = [
    imageInputLayer([256 256 1],"Name","ImageInputLayer","Normalization","rescale-zero-one")
    convolution2dLayer([3 3],32,"Name","Encoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Encoder-Stage-1-ReLU-1")
    convolution2dLayer([3 3],32,"Name","Encoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Encoder-Stage-1-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
% Encoder stage 2
tempLayers = [
    maxPooling2dLayer([2 2],"Name","Encoder-Stage-1-MaxPool","Stride",2)
    convolution2dLayer([3 3],64,"Name","Encoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Encoder-Stage-2-ReLU-1")
    convolution2dLayer([3 3],64,"Name","Encoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Encoder-Stage-2-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
% Encoder stage 3
tempLayers = [
    maxPooling2dLayer([2 2],"Name","Encoder-Stage-2-MaxPool","Stride",2)
    convolution2dLayer([3 3],128,"Name","Encoder-Stage-3-Conv-1","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Encoder-Stage-3-ReLU-1")
    convolution2dLayer([3 3],128,"Name","Encoder-Stage-3-Conv-2","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Encoder-Stage-3-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
% Encoder stage 3 pooling, bridge, and decoder stage 1 up-convolution
tempLayers = [
    dropoutLayer(0.5,"Name","Encoder-Stage-3-DropOut")
    maxPooling2dLayer([2 2],"Name","Encoder-Stage-3-MaxPool","Stride",2)
    convolution2dLayer([3 3],256,"Name","Bridge-Conv-1","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Bridge-ReLU-1")
    convolution2dLayer([3 3],256,"Name","Bridge-Conv-2","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Bridge-ReLU-2")
    dropoutLayer(0.5,"Name","Bridge-DropOut")
    transposedConv2dLayer([2 2],128,"Name","Decoder-Stage-1-UpConv","Stride",2)
    reluLayer("Name","Decoder-Stage-1-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
% Decoder stage 1
tempLayers = [
    depthConcatenationLayer(2,"Name","Decoder-Stage-1-DepthConcatenation")
    convolution2dLayer([3 3],128,"Name","Decoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Decoder-Stage-1-ReLU-1")
    convolution2dLayer([3 3],128,"Name","Decoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Decoder-Stage-1-ReLU-2")
    transposedConv2dLayer([2 2],64,"Name","Decoder-Stage-2-UpConv","Stride",2)
    reluLayer("Name","Decoder-Stage-2-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
% Decoder stage 2
tempLayers = [
    depthConcatenationLayer(2,"Name","Decoder-Stage-2-DepthConcatenation")
    convolution2dLayer([3 3],64,"Name","Decoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Decoder-Stage-2-ReLU-1")
    convolution2dLayer([3 3],64,"Name","Decoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Decoder-Stage-2-ReLU-2")
    transposedConv2dLayer([2 2],32,"Name","Decoder-Stage-3-UpConv","Stride",2)
    reluLayer("Name","Decoder-Stage-3-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
% Decoder stage 3 and regression output
tempLayers = [
    depthConcatenationLayer(2,"Name","Decoder-Stage-3-DepthConcatenation")
    convolution2dLayer([3 3],32,"Name","Decoder-Stage-3-Conv-1","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Decoder-Stage-3-ReLU-1")
    convolution2dLayer([3 3],32,"Name","Decoder-Stage-3-Conv-2","Padding","same","WeightsInitializer","he")
    reluLayer("Name","Decoder-Stage-3-ReLU-2")
    convolution2dLayer([2 2],1,"Name","Final-ConvolutionLayer","Padding","same","WeightsInitializer","he")
    regressionLayer('Name','output')];
lgraph = addLayers(lgraph,tempLayers);
clear tempLayers;
% Sequential links within the encoder/decoder plus U-Net skip connections
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-2","Encoder-Stage-1-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-2","Decoder-Stage-3-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Encoder-Stage-2-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Decoder-Stage-2-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-3-ReLU-2","Encoder-Stage-3-DropOut");
lgraph = connectLayers(lgraph,"Encoder-Stage-3-ReLU-2","Decoder-Stage-1-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Decoder-Stage-1-UpReLU","Decoder-Stage-1-DepthConcatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-2-UpReLU","Decoder-Stage-2-DepthConcatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-3-UpReLU","Decoder-Stage-3-DepthConcatenation/in1");
figure
plot(lgraph)
maxEpochs = 1;
epochIntervals = 1;
initLearningRate = 0.01;
learningRateFactor = 0.0001;
l2reg = 0.0001;
% SGDM training options with a piecewise learning-rate schedule
options = trainingOptions('sgdm', ...
    'Momentum',0.9, ...
    'InitialLearnRate',initLearningRate, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',10, ...
    'LearnRateDropFactor',learningRateFactor, ...
    'L2Regularization',l2reg, ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'GradientThresholdMethod','l2norm', ...
    'Plots','training-progress', ...
    'GradientThreshold',0.01);
% Train the network and save it with a timestamped filename
modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');
net = trainNetwork(patchds,lgraph,options);
save(['trainedNet-' modelDateTime '-Epoch-' num2str(maxEpochs*epochIntervals) ...
    'ScaleFactors-' num2str(234) '.mat'],'net','options');
pic = imread('E:\Picture\PDRR\00001.jpg');
pic = imresize(pic,[256 256]); % resize to match the network's 256x256 input layer
out2 = predict(net,pic);
subplot(1,2,1)
imshow(pic)
subplot(1,2,2)
imshow(out2,[])
2 Comments
Seth Furman on 6 Jun 2022
Edited: Seth Furman on 6 Jun 2022
I should mention that datestr is discouraged. Prefer datetime where possible.

For example,
dt = datetime("now","Format","dd-MMM-yyyy-HH-mm-ss")
string(dt)
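Applied to the save call in the question, that could look like this (just a sketch; save accepts a string filename in recent MATLAB releases):
dt = datetime("now","Format","dd-MMM-yyyy-HH-mm-ss");
modelDateTime = string(dt); % e.g. "06-Jun-2022-12-00-00"
save("trainedNet-" + modelDateTime + "-Epoch-" + num2str(maxEpochs*epochIntervals) + ".mat","net","options");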
Joss Knight on 11 Jun 2022
Is randomPatchExtractionDatastore what you actually want, or is it just your attempt to reduce the input size?
Answers (1)
Dinesh on 2 Mar 2023
Hi FangMing!
As I understand it, you want to resize the images from 512*512 to 256*256 to speed up training.
You can use the 'imresize' function and set its 'scale' argument to 0.5, as in the sketch below the links.
For more details, see the following MATLAB documentation:
- https://www.mathworks.com/help/matlab/ref/imresize.html
- https://www.mathworks.com/help/deeplearning/ug/optimize-neural-network-training-speed-and-memory.html
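For example, a minimal sketch, assuming the folder paths from the question; a custom 'ReadFcn' resizes each image as it is read, so the rest of the pipeline can stay unchanged:
% Halve each 512x512 image to 256x256 at read time
imds1 = imageDatastore('E:\Picture\PDRR','ReadFcn',@(f) imresize(imread(f),0.5));
imds2 = imageDatastore('E:\Picture\DRR','ReadFcn',@(f) imresize(imread(f),0.5));
If you keep the patch-extraction approach, also reduce patchSize and the imageInputLayer size (for example to [128 128]) so the network really does process smaller inputs.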
Hope this helps!
0 Comments