How to input CNN images from two sources?
Andrey Puchkov
on 7 Jul 2022
Good afternoon!
Can you please tell me how to feed images from two different sources into a CNN at the same time?
I use two inputs (two imageInputLayer layers), which I then combine with a depthConcatenationLayer (see the attached file). However, I cannot pass data from the two sources when training the network. I use the line:
net = trainNetwork([imdsTrain1, imdsTrain2],lgraph,options);
but it does not work and reports a "wrong data format" error.
Or is this approach fundamentally wrong, and should a different method be used?
% first image source
digitDatasetPath = uigetdir;
imds1 = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
% second image source
digitDatasetPath = uigetdir;
imds2 = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
numTrainFiles = 100; % files per label used for training (value assumed; not specified in the original post)
[imdsTrain1,imdsValidation1] = splitEachLabel(imds1,numTrainFiles,'randomize');
[imdsTrain2,imdsValidation2] = splitEachLabel(imds2,numTrainFiles,'randomize');
lgraph = layerGraph();
tempLayers = imageInputLayer([90 90 3],"Name","imageinput_1");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = imageInputLayer([90 90 3],"Name","imageinput_2");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
    depthConcatenationLayer(2,"Name","depthcat")
    convolution2dLayer([3 3],32,"Name","conv_1","Padding","same")
    batchNormalizationLayer("Name","batchnorm_1")
    reluLayer("Name","relu_1")
    maxPooling2dLayer([5 5],"Name","maxpool","Padding","same")
    convolution2dLayer([3 3],32,"Name","conv_2","Padding","same")
    batchNormalizationLayer("Name","batchnorm_2")
    reluLayer("Name","relu_2")
    fullyConnectedLayer(3,"Name","fc")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];
lgraph = addLayers(lgraph,tempLayers);
clear tempLayers; % clean up helper variable
lgraph = connectLayers(lgraph,"imageinput_1","depthcat/in1");
lgraph = connectLayers(lgraph,"imageinput_2","depthcat/in2");
plot(lgraph);
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.01, ...
    'MaxEpochs',25, ...
    'MiniBatchSize',128, ...
    'Shuffle','every-epoch', ...
    'ValidationFrequency',3, ...
    'Verbose',false, ...
    'Plots','training-progress');
net = trainNetwork([imdsTrain1, imdsTrain2],lgraph,options); % it throws an error that you can't merge like this
Thanks in advance!
0 comments
Accepted Answer
David Ho
on 7 Jul 2022
Hello Andrey,
In this case you can use the "combine" function to create a CombinedDatastore object of the form accepted by trainNetwork:
labelDsTrain = arrayDatastore(imdsTrain1.Labels); % Assuming the true labels are those of imdsTrain1
cdsTrain = combine(imdsTrain1, imdsTrain2, labelDsTrain);
In order to train a multi-input network, your data must be in the form of a datastore that outputs a cell array with (numInputs + 1) columns. In this case numInputs = 2, so the first two outputs are the image inputs to the network, and the final output is the label of the image pair.
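As a quick sanity check (a minimal sketch, assuming cdsTrain was built as above), reading one observation from the combined datastore should return a 1-by-3 cell array:
dataSample = preview(cdsTrain); % 1-by-3 cell array: {image1, image2, label}
img1  = dataSample{1};          % image read from imdsTrain1
img2  = dataSample{2};          % image read from imdsTrain2
label = dataSample{3};          % categorical label from the arrayDatastore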
You can make a similar combination for a validation datastore (a sketch follows the training call below). With a datastore of this form, you can train with
net = trainNetwork(cdsTrain, lgraph, options);
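Here is a minimal sketch of the matching validation datastore (assuming, as above, that the labels of imdsValidation1 are the true labels and that the two validation splits are in the same order), passed to training via the 'ValidationData' option:
labelDsVal = arrayDatastore(imdsValidation1.Labels); % assumed label source, mirroring the training set
cdsVal = combine(imdsValidation1, imdsValidation2, labelDsVal);
options = trainingOptions('sgdm', ...
    'ValidationData',cdsVal, ...
    'InitialLearnRate',0.01, ...
    'MaxEpochs',25, ...
    'MiniBatchSize',128, ...
    'Shuffle','every-epoch', ...
    'ValidationFrequency',3, ...
    'Verbose',false, ...
    'Plots','training-progress');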
For more information on how to create datastores for deep learning, you can refer to the MATLAB documentation on datastores for deep learning. You may also find the documentation example on training a multi-input network for classification useful.
3 comments
MAHMOUD EID
on 11 Apr 2023
How can the labels be combined if each image datastore has different labels?
More Answers (0)