How to train a network on image and feature data with more than one feature input?

12 views (last 30 days)
In this example: openExample('nnet/TrainNetworkOnImageAndFeatureDataExample')
I want to change numFeatures from 1 to 3. I have added a 3-element vector to X2Train.
>> preview(dsTrain)
ans =
1×3 cell array
{28×28 double} {[-42 0.9891 0.5122]} {[3]}
layers = [
    imageInputLayer(imageInputSize,'Normalization','none','Name','images')
    convolution2dLayer(filterSize,numFilters,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(50,'Name','fc1')
    concatenationLayer(1,3,'Name','concat')
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')];
lgraph = layerGraph(layers);
featInput = featureInputLayer(numFeatures,Name="features");
lgraph = addLayers(lgraph,featInput);
lgraph = connectLayers(lgraph,"features","concat/in2");
lgraph = connectLayers(lgraph,"features","concat/in3");
figure
plot(lgraph)
When I run it I keep getting this error:
Error using trainNetwork
Input datastore returned more than one observation per row for network input 2.
Any help would be appreciated!

Accepted Answer

Ben on 20 Jul 2022
The subtle issue here is that the feature data needs to be read out of the datastore as a numFeatures x 1 column vector, as documented here: https://www.mathworks.com/help/deeplearning/ug/datastores-for-deep-learning.html
So you'll need to transpose your feature data either before it goes into the datastore, or as a transform of your existing datastore (e.g. transformedDsTrain = transform(dsTrain,@(x) [x(1),{x{2}.'},x(3)]);).
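For example, here is a minimal sketch of the transform approach (tdsTrain is just a hypothetical name; dsTrain is the combined datastore from the example, with the feature vector in the second column):
% Transpose the 1x3 feature row into a 3x1 column as each observation is read
tdsTrain = transform(dsTrain,@(x) [x(1),{x{2}.'},x(3)]);
preview(tdsTrain) % the second cell should now contain a 3x1 double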
However, you'll next run into another subtle issue at the concatenationLayer, since the output of layer 'fc1' has size 1(S) x 1(S) x 50(C) x BatchSize(B). This needs squeezing so it can be concatenated with the feature data, which has shape 3(C) x BatchSize(B). Probably the easiest way to do that is with a functionLayer. Here's some code to get your network running:
imageInputSize = [28,28,1];
filterSize = 3;
numFilters = 8;
numClasses = 10;
numFeatures = 3;
layers = [
    imageInputLayer(imageInputSize,'Normalization','none','Name','images')
    convolution2dLayer(filterSize,numFilters,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(50,'Name','fc1')
    squeezeLayer()
    concatenationLayer(1,3,'Name','cat')
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')
    classificationLayer];
lgraph = layerGraph(layers);
featInput = featureInputLayer(numFeatures,Name="features");
lgraph = addLayers(lgraph,featInput);
lgraph = connectLayers(lgraph,"features","cat/in2");
lgraph = connectLayers(lgraph,"features","cat/in3");
numObservations = 100;
fakeImages = randn([imageInputSize,numObservations]);
imagesDS = arrayDatastore(fakeImages,IterationDimension=4);
fakeFeatures = randn([numObservations,numFeatures]);
% Transpose so that each read returns a numFeatures x 1 column vector.
featureDS = arrayDatastore(fakeFeatures.',IterationDimension=2);
fakeTargets = categorical(mod(1:numObservations,numClasses));
targetDS = arrayDatastore(fakeTargets,IterationDimension=2);
ds = combine(imagesDS,featureDS,targetDS);
opts = trainingOptions("adam","MaxEpochs",1);
trainNetwork(ds,lgraph,opts);
function layer = squeezeLayer(args)
    arguments
        args.Name = '';
    end
    layer = functionLayer(@squeezeLayerFcn,"Name",args.Name,"Formattable",true);
end
function x = squeezeLayerFcn(x)
    x = squeeze(x);
    % Since squeeze will squeeze out some dimensions, we need to relabel x.
    % Assumption: x does not have a 'T' dimension.
    n = ndims(x);
    newdims = [repelem('S',n-2),'CB'];
    x = dlarray(x,newdims);
end
As a final note, I notice you're concatenating the feature input layer to itself, alongside the outputs of layer 'fc1'. Maybe that's intentional, but it seemed slightly curious to me.
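In case that double connection wasn't intended, here is a minimal sketch of the single-connection alternative (same layer names as above, with a two-input concatenation instead of a three-input one):
% Two inputs: the squeezed image features (in1) and the tabular features (in2)
concatenationLayer(1,2,'Name','cat')
% ...build lgraph as before, then connect the feature input only once:
lgraph = connectLayers(lgraph,"features","cat/in2");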
4 Comments
Ben on 13 Oct 2023
The error suggests that the issue is with the datastore setup: trainNetwork thinks that your responses/targets have size 1569, but that is actually the batch/observation dimension.
You can find documentation on datastore inputs for trainNetwork here: https://www.mathworks.com/help/deeplearning/ug/datastores-for-deep-learning.html
If you could call:
data = read(All_TrainDs)
and post information about data, we might be able to debug. In particular, we want to check the size of each of the inputs/predictors and the output/response in data.
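For example, a minimal sketch of that check (assuming All_TrainDs is a combined datastore whose read returns a cell array with one observation per row, predictors first and the response last):
data = read(All_TrainDs);
% Sizes of each predictor and the response for the first observation
sizes = cellfun(@size,data(1,:),'UniformOutput',false)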
Kenneth on 17 Oct 2023
Ok.
Thanks so much, Ben, for the time and prompt response.


More Answers (0)
