Use Deep Network Designer to Set Up an Autoencoder
27 views (last 30 days)
J Schreiber
on 8 Nov 2024 at 0:55
Answered: Gayathri on 8 Nov 2024 at 4:36
I'm trying out MATLAB's Deep Network Designer and having trouble setting up a simple autoencoder for MNIST images. Can anyone provide an example of how to read in MNIST images and feed them into a simple autoencoder so that their labels are just the images themselves? I just want a simple MSE reconstruction and the ability to compare images with their reconstructions.
This is what I have tried:
unzip("DigitsData.zip")
% Data from the MATLAB example:
% CreateImageClassificationNetworkUsingDeepNetworkDesignerExample
imds = imageDatastore("DigitsData", ...
    IncludeSubfolders=true, ...
    LabelSource="foldernames");
imds.ReadFcn = @(x) imresize(imread(x), [28 28]);
% Pair each image with itself so the target equals the input
autoencoderData = transform(imds, @(data) ({data, data}));
options = trainingOptions('adam', ...
    'Plots', 'training-progress', ...
    'Metrics', 'rmse', ...
    'MiniBatchSize', 200, ...
    'MaxEpochs', 4, ...
    'TargetDataFormats', 'SSCB');
net = trainnet(autoencoderData, net_1, "mse", options);
Where net_1 is designed using the Designer with an ImageInput of 28x28x1, followed by a fully connected layer (output 64), relu, a fully connected layer (784), relu, then a sigmoid, and finally a resize2dLayer (for the output size) set to 28x28.
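Written out as a layer array, that network looks roughly like this (a sketch of the configuration described above; it is the setup that produces the error, not a working fix):
% Rough layer-array equivalent of the network built in Deep Network Designer
layers_1 = [
    imageInputLayer([28 28 1])
    fullyConnectedLayer(64)          % encoder: compress to 64 features
    reluLayer
    fullyConnectedLayer(784)         % decoder: expand back to 28*28 values
    reluLayer
    sigmoidLayer
    resize2dLayer('OutputSize', [28 28])   % intended to reshape the output to 28x28
    ];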
I get the error: Error using trainnet - Size of predictions and targets must match.
I'm not sure whether the error is with the layer sizes or with the input autoencoderData (my attempt at making each image its own label). Any help would be appreciated.
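One way to rule out the datastore side is to preview a single pair from autoencoderData and confirm that the predictor and target have the same size (a quick check using the variables above):
% Sanity check on the transformed datastore (assumes imds and autoencoderData from above)
pair = preview(autoencoderData);   % 1x2 cell: {input image, target image}
size(pair{1})                      % expected 28x28
size(pair{2})                      % should match the input exactly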
0 comments
Accepted Answer
Gayathri
on 8 Nov 2024 at 4:36
An autoencoder consists of an encoder, a bottleneck layer, and a decoder; the decoder brings the output back to the same size as the input. So we need to make sure that "net_1" follows this structure so that the sizes of "predictions" and "targets" match.
Please refer to the below code for a simple autoencoder network.
layers = [
    imageInputLayer([28 28 1], 'Name', 'input', 'Normalization', 'none')
    % Encoder
    convolution2dLayer(3, 32, 'Padding', 'same', 'Stride', 1, 'Name', 'conv1')
    reluLayer('Name', 'relu1')
    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'maxpool1')
    convolution2dLayer(3, 64, 'Padding', 'same', 'Stride', 1, 'Name', 'conv2')
    reluLayer('Name', 'relu2')
    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'maxpool2')
    % Bottleneck
    convolution2dLayer(3, 128, 'Padding', 'same', 'Stride', 1, 'Name', 'bottleneck')
    reluLayer('Name', 'relu_bottleneck')
    % Decoder
    transposedConv2dLayer(3, 64, 'Stride', 2, 'Cropping', 'same', 'Name', 'upconv1')
    reluLayer('Name', 'relu3')
    transposedConv2dLayer(3, 32, 'Stride', 2, 'Cropping', 'same', 'Name', 'upconv2')
    reluLayer('Name', 'relu4')
    transposedConv2dLayer(3, 1, 'Stride', 1, 'Cropping', 'same', 'Name', 'upconv3')
    sigmoidLayer('Name', 'sigmoid_output') % Keeps the output within [0, 1]
    ];
% Assemble the layers into a dlnetwork
net_1 = dlnetwork(layers);
With this change, I was able to train the network. Please see the screenshot below, which shows the training progress.
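To then compare a digit with its reconstruction, something along these lines should work (a sketch that reuses the imds, autoencoderData, options, and net_1 variables defined above; the image is rescaled to [0, 1] to match the sigmoid output):
% Train on the image-to-image datastore from the question
net = trainnet(autoencoderData, net_1, "mse", options);
% Reconstruct one digit and compare it with the original
img = single(readimage(imds, 1)) / 255;     % 28x28 image rescaled to [0, 1]
rec = predict(net, dlarray(img, "SSCB"));   % forward pass through the autoencoder
rec = extractdata(rec);
figure
subplot(1,2,1), imshow(img), title("Original")
subplot(1,2,2), imshow(rec), title("Reconstruction")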
For more information on autoencoders, please refer to the link below.
Hope you find this information helpful.
0 comments
More Answers (0)