Normalizing input data for the Deep Learning Trainer interface takes a very long time

24 views (last 30 days)
Hey guys,
I'm trying to train a network on R2022a with the help of the network trainer, with a custom datastore set as the training input. The datastore has ~50,000 observations (~0.6 MB each) with some simple augmentation methods integrated.
The problem is that every time I start training, it gets stuck in the "normalizing input data" stage for a very long time (30 minutes or so). Why is that, and how can I improve it?
  4 comments
KSSV
KSSV on 25 Jul 2022
What are the input size and target? What exactly is the problem?
HW X
HW X on 25 Jul 2022
@KSSV oh sure, here are some details. The input size is 224×224×2 double, and the response is 1×1×2 double (a normalized plane coordinate, so to speak). The network has ~45 layers with 2M parameters. Data augmentation and shuffling are implemented within my custom mini-batchable datastore.
I've examined my datastore according to the documentation and it seems to work well, but somehow it still takes a long time to prepare the data before training.
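A quick way to rule out slow reads is to time a few mini-batches directly. A rough sketch, where ds stands for the custom datastore:
reset(ds);
tic;
for k = 1:10
    data = read(ds);   % one mini-batch per read
end
fprintf('Mean time per mini-batch read: %.3f s\n', toc/10);
reset(ds);
If each read is fast, the stall is in the statistics pass rather than in the datastore itself.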


Answers (2)

HW X
HW X on 26 Jul 2022
Trainer>Trainer.doComputeStatistics
I ran the code profiler: the function above takes up most of the training initialization time. An entire epoch of data is pre-loaded from disk every time before I start a training session.
Now the question is: does anyone know how to make this step faster?
  1 comment
David
David on 6 Dec 2022
You'd think a lot of this could be pre-calculated and saved if you're not changing data sets constantly.
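A rough sketch of that idea, assuming the usual read contract of a mini-batchable datastore (a table with the predictors in the first variable; variable and file names here are hypothetical):
% Accumulate the per-pixel mean over the whole datastore once and save it
reset(ds);
runningSum = zeros(224, 224, 2);
n = 0;
while hasdata(ds)
    batch = read(ds);
    X = batch{:, 1};                 % cell array of 224x224x2 predictors
    for i = 1:numel(X)
        runningSum = runningSum + double(X{i});
        n = n + 1;
    end
end
inputMean = runningSum / n;
save('inputStats.mat', 'inputMean'); % reuse across training sessions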



Richard
Richard on 7 Dec 2022
You can set the ResetInputNormalization training option to false to prevent the input statistics from being recomputed every time. You will need to provide an input layer that has the appropriate values set: if you have an output network from an earlier training attempt, you can extract its input layer and put it in place of the input layer in the layer graph you are training.
Here is how to do it for the code example in the trainNetwork help:
[XTrain, YTrain] = digitTrain4DArrayData;
layers = [
    imageInputLayer([28 28 1], 'Name', 'input')
    convolution2dLayer(5, 20, 'Name', 'conv_1')
    reluLayer('Name', 'relu_1')
    convolution2dLayer(3, 20, 'Padding', 1, 'Name', 'conv_2')
    reluLayer('Name', 'relu_2')
    convolution2dLayer(3, 20, 'Padding', 1, 'Name', 'conv_3')
    reluLayer('Name', 'relu_3')
    additionLayer(2, 'Name', 'add')
    fullyConnectedLayer(10, 'Name', 'fc')
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'classoutput')];
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph, 'relu_1', 'add/in2');

% Perform a single epoch of training, which initializes the input layer
initOptions = trainingOptions('sgdm', 'Plots', 'training-progress', 'MaxEpochs', 1);
[net, info] = trainNetwork(XTrain, YTrain, lgraph, initOptions);

% Transfer the initialized input layer into the untrained layer graph
input = net.Layers(1);
lgraph = replaceLayer(lgraph, input.Name, input);

% Perform training without recomputing the input statistics
options = trainingOptions('sgdm', 'Plots', 'training-progress', 'ResetInputNormalization', false);
[net, info] = trainNetwork(XTrain, YTrain, lgraph, options);
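For the 224×224×2 regression input in the original question, the same idea works without the extra initialization epoch if the statistics are already available (a sketch; inputMean is assumed to have been precomputed and saved, e.g. as in the comment above):
% Supply precomputed statistics so no statistics pass is needed
load('inputStats.mat', 'inputMean');   % hypothetical precomputed mean
inputLayer = imageInputLayer([224 224 2], ...
    'Normalization', 'zerocenter', ...
    'Mean', inputMean, ...
    'Name', 'input');
% ...or, if the custom datastore already scales the data, skip normalization:
% inputLayer = imageInputLayer([224 224 2], 'Normalization', 'none', 'Name', 'input');
Either way, train with 'ResetInputNormalization' set to false so that trainNetwork does not recompute the statistics.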
