Optimizing GRU training with Bayesian optimization shows ERROR in some iterations

Hi all, I'm having a problem optimizing GRU hyperparameters with Bayesian optimization. The code doesn't report an error, but some iterations of the Bayesian optimization process show ERROR. What should I do about it? I would greatly appreciate any help.

Accepted Answer

Alan Weiss
Alan Weiss on 16 Nov 2023
The error is coming from your code. Apparently, some visited points (for example, NumOfUnits = 30 with InitialLearnRate = 0.8 or 0.2 and L2Regularization = 0.0048 or 7.5e-6) cause your objective function or nonlinear constraint functions to return NaN.
You can test this outside of bayesopt to see where your code returns NaN.
If your code is running as expected, then there is nothing wrong with ignoring the iterations that lead to errors.
Alan Weiss
MATLAB mathematical toolbox documentation
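One way to do this (a minimal sketch; it assumes BOFunction is the objective used in the bayesopt call and that its training data are already in the base workspace) is to build a one-row table with one of the suspect parameter combinations and call the objective directly:

```matlab
% Reproduce one of the points that showed ERROR, outside bayesopt.
% The variable names must match the optimizableVariable names exactly.
optVars = table(30, 0.8, 0.0048, ...
    'VariableNames', {'NumOfUnits', 'InitialLearnRate', 'L2Regularization'});
valError = BOFunction(optVars)   % does this return NaN, or throw?
```

If the call returns NaN or throws, you can step through the objective with the debugger to find the failing line.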

5 comments

Thank you very much for your help! But how can I test this outside of bayesopt?
Well, it depends how you do the Bayesian optimization. I suppose that you are using a fit function with the OptimizeHyperparameters argument, but I don't know offhand which fit function you are using. In any case, you can usually specify which parameter values the fit function should use instead of having bayesopt vary those parameters.
For more help, I'd need more detailed information, such as your exact function call and possibly some of your data.
Alan Weiss
MATLAB mathematical toolbox documentation
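For illustration only (the thread does not say which fit function is in use, and X and Y here are placeholder data), with a fit function such as fitrensemble you can fix the parameter values yourself instead of letting Bayesian optimization vary them:

```matlab
% Fix the hyperparameter values directly...
mdl = fitrensemble(X, Y, 'Method', 'LSBoost', ...
    'NumLearningCycles', 100, 'LearnRate', 0.1);
% ...versus letting the fit function run Bayesian optimization over them:
% mdl = fitrensemble(X, Y, 'OptimizeHyperparameters', 'auto');
```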
Yuanru Zou
Yuanru Zou on 17 Nov 2023
Edited: Yuanru Zou on 17 Nov 2023
Hi, I have used Bayesian optimization for the GRU's hyperparameters: the number of neurons in the hidden layer, InitialLearnRate, and L2Regularization. I have written the objective function as follows:
function valError = BOFunction(optVars)
    % Fetch the normalized training data from the base workspace
    inputn_train  = evalin('base', 'inputn_train');
    outputn_train = evalin('base', 'outputn_train');
    inputSize  = size(inputn_train, 1);
    outputSize = size(outputn_train, 1);
    % GRU network architecture
    opt.gru = [ ...
        sequenceInputLayer(inputSize)
        gruLayer(optVars.NumOfUnits, 'OutputMode', 'sequence', 'Name', 'hidden')
        fullyConnectedLayer(outputSize)
        regressionLayer('Name', 'out')];
    % Training options built from the hyperparameters proposed by bayesopt
    opt.opts = trainingOptions('adam', ...
        'MaxEpochs', 50, ...
        'GradientThreshold', 1, ...
        'ExecutionEnvironment', 'cpu', ...
        'InitialLearnRate', optVars.InitialLearnRate, ...
        'L2Regularization', optVars.L2Regularization, ...
        'LearnRateSchedule', 'piecewise', ...
        'LearnRateDropPeriod', 40, ...
        'LearnRateDropFactor', 0.2, ...
        'Verbose', 0, ...
        'Plots', 'none');
    % Train, then report training-set RMSE as the objective value
    net = trainNetwork(inputn_train, outputn_train, opt.gru, opt.opts);
    t_sim1 = predict(net, inputn_train);
    err = t_sim1 - outputn_train;      % renamed from "error" to avoid shadowing MATLAB's error()
    valError = sqrt(mean(err(:).^2));  % mean over all elements so the objective is scalar
end
This is the code in my main program that calls Bayesian optimization for the GRU:
ObjFcn = @BOFunction;
% Search space for the three hyperparameters
optimVars = [
    optimizableVariable('NumOfUnits', [2, 50], 'Type', 'integer')
    optimizableVariable('InitialLearnRate', [1e-3, 1], 'Transform', 'log')
    optimizableVariable('L2Regularization', [1e-10, 1e-2], 'Transform', 'log')];
BayesObject = bayesopt(ObjFcn, optimVars, ...
    'MaxTime', Inf, ...
    'IsObjectiveDeterministic', false, ...
    'MaxObjectiveEvaluations', 30, ...
    'Verbose', 1, ...
    'UseParallel', false);
% Retrieve the best estimated hyperparameters
NumOfUnits       = BayesObject.XAtMinEstimatedObjective.NumOfUnits;
InitialLearnRate = BayesObject.XAtMinEstimatedObjective.InitialLearnRate;
L2Regularization = BayesObject.XAtMinEstimatedObjective.L2Regularization;
% Retrain the final network with the chosen hyperparameters
inputSize  = size(inputn_train, 1);
outputSize = size(outputn_train, 1);
gru = [ ...
    sequenceInputLayer(inputSize)
    gruLayer(NumOfUnits, 'OutputMode', 'sequence', 'Name', 'hidden')
    fullyConnectedLayer(outputSize)
    regressionLayer('Name', 'out')];
opts = trainingOptions('adam', ...
    'MaxEpochs', 200, ...
    'GradientThreshold', 1, ...
    'ExecutionEnvironment', 'cpu', ...
    'InitialLearnRate', InitialLearnRate, ...
    'L2Regularization', L2Regularization, ...
    'LearnRateSchedule', 'piecewise', ...
    'Verbose', true, ...
    'Plots', 'training-progress');
GRUnet = trainNetwork(inputn_train, outputn_train, gru, opts);
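As a side note (a sketch; SafeBOFunction is a hypothetical wrapper, not part of the original code): if some parameter combinations make training diverge to NaN, you can wrap the objective so those points return a large finite penalty instead of surfacing as ERROR rows in the bayesopt display:

```matlab
function valError = SafeBOFunction(optVars)
% Wrapper around the objective above. Failed or NaN evaluations get a
% large finite penalty so bayesopt can still model those regions.
try
    valError = BOFunction(optVars);
catch
    valError = NaN;
end
if ~isfinite(valError)
    valError = 1e3;   % penalty; pick something larger than a typical RMSE
end
end
```

Then pass @SafeBOFunction to bayesopt instead of @BOFunction.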
I'm sorry, but I don't know much about deep learning, so I don't think I can help you with your code. It looks like you are training a neural network and optimizing it for minimal mean squared error. I don't see anything obviously wrong, but then again I don't know what would cause the network training process, or something else, to throw an error. Usually in these systems there is so much randomness going on (from the stochastic gradient descent to the data collection process) that things can get noisy or fail for a variety of reasons. In your case, I really don't know.
Sorry.
Alan Weiss
MATLAB mathematical toolbox documentation
Okay, thanks for your patience and help, I hope you're doing well at work and in good health!


More Answers (0)

Categories

Find more on Sequence and Numeric Feature Data Workflows in Help Center and File Exchange.


Asked: 16 Nov 2023
Commented: 22 Nov 2023
