simulink neural network producing different outputs to workspace

I have trained a network, and when I test it with plotresponse I get the graph in the plotresponse image below; but when I create a Simulink block of this network and test it with the same input, I get the graph in the scope.png file below (yellow is the target). I thought it was a problem with normalisation, but now I don't know what could be causing it.
Thanks in advance.

Accepted Answer

william edeg on 12 Feb 2020
If anyone has the same problem and finds this, pay attention to the number of data points you are using for training.
I was using To Workspace blocks with sample times of 0.001 to collect my training data, but they didn't collect at anything near the proper times or time intervals (intervals of 0.001 over 200 s should obviously produce 200,000 data points, but I was collecting something like 66,667).
I switched to using Scope blocks to collect my data instead, and now I have the correct data, and my gensim network responds identically to the network it was generated from (when using identical inputs).
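One quick way to catch this kind of mismatch is to compare the expected and actual sample counts after a run. A minimal sketch, assuming a To Workspace block logging a timeseries named simout, with the 200 s stop time and 0.001 s sample time described above (the variable name and values are illustrative):

    % Sanity check: expected vs. actual number of logged samples
    Tstop = 200;                     % simulation stop time in seconds
    Ts    = 0.001;                   % intended sample time
    expected = Tstop/Ts + 1;         % 200001 samples, counting t = 0
    actual   = numel(simout.Time);   % what actually reached the workspace
    fprintf('expected %d samples, got %d\n', expected, actual);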

More Answers (3)

Nima SALIMI on 25 Jan 2020
I assume that when you are using the Simulink block you are training a new network from scratch. Any time you train a network the results will be different, due to the random initialization of the weight and bias values (and/or a different split of the training and test datasets), even when using the same dataset and hyperparameters. For this reason, a good practice is to train and test the model (whether using Simulink or toolbox functions) a number of times, to reach a more convincing decision about the performance of your model.
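A rough sketch of that repeated-training idea, assuming a feature matrix x and target matrix t already exist in the workspace (the fitnet layer size of 10 is an arbitrary choice):

    % Train the same architecture several times; each run starts from
    % new random weights, so the measured performance varies.
    nRuns = 10;
    perf  = zeros(1, nRuns);
    for k = 1:nRuns
        net = fitnet(10);
        net = train(net, x, t);
        perf(k) = perform(net, t, net(x));   % mean squared error
    end
    fprintf('MSE: mean %.4g, std %.4g\n', mean(perf), std(perf));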
  3 comments
Nima SALIMI on 3 Feb 2020
My short answer to your question: nothing is wrong with getting different results when using Simulink one time and not using it another time (even with the same network and the same dataset)!
My long answer: as I said in my previous answer, it is normal behaviour for any neural network to give different results each time you train and test the same network on the same dataset. Even if you use only the command line and not a Simulink block, training and testing the same network on exactly the same dataset n different times will give you n different results (so it's absolutely normal and nothing is wrong!). For further reading on the reason for this behaviour: https://machinelearningmastery.com/reproducible-results-neural-networks-keras/
So what I suggest is:
  1. To get exactly the same results each time you train and test the model, use the rng() function (e.g. rng(2)) for the sake of reproducibility of the results, and you will see that you get the same results (see the sketch after this list) :)
  2. But as I said, when you want to choose between several models (let's say 2 models), you should run both models several times (30+, let's say 40 times). This way you will have 40 accuracy values for each model/network. Then take the mean and std of those 40 values for the two models to pick the better one (an even better way is to apply a statistical significance test to those accuracy values for the model selection).
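A minimal reproducibility sketch along the lines of suggestion 1 (the seed value 2 and the fitnet architecture are arbitrary, and x and t are assumed training data):

    % Fixing the seed makes weight initialization and data division
    % repeatable, so two runs produce identical trained networks.
    rng(2); net1 = train(fitnet(10), x, t);
    rng(2); net2 = train(fitnet(10), x, t);
    isequal(net1(x), net2(x))   % true: identical outputs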
I hope this comment answers your question in more detail.
Thanks for formally accepting my answer.
Best,
Nima
william edeg on 3 Feb 2020
Oh, I see. Thanks for your help.



Greg Heath on 25 Jan 2020
A simpler solution is to ALWAYS begin the program by resetting the random number generator. For example, choose your favorite NONNEGATIVE INTEGER as a seed and begin your program with
    seed = 0;   % your chosen nonnegative integer (0 is an arbitrary example)
    rng(seed)
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 comment
william edeg on 26 Jan 2020
Edited: william edeg on 26 Jan 2020
Thanks for the response. I think I misunderstood what you meant for a moment. Do you mean the random number generator for the initial network weights? The Simulink network was created using the gensim function, so I think it should be identical to the workspace network. The inputs are also identical, which is why I'm confused about getting different responses.
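For reference, a sketch of the workflow being described, assuming a trained network net and its input data x in the workspace (the 0.001 sample time echoes the earlier description):

    gensim(net, 0.001)   % generate a Simulink block from the trained net
    y_ws = net(x);       % workspace output, to compare against the scope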



Nima SALIMI on 25 Jan 2020
From a machine learning perspective, it is better practice to train the model several times and compare the results accordingly (rather than fixing the random seed), as we are interested in making the effect of randomness as negligible as possible. The approach I proposed can also be found in the MATLAB documentation (https://au.mathworks.com/help/deeplearning/gs/classify-patterns-with-a-neural-network.html, second-to-last paragraph).
Anyway, if your time is very limited and you want to check the effect of some variables on the model performance (it depends on the problem at hand), then you can just fix the seed!
  3 comments
Nima SALIMI on 26 Jan 2020
Thanks Greg for the comment! I believe this is a classic machine learning concept, and experienced ML practitioners will already have understood my point in the answer.
But to make the principles clearer for ML beginners (who may nonetheless be extraordinarily good at coding!), I strongly recommend reading this short article: https://machinelearningmastery.com/reproducible-results-neural-networks-keras/
Seeding the random number generator is also one of the solutions to embrace the stochastic nature of neural networks (but maybe just not the best solution?)
william edeg on 26 Jan 2020
Thanks again for your response. I think I might not have explained my problem well, sorry. It seems like you and Greg have read my problem as getting different responses from different networks, but the Simulink net was made using the gensim function, so it should be identical to the other network I'm comparing it to.
I've successfully trained networks on simpler NARX functions and used gensim to create Simulink networks that respond identically to their workspace versions, but for some reason it's not working for this more complex function.
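For context, a rough sketch of the NARX-plus-gensim workflow being described, assuming input and target sequences X and T as cell arrays of timesteps (the delays, hidden layer size, and sample time are placeholder values):

    % Train an open-loop NARX network, close the loop, and generate
    % a Simulink block from it.
    net = narxnet(1:2, 1:2, 10);                 % delays and 10 hidden units
    [Xs, Xi, Ai, Ts] = preparets(net, X, {}, T); % shift sequences for the delays
    net = train(net, Xs, Ts, Xi, Ai);
    netc = closeloop(net);                       % closed loop for multistep simulation
    gensim(netc, 0.001)                          % Simulink block with 1 ms sample time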


Version

R2019b
