training error in k-fold method
Hi
I am using the Regression Learner app in MATLAB and I want to use k-fold cross-validation. I set aside 15% of the data for testing (selected at random), and for the remaining 85% of the data I used 5-fold cross-validation. The Regression Learner app gives me the validation error, and when I supply the test data it also gives me a test error, but it has no menu or option for the training error. How can I calculate the training error? Is it OK to run prediction on that 85% of the data, calculate the error metrics, and report them as the training error?
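For reference, here is a rough command-line sketch of the split described above (the table name data and the use of cvpartition are placeholders/assumptions; in the app the split is done through its dialogs):
% Hold out 15% of the rows as the test set, keep 85% for training/validation
rng(0)  % for reproducibility of the random split
cv_holdout = cvpartition(height(data), 'Holdout', 0.15);
tbl_training = data(training(cv_holdout), :);   % 85% used for training and 5-fold CV
tbl_test     = data(test(cv_holdout), :);       % 15% held-out test set
% 5-fold partition of the remaining 85% (the app does this internally
% when 5-fold cross-validation is selected)
cv_kfold = cvpartition(height(tbl_training), 'KFold', 5);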
1 comment
Image Analyst
on 8 Oct 2022
Edited: Image Analyst
on 8 Oct 2022
Can you attach your data so we can try the Regression Learner ourselves? Which model(s) did you try your data on?
If you have any more questions, then attach your data and code to read it in with the paperclip icon after you read this:
Answers (1)
Drew
on 19 Jan 2023
Edited: Drew
on 19 Jan 2023
The Regression Learner app does not report error metrics on the training data for the final model. The answer at https://www.mathworks.com/matlabcentral/answers/1881227-question-on-regression-learner-app includes an example of how to compute them. In short, export the final model, run prediction on the training data with it, and then calculate the desired error metric. For example, if the final model is exported as trainedModel, the training data is in the table tbl_training, and the response is in tbl_training.Y, then the following code calculates the RMSE on the training data:
% Do prediction on the training set, using the final model
Y_training = trainedModel.predictFcn(tbl_training);
% Calculate RMSE on training set using final model
rmse_on_training_data = sqrt(mean((Y_training-tbl_training.Y).^2))
In general, the error metric (such as RMSE) computed on the training data with the final model will be lower than the error metric computed on the validation data with the k-fold validation models, because evaluating the final model on the training data is "cheating": the model has already seen the data it is being asked to predict. In other words, the same data is used for both training and testing, so the training error is not a good estimate of the error you can expect on new test data that was not seen during model training.
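As a rough illustration of that gap, here is a minimal sketch (not part of the original workflow; the model type and the table/response names tbl_training and Y are assumptions) comparing the resubstitution (training) RMSE with the 5-fold cross-validation RMSE for a model fitted at the command line:
% Fit a regression tree on all of the training data (model choice is
% only for illustration; the same comparison applies to other model types)
mdl = fitrtree(tbl_training, 'Y');
% Resubstitution (training) error: the model predicts the same data it was fitted on
rmse_train = sqrt(resubLoss(mdl))
% 5-fold cross-validated error: each fold is predicted by a model that did
% not see it during training, so this estimate is typically higher
cvmdl = crossval(mdl, 'KFold', 5);
rmse_cv = sqrt(kfoldLoss(cvmdl))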
0 comments