
Is it correct that the Regression Learner Toolbox uses the best parameters for each model?

3 views (last 30 days)
Is it correct that the Regression Learner Toolbox uses the best parameters for each model? For example, in the case of trees, the accuracy depends on how many branches there are. Does Regression Learner set these parameters to the values that give the best accuracy? Do SVM, neural network regression, linear regression, etc., also use parameters with optimal accuracy?

Accepted Answer

Yatharth on 5 Sep 2023
Hi,
I understand that you want to know whether the results obtained with the parameters selected by the toolbox are the best possible, or whether there is scope for improvement via manual tuning. However, since you mention "accuracy" and "optimal accuracy", that depends on your data and your expectations of the model.
Let's assume you are using a Linear Regression Model:
Then the model contains an intercept and a linear term for each predictor. A least-squares fit is used to determine the model parameters, which are the intercept and the coefficients of the linear terms: the predicted response is compared to the true response, an error is calculated, and the algorithm minimizes the overall squared error.
So, for the chosen model structure, the fitted parameters are "optimal" in that least-squares sense.
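As a minimal sketch of that idea (the predictor matrix X and response y here are made-up illustration data, not from your problem):

% Example data: 100 observations, 3 predictors
rng(0)
X = randn(100, 3);
y = 2 + X*[1.5; -0.7; 0.3] + 0.1*randn(100, 1);

% fitlm fits an intercept plus one linear term per predictor by least squares
mdl = fitlm(X, y);
disp(mdl.Coefficients)   % estimated intercept and slopes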
But you can of course tune your model by changing certain hyperparameters, which might lead to better predictions. There is always a trade-off between performance and accuracy, and you have to distinguish between accuracy on the training data and accuracy on validation/test data (keyword: "overfitting").
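As a sketch of what such manual tuning can look like outside the app (reusing the illustrative X and y from the sketch above; the candidate leaf sizes are arbitrary):

% Compare cross-validated MSE for different leaf sizes of a regression tree
leafSizes = [1 5 10 20 50];
cvLoss = zeros(size(leafSizes));
for k = 1:numel(leafSizes)
    cvTree = fitrtree(X, y, 'MinLeafSize', leafSizes(k), 'CrossVal', 'on');
    cvLoss(k) = kfoldLoss(cvTree);   % 10-fold cross-validated MSE
end
[~, best] = min(cvLoss);
fprintf('Lowest cross-validated loss at MinLeafSize = %d\n', leafSizes(best))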
I hope this helps.
  1 comment
Ive J on 5 Sep 2023
Edited: Ive J on 5 Sep 2023
This doesn't address what the OP was asking. Moreover, this is rather a comparison of apples and oranges. In the case of a generalized linear model, we estimate the true parameters of a parameterized model. This is not the case for the other methods in the Regression Learner Toolbox: NNs, GPR, trees, etc. These models have hyperparameters that need to be optimized in a proper manner (nested CV on the training set, for instance). That is not the case for plain linear regression. Of course, you could be talking about penalized regression models (LASSO, ridge, or elastic net), which do contain hyperparameters.
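A minimal sketch of that distinction (the data and optimization settings are illustrative assumptions, not a recommendation): fitlm has no hyperparameters to tune, whereas fitting functions such as fitrtree expose an 'OptimizeHyperparameters' option that searches hyperparameters with internal cross-validation.

rng(1)
X = randn(200, 4);
y = X(:,1) - 2*X(:,2).^2 + 0.5*randn(200, 1);

% Linear regression: the intercept and coefficients are model parameters,
% estimated directly by least squares -- nothing to tune
lmMdl = fitlm(X, y);

% Regression tree: hyperparameters such as MinLeafSize are searched by
% Bayesian optimization with internal cross-validation
treeMdl = fitrtree(X, y, ...
    'OptimizeHyperparameters', 'auto', ...
    'HyperparameterOptimizationOptions', struct('KFold', 5, 'ShowPlots', false));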


More Answers (0)
