Can someone explain the algorithm used to obtain the predictions of the bagged models in Ensemble Bagged Tree classification in MATLAB?
Gunjan Rateria
on 2 Jul 2020
Commented: Gunjan Rateria
on 2 Jul 2020
Hi, I have been trying to understand Ensemble Bagged Tree classification. I know that it combines a set of trained weak learner models and predicts the ensemble response for new data by aggregating the predictions of its weak learners. Can you tell me what these weak learners are? What algorithm do they use to arrive at a prediction? Is it gradient descent, AdaBoost, or something else? Also, are all the models the same, with only the data varying between the different bags, or are the models based on different algorithms as well?
I really appreciate your help and time.
0 comments
Accepted Answer
bharath pro
on 2 Jul 2020
In general, ensemble classifiers combine a variety of weak classifiers to reduce high variance. The weak learners could be any classification model, from logistic regression to neural nets, and could use any optimization technique to reduce their loss. But in an Ensemble Bagged Tree classifier (I think you are talking about this: https://www.mathworks.com/help/stats/treebagger-class.html), the weak learners are all decision trees, and a bootstrap sample of the data is drawn for each tree. Decision trees usually use the Gini index or entropy to decide their next split. AdaBoost is a separate ensemble technique (boosting) that can also be built from decision trees, but it is not what bagging uses. So in Ensemble Bagged Tree classification, all the models are decision trees, and only the data varies between them due to bagging.
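For reference, here is a minimal sketch of how TreeBagger grows bagged classification trees and aggregates their predictions; the use of the built-in fisheriris sample data and the specific parameter values are assumptions for illustration only:

% Load example data (assumption: fisheriris ships with the Statistics and Machine Learning Toolbox)
load fisheriris
rng(1) % for reproducibility of the bootstrap samples

% Grow 50 bagged classification trees; each tree is trained on its own bootstrap sample
Mdl = TreeBagger(50, meas, species, 'Method', 'classification', 'OOBPrediction', 'on');

% Predict by aggregating the individual trees' outputs into one ensemble response
predictedLabels = predict(Mdl, meas(1:5, :));

% Out-of-bag error gives an estimate of the generalization error
oobErr = oobError(Mdl);

Each tree here is a standard CART-style decision tree (MATLAB's classification trees split on Gini's diversity index by default), and the ensemble prediction simply aggregates the per-tree results rather than using gradient descent or boosting.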
More Answers (0)