Revisiting ensembling for improving the performance of deep learning models

Ensembling is a well-known strategy that combines several different models into a new model for classification or regression tasks. Over the years, ensembling has been shown to provide superior performance in a variety of contexts related to pattern recognition and artificial intelligence. Moreover, the ideas underlying ensembling have inspired the design of the most recent deep learning architectures: a close analysis of these architectures shows that certain connections among layers, or groups of layers, achieve effects similar to those obtained by bagging, boosting, and stacking, the three well-known basic approaches to ensembling. However, we argue that research has not yet fully leveraged the potential of ensembling. This paper investigates possible approaches to combining weak learners, or sub-components of weak learners, in the context of bagging. Building on previous results obtained in specific domains, we extend the approach to a reference dataset, obtaining encouraging results.
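As a point of reference only (this is not the paper's method), the sketch below illustrates the generic bagging idea the abstract refers to: several weak learners are trained on bootstrap resamples of the training set and fused by averaging their predicted class probabilities. The dataset, base learner, and ensemble size are arbitrary illustrative choices.

```python
# Minimal sketch of bagging: train weak learners on bootstrap samples and
# fuse them by averaging predicted probabilities. Illustrative only; the
# dataset, base learner, and ensemble size are hypothetical choices.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
n_learners = 15  # ensemble size (arbitrary for this example)
probas = []
for _ in range(n_learners):
    # Bootstrap sample: draw training indices with replacement
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    learner = DecisionTreeClassifier(max_depth=8).fit(X_tr[idx], y_tr[idx])
    probas.append(learner.predict_proba(X_te))

# Fuse the weak learners by averaging their probability outputs
ensemble_pred = np.mean(probas, axis=0).argmax(axis=1)
print("Bagged ensemble accuracy:", (ensemble_pred == y_te).mean())
```

Averaging probabilities is only one possible fusion rule; the paper's contribution concerns alternative ways of combining weak learners, or their sub-components, within this bagging setting.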
