On the Effectiveness of Deep Ensembles for Small Data Tasks

1 Jan 2021 · Lorenzo Brigato, Luca Iocchi

Deep neural networks are the gold standard for image classification and many other tasks, but they typically need large amounts of data to reach their best performance. In this work, we focus on classification problems with few labeled examples per class and improve sample efficiency in the low-data regime by using an ensemble of relatively small deep networks. Our work is the first to broadly study neural ensembling in small-data domains. We compare different ensemble configurations by varying the complexity of the base members under a fixed total computational budget, and we find that the state of the art is improved by keeping the complexity of single models low while increasing the ensemble size. Furthermore, we investigate the effectiveness of different loss functions and show that their choice should account for several factors. Finally, our proposed ensemble proves to be the most robust configuration across variations in dataset type and size. Our findings are empirically validated through an extensive set of experiments on popular datasets.
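The paper's exact architectures and training recipe are not reproduced on this page, but the core idea (training several independent low-capacity networks under a fixed total compute budget and averaging their softmax outputs) is standard deep ensembling. Below is a minimal PyTorch sketch of that idea; `SmallCNN`, the member count, and the training hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a deep ensemble of small CNNs for classification
# with few labeled examples per class. Architecture, member count, and
# hyperparameters are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """A deliberately low-capacity base member (hypothetical architecture)."""
    def __init__(self, num_classes: int, width: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(width, width * 2, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(width * 2, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_member(model, loader, epochs=50, lr=1e-3):
    """Plain cross-entropy training; members differ only in random
    initialization and data ordering."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model

@torch.no_grad()
def ensemble_predict(members, x):
    """Standard deep-ensemble inference: average the members' softmax
    outputs, then take the argmax."""
    probs = torch.stack([F.softmax(m.eval()(x), dim=1) for m in members])
    return probs.mean(dim=0).argmax(dim=1)

# Fixed total budget spent on many small members rather than one large model:
# members = [train_member(SmallCNN(num_classes=10), train_loader)
#            for _ in range(8)]
# preds = ensemble_predict(members, test_batch)
```

The commented usage at the end reflects the abstract's finding: under a fixed budget, allocating compute to many low-capacity members and increasing the ensemble size is reported to outperform spending the same budget on fewer, larger models.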
