Boosting Deep Ensemble Performance with Hierarchical Pruning

Deep neural network ensembles have become attractive learning techniques, offering better generalizability than individual models. However, mission-critical applications may require a large number of deep neural networks to achieve the desired accuracy and generalizability, making ensemble execution costly in both runtime and space. This paper proposes a novel hierarchical ensemble pruning approach that can effectively examine a given pool of M base models and identify smaller, high-quality deep ensembles of size S (S << M) with higher ensemble accuracy than the entire deep ensemble of all M models. Our hierarchical pruning approach, coined HQ, combines three novel techniques. First, we show that focal diversity metrics can accurately capture the negative correlation among the member models of an ensemble, and that using them can boost ensemble accuracy. Second, we introduce a focal-diversity-based hierarchical pruning algorithm to progressively identify low-cost ensembles with high ensemble diversity and accuracy. Third, we design a focal diversity consensus method to find smaller deep ensembles with low negative correlation. We demonstrate that such ensembles offer high accuracy and robustness while being more time- and space-efficient in ensemble decision making. Evaluated on two benchmark datasets, the proposed focal-diversity-powered hierarchical pruning finds significantly smaller ensembles of deep neural network models while achieving the same or better classification generalizability.
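The paper's implementation is not reproduced here; the sketch below is only a minimal, hedged illustration of the general idea behind diversity-driven ensemble pruning. It greedily shrinks a pool of M base models using a score that mixes majority-vote ensemble accuracy with mean pairwise disagreement on a validation set. The disagreement measure is a simple stand-in for the paper's focal diversity metric, and every name here (`hierarchical_prune`, `alpha`, `min_size`) and the simulated predictions are hypothetical, not taken from the paper.

```python
import numpy as np

def ensemble_accuracy(preds, labels):
    """Majority-vote accuracy. preds: (S, N) int array of per-model class predictions."""
    votes = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
    return float((votes == labels).mean())

def disagreement(preds):
    """Mean pairwise disagreement rate, a crude stand-in for focal diversity."""
    S = preds.shape[0]
    pairs = [(preds[i] != preds[j]).mean() for i in range(S) for j in range(i + 1, S)]
    return float(np.mean(pairs)) if pairs else 0.0

def hierarchical_prune(preds, labels, min_size=2, alpha=0.5):
    """Greedily drop one model at a time, keeping the subset that maximizes
    alpha * accuracy + (1 - alpha) * diversity. Returns the best subset per size."""
    members = list(range(preds.shape[0]))
    best = {}
    while len(members) > min_size:
        scored = []
        for m in members:
            subset = [k for k in members if k != m]
            sub = preds[subset]
            score = alpha * ensemble_accuracy(sub, labels) + (1 - alpha) * disagreement(sub)
            scored.append((score, subset))
        score, members = max(scored, key=lambda t: t[0])
        best[len(members)] = (score, list(members))
    return best

# Toy demo on simulated predictions: M=8 models, each ~80% accurate with independent noise.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)
M = 8
preds = np.where(rng.random((M, 500)) < 0.8, labels, rng.integers(0, 10, (M, 500)))
for size, (score, subset) in sorted(hierarchical_prune(preds, labels).items()):
    print(f"size={size}  score={score:.3f}  members={subset}")
```

In this toy setup, the greedy pass often surfaces mid-sized subsets that score as well as the full pool, which mirrors the paper's claim that pruned ensembles of size S << M can match or beat the complete ensemble at lower cost.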
