# Ensemble Distribution Distillation

Ensembles of models often yield improvements in system performance. These ensemble approaches have also been empirically shown to yield robust measures of uncertainty, and are capable of distinguishing between different *forms* of uncertainty. However, ensembles come at a computational and memory cost which may be prohibitive for many applications. Significant work has been done on distilling an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve an accuracy comparable to that of an ensemble. However, information about the *diversity* of the ensemble, which can yield estimates of different forms of uncertainty, is lost. This work considers the novel task of *Ensemble Distribution Distillation* (EnD²): distilling the distribution of the predictions from an ensemble, rather than just the average prediction, into a single model. EnD² enables a single model to retain both the improved classification performance of ensemble distillation and information about the diversity of the ensemble, which is useful for uncertainty estimation. A solution for EnD² based on Prior Networks, a class of models which allow a single neural network to explicitly model a distribution over output distributions, is proposed in this work. The properties of EnD² are investigated on an artificial dataset and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD² can approach the classification performance of an ensemble, and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection.
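The abstract does not include an implementation, but the core idea of EnD² via Prior Networks can be sketched as fitting a Dirichlet distribution, whose concentration parameters a single network would predict, to the individual categorical predictions of the ensemble members. The following minimal NumPy sketch of that Dirichlet negative log-likelihood objective is illustrative only; the function name `dirichlet_nll` and the toy arrays are assumptions, not the authors' code.

```python
import numpy as np
from math import lgamma

def dirichlet_nll(alphas, ensemble_probs):
    """Negative log-likelihood of ensemble members' categorical
    predictions under a Dirichlet with concentration `alphas`.

    alphas: (K,) predicted concentration parameters (each > 0)
    ensemble_probs: (M, K) softmax outputs of the M ensemble members
    """
    alpha0 = alphas.sum()
    # log-normalizer of the Dirichlet density
    log_norm = lgamma(alpha0) - sum(lgamma(a) for a in alphas)
    # log-density of each member's prediction, averaged over members
    log_probs = log_norm + ((alphas - 1.0) * np.log(ensemble_probs)).sum(axis=1)
    return -log_probs.mean()

# Toy ensemble: three members that confidently agree on class 0.
members = np.array([[0.90, 0.05, 0.05],
                    [0.85, 0.10, 0.05],
                    [0.92, 0.04, 0.04]])

# A sharp Dirichlet centred on that prediction fits the members
# better (lower NLL) than a flat, uninformative one.
sharp = np.array([20.0, 2.0, 2.0])
flat = np.array([1.0, 1.0, 1.0])
assert dirichlet_nll(sharp, members) < dirichlet_nll(flat, members)
```

In the actual method the concentration parameters are the (exponentiated) outputs of the distilled network for a given input, and this loss is minimized over the training set; the sketch above only shows the per-input objective.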

PDF Abstract ICLR 2020

## Reproducibility Reports

Jan 31 2021
[Re] A Reproduction of Ensemble Distribution Distillation

Our findings support the authors' central claims. In terms of uncertainty estimation, our EnD² achieved (99 ± 1)% of the AUC-ROC of our ensemble on the OOD-detection task; the corresponding value in the original paper was (100 ± 1)%. In terms of classification, our EnD² had (16 ± 1)% higher error than our ensemble; the corresponding value in the original paper was (11 ± 6)%. Other metrics showed similar agreement, but, significantly, in the OOD-detection task our EnD performed at least as well as our EnD². This is in stark contrast with the original paper. We also took a novel approach to visualizing the uncertainty decomposition by plotting the resulting distributions on a simplex, offering a visual explanation for some surprising results in the original paper, while mostly supporting the authors' intuitive justifications for the model.
