Mixture of Experts with Uncertainty Voting for Imbalanced Deep Regression Problems

24 May 2023  ·  Yuchang Jiang, Vivien Sainte Fare Garnot, Konrad Schindler, Jan Dirk Wegner

Data imbalance is ubiquitous when applying machine learning to real-world problems, particularly regression problems. When training data are imbalanced, learning is dominated by the densely covered regions of the target distribution; consequently, the learned regressor tends to perform poorly in sparsely covered regions. Beyond standard measures such as over-sampling or re-weighting, there are two main directions for handling imbalanced data. For regression, recent work exploits the continuity of the target distribution, whereas for classification there has been a trend toward mixture-of-experts models in which some ensemble members specialize in predictions for the sparser regions. Our method, dubbed MOUV, leverages recent work on probabilistic deep learning and integrates it into a mixture-of-experts approach for imbalanced regression. First, we replace traditional regression losses with a negative log-likelihood loss that also predicts sample-wise aleatoric uncertainty, and we show experimentally that such a loss handles the imbalance better. Second, we use the readily available aleatoric uncertainty values to fuse the predictions of the mixture-of-experts model, which obviates the need for a separate aggregation module. We compare our method with existing alternatives on multiple public benchmarks and show that MOUV consistently outperforms the prior art while producing better-calibrated uncertainty estimates. Our code is available at link-upon-publication.
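The two ingredients of the abstract can be sketched in a few lines of PyTorch. Note that the toy expert architecture below, the pairing with `torch.nn.GaussianNLLLoss`, and the inverse-variance voting rule are illustrative assumptions on our part, not the paper's exact implementation; MOUV's actual network and fusion rule may differ.

```python
import torch
import torch.nn as nn

class ProbabilisticExpert(nn.Module):
    """A toy regression expert that outputs a mean and a log-variance;
    the predicted variance serves as sample-wise aleatoric uncertainty."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_var_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

# Training: a Gaussian negative log-likelihood replaces MSE/L1,
# so each expert learns both a prediction and its uncertainty.
nll = nn.GaussianNLLLoss()  # expects (mean, target, variance)

def expert_loss(expert, x, y):
    mean, log_var = expert(x)
    return nll(mean, y, log_var.exp())

# Inference: fuse expert predictions with their own uncertainties,
# so no separate aggregation module is needed.
def uncertainty_vote(experts, x):
    """Inverse-variance (precision-weighted) fusion of expert means;
    one plausible voting rule, assumed here for illustration."""
    means, log_vars = zip(*(e(x) for e in experts))
    means = torch.stack(means)                       # (num_experts, batch, 1)
    precisions = torch.stack(log_vars).neg().exp()   # 1 / sigma^2 per expert
    weights = precisions / precisions.sum(dim=0, keepdim=True)
    return (weights * means).sum(dim=0)              # fused prediction
```

Under this fusion rule, `uncertainty_vote(experts, x)` returns a single prediction per sample in which experts reporting low aleatoric uncertainty dominate, which is one natural way to let specialists for sparse target regions take over there.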
