Task Calibration for Distributional Uncertainty in Few-Shot Classification

1 Jan 2021 · Sungnyun Kim, Se-Young Yun

As meta-learning algorithms continue to improve few-shot classification performance in practical applications, accurate prediction of uncertainty, though challenging, has become essential. Recent works have focused on Bayesian methods, but these do not handle complex, high-dimensional data sufficiently well. In this study, we model uncertainty within a few-shot classification framework and propose a straightforward method that predicts task uncertainty appropriately. Specifically, we measure the distributional mismatch between support and query sets via class-wise similarities. Unlike Bayesian methods, our model estimates uncertainty without relying on posterior approximation. Moreover, our method readily extends to a range of meta-learning models. Through extensive experiments, including dataset-shift settings, we show that our training strategy helps the model avoid indiscriminate overconfidence and thereby produce calibrated classification results without loss of accuracy.
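
As an illustration of the idea described in the abstract (not the authors' exact algorithm), the sketch below shows one way a class-wise similarity signal between support and query sets could be computed for a single few-shot episode, using prototype-style class means and cosine similarity, with a simple task-level mismatch score derived from it. PyTorch is assumed, and all function names, the mismatch score, and the toy episode sizes are hypothetical.

```python
import torch
import torch.nn.functional as F

def class_prototypes(support_emb, support_labels, n_way):
    """Mean embedding per class from the support set (prototype-style)."""
    return torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_way)
    ])

def class_wise_similarity(query_emb, prototypes, temperature=1.0):
    """Cosine similarity of each query embedding to each class prototype."""
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return (q @ p.t()) / temperature  # shape: (n_query, n_way)

def task_mismatch_score(sim):
    """Hypothetical proxy for support/query distributional mismatch:
    a low maximum class similarity, averaged over queries, suggests the
    query set is far from every support class, i.e. high task uncertainty."""
    max_sim, _ = sim.max(dim=-1)
    return 1.0 - max_sim.mean()

# Toy 5-way 1-shot episode with random "embeddings" standing in for a backbone.
n_way, n_shot, n_query, dim = 5, 1, 15, 64
support_emb = torch.randn(n_way * n_shot, dim)
support_labels = torch.arange(n_way).repeat_interleave(n_shot)
query_emb = torch.randn(n_query, dim)

protos = class_prototypes(support_emb, support_labels, n_way)
sim = class_wise_similarity(query_emb, protos)
probs = sim.softmax(dim=-1)              # per-query class probabilities
uncertainty = task_mismatch_score(sim)   # scalar task-level uncertainty
print(probs.shape, float(uncertainty))
```

In this reading, the scalar mismatch score could be used to temper the softmax confidence on shifted tasks; how the score enters training is a design choice of the paper and is not reproduced here.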
