Classifier calibration

14 papers with code • 1 benchmark • 1 dataset

Confidence calibration – the problem of producing probability estimates that reflect the true likelihood of correctness – is important for classification models in many applications. Two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).
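Both metrics are typically estimated by binning predictions by confidence and comparing each bin's average confidence to its empirical accuracy: ECE is the weighted average of the gaps, MCE the largest gap. A minimal numpy sketch (the bin count and equal-width binning scheme are illustrative choices, not mandated by the metrics):

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """Binned ECE and MCE.

    confidences: predicted confidence of the chosen class, in [0, 1].
    correct: 1 if the prediction was right, 0 otherwise.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        # Gap between empirical accuracy and mean confidence in this bin.
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap   # weighted by the fraction of samples in the bin
        mce = max(mce, gap)
    return ece, mce
```

A model that says "90%" but is right only half the time in that bin contributes a 0.4 gap, weighted by how many predictions fall there.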

Most implemented papers

Masksembles for Uncertainty Estimation

nikitadurasov/masksembles CVPR 2021

Our central intuition is that there is a continuous spectrum of ensemble-like models of which MC-Dropout and Deep Ensembles are extreme examples.

Multivariate Confidence Calibration for Object Detection

fabiankueppers/calibration-framework 28 Apr 2020

Therefore, we present a novel framework to measure and calibrate biased (or miscalibrated) confidence estimates of object detection methods.

How Well Do Self-Supervised Models Transfer?

linusericsson/ssl-transfer CVPR 2021

We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction.

Classifier Calibration: with application to threat scores in cybersecurity

isotlaboratory/ClassifierCalibration-Code 9 Feb 2021

A calibrator is a function that maps the arbitrary classifier score of a test observation onto $[0, 1]$, providing an estimate of the posterior probability of belonging to one of the two classes.
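A standard calibrator of this kind is Platt scaling, which fits a sigmoid over the raw score. A minimal sketch, fitting the two sigmoid parameters by gradient descent on the log loss (the learning rate and step count are illustrative; this is not the paper's specific method):

```python
import numpy as np

def fit_platt(scores, labels, lr=0.1, steps=2000):
    """Fit p(y=1 | s) = sigmoid(a * s + b) to binary labels via gradient descent."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                 # gradient of log loss w.r.t. the logit
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

def calibrate(scores, a, b):
    """Map raw scores onto [0, 1] with the fitted sigmoid."""
    return 1.0 / (1.0 + np.exp(-(a * scores + b)))
```

Isotonic regression is the usual non-parametric alternative when the score-to-probability relationship is monotone but not sigmoid-shaped.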

Danish Fungi 2020 -- Not Just Another Image Recognition Dataset

picekl/DanishFungiDataset 18 Mar 2021

Interestingly, ViT achieves results superior to CNN baselines with 80.45% accuracy and 0.743 macro F1 score, reducing the CNN error by 9% and 12%, respectively.

No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data

KarhouTam/FL-bench NeurIPS 2021

Motivated by the above findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
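The sampling idea can be sketched as follows: estimate a Gaussian over the penultimate-layer features of each class, then draw virtual feature vectors from those Gaussians to retrain the classifier head. This is a rough single-machine sketch only — the function names and the diagonal regularizer are my own, and CCVR itself aggregates the feature statistics across federated clients rather than computing them centrally:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_class_gaussians(features, labels):
    """Estimate a per-class Gaussian (mean, covariance) over feature vectors."""
    stats = {}
    for c in np.unique(labels):
        x = features[labels == c]
        # Small diagonal term keeps the covariance positive definite (my choice).
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
        stats[c] = (x.mean(axis=0), cov)
    return stats

def sample_virtual(stats, n_per_class):
    """Draw virtual representations from each class Gaussian."""
    xs, ys = [], []
    for c, (mu, cov) in stats.items():
        xs.append(rng.multivariate_normal(mu, cov, size=n_per_class))
        ys.append(np.full(n_per_class, c))
    return np.vstack(xs), np.concatenate(ys)
```

The virtual samples are class-balanced by construction, which is what lets the recalibrated classifier escape the bias induced by non-IID client data.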

Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting

annusha/lcwof ICCV 2021

Both generalized and incremental few-shot learning have to deal with three major challenges: learning novel classes from only few samples per class, preventing catastrophic forgetting of base classes, and classifier calibration across novel and base classes.

Hidden Heterogeneity: When to Choose Similarity-Based Calibration

wkiri/simcalib 3 Feb 2022

However, these methods are unable to detect subpopulations where calibration could also improve prediction accuracy.

Class-wise and reduced calibration methods

appliedai-initiative/classwise-calibration-experiments 7 Oct 2022

We prove for several notions of calibration that solving the reduced problem minimizes the corresponding notion of miscalibration in the full problem, allowing the use of non-parametric recalibration methods that fail in higher dimensions.