Classifier calibration
16 papers with code • 1 benchmark • 1 dataset
Confidence calibration – the problem of predicting probability estimates that reflect the true correctness likelihood – is important for classification models in many applications. The two most common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).
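The following is a minimal sketch of how ECE and MCE are typically computed from a model's predictions, using equal-width confidence bins; the bin count and input arrays are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def calibration_errors(confidences, predictions, labels, n_bins=15):
    """confidences: max predicted probability per sample,
    predictions: predicted class indices, labels: true class indices."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    n = len(labels)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = (predictions[in_bin] == labels[in_bin]).mean()  # accuracy within the bin
        conf = confidences[in_bin].mean()                     # mean confidence within the bin
        gap = abs(acc - conf)
        ece += (in_bin.sum() / n) * gap  # ECE: gap weighted by bin mass
        mce = max(mce, gap)              # MCE: worst-case gap over bins
    return ece, mce
```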
Most implemented papers
Masksembles for Uncertainty Estimation
Our central intuition is that there is a continuous spectrum of ensemble-like models of which MC-Dropout and Deep Ensembles are extreme examples.
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
Motivated by the above findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Multi-class probabilistic classification using inductive and cross Venn-Abers predictors
Inductive (IVAP) and cross (CVAP) Venn–Abers predictors are computationally efficient algorithms for probabilistic prediction in binary classification problems.
Multivariate Confidence Calibration for Object Detection
Therefore, we present a novel framework to measure and calibrate biased (or miscalibrated) confidence estimates of object detection methods.
How Well Do Self-Supervised Models Transfer?
We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction.
Classifier Calibration: with application to threat scores in cybersecurity
A calibrator is a function that maps the arbitrary classifier score, of a testing observation, onto $[0, 1]$ to provide an estimate for the posterior probability of belonging to one of the two classes.
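A minimal sketch of a calibrator in this sense, using Platt scaling (a logistic regression fit on held-out classifier scores) as one standard choice; the synthetic data and variable names are illustrative assumptions, not from the paper above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores_cal = rng.normal(size=500)  # held-out raw classifier scores (arbitrary scale)
y_cal = (scores_cal + rng.normal(size=500) > 0).astype(int)  # binary labels

# Fit the calibration map on the held-out scores
platt = LogisticRegression()
platt.fit(scores_cal.reshape(-1, 1), y_cal)

def calibrator(score):
    """Map an arbitrary classifier score onto [0, 1] as an estimated posterior probability."""
    return platt.predict_proba(np.asarray(score, dtype=float).reshape(-1, 1))[:, 1]

print(calibrator([-2.0, 0.0, 2.0]))  # monotone probabilities in [0, 1]
```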
Danish Fungi 2020 -- Not Just Another Image Recognition Dataset
Interestingly, ViT achieves results superior to CNN baselines with 80.45% accuracy and 0.743 macro F1 score, reducing the CNN error by 9% and 12%, respectively.
Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting
Both generalized and incremental few-shot learning have to deal with three major challenges: learning novel classes from only few samples per class, preventing catastrophic forgetting of base classes, and classifier calibration across novel and base classes.
Hidden Heterogeneity: When to Choose Similarity-Based Calibration
However, these methods are unable to detect subpopulations where calibration could also improve prediction accuracy.
Class-wise and reduced calibration methods
We prove for several notions of calibration that solving the reduced problem minimizes the corresponding notion of miscalibration in the full problem, allowing the use of non-parametric recalibration methods that fail in higher dimensions.