Search Results for author: Teodora Popordanoska

Found 10 papers, 4 papers with code

Estimating calibration error under label shift without labels

no code implementations • 14 Dec 2023 • Teodora Popordanoska, Gorjan Radevski, Tinne Tuytelaars, Matthew B. Blaschko

In the face of dataset shift, model calibration plays a pivotal role in ensuring the reliability of machine learning systems.

Consistent and Asymptotically Unbiased Estimation of Proper Calibration Errors

no code implementations • 14 Dec 2023 • Teodora Popordanoska, Sebastian G. Gruber, Aleksei Tiulpin, Florian Buettner, Matthew B. Blaschko

Proper scoring rules evaluate the quality of probabilistic predictions, playing an essential role in the pursuit of accurate and well-calibrated models.

Beyond Classification: Definition and Density-based Estimation of Calibration in Object Detection

1 code implementation • 11 Dec 2023 • Teodora Popordanoska, Aleksei Tiulpin, Matthew B. Blaschko

Despite their impressive predictive performance in various computer vision tasks, deep neural networks (DNNs) tend to make overly confident predictions, which hinders their widespread use in safety-critical applications.

Tasks: Density Estimation, Object (+2 more)

Dice Semimetric Losses: Optimizing the Dice Score with Soft Labels

1 code implementation • 28 Mar 2023 • Zifu Wang, Teodora Popordanoska, Jeroen Bertels, Robin Lemmens, Matthew B. Blaschko

As a result, we obtain superior Dice scores and model calibration, which supports the wider adoption of DMLs in practice.
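As a rough illustration of the kind of objective involved, here is a minimal sketch of the standard soft Dice relaxation on which such losses build. The function name and epsilon smoothing are illustrative; the paper's semimetric losses modify how soft labels enter the loss, so this is not their exact formulation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice coefficient, computed on continuous
    predictions and (possibly soft) targets. Illustrative baseline,
    not the paper's semimetric variant."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    # eps avoids division by zero when both pred and target are empty
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

Because the expression is differentiable in `pred`, it can be minimized directly with gradient descent, which is what makes Dice-style losses usable as training objectives rather than only as evaluation metrics.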

Tasks: Knowledge Distillation

A Consistent and Differentiable Lp Canonical Calibration Error Estimator

1 code implementation • 13 Oct 2022 • Teodora Popordanoska, Raphael Sayer, Matthew B. Blaschko

As a remedy, we propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates, which asymptotically converges to the true $L_p$ calibration error.
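A hedged sketch of the general idea — kernel-smoothing accuracy as a function of confidence and averaging the gap — using a Gaussian kernel in one dimension. The paper's estimator instead uses Dirichlet kernels over the probability simplex; all names and the bandwidth choice here are illustrative:

```python
import numpy as np

def kernel_calibration_error(conf, correct, bandwidth=0.1):
    """Kernel-based estimate of an L1-style calibration error,
    E|E[correct | conf] - conf|, via Nadaraya-Watson regression
    with a Gaussian kernel (illustrative one-dimensional analogue)."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Pairwise kernel weights between all confidence values: O(n^2)
    diff = conf[:, None] - conf[None, :]
    K = np.exp(-0.5 * (diff / bandwidth) ** 2)
    # Kernel-smoothed accuracy at each sample's confidence level
    smoothed_acc = K @ correct / K.sum(axis=1)
    return np.mean(np.abs(smoothed_acc - conf))
```

Unlike histogram binning, every term is a smooth function of the inputs, so an estimator of this shape can be differentiated and used as a training-time regularizer.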

Calibration Regularized Training of Deep Neural Networks using Kernel Density Estimation

no code implementations • 29 Sep 2021 • Teodora Popordanoska, Raphael Sayer, Matthew B. Blaschko

The computational complexity of our estimator is O(n^2), matching that of the kernel maximum mean discrepancy, used in a previously considered trainable calibration estimator.
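To illustrate where the O(n^2) cost comes from, a pairwise kernel statistic such as the (biased) squared MMD has the same structure: every sample is compared with every other sample. A sketch, with illustrative names and a simple RBF kernel:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """RBF kernel matrix between two 1-D samples (n x m entries)."""
    d = np.asarray(x)[:, None] - np.asarray(y)[None, :]
    return np.exp(-gamma * d ** 2)

def mmd_squared(x, y, gamma=1.0):
    """Biased O(n^2) estimator of the squared kernel MMD:
    mean(Kxx) + mean(Kyy) - 2 * mean(Kxy)."""
    return (rbf(x, x, gamma).mean()
            + rbf(y, y, gamma).mean()
            - 2 * rbf(x, y, gamma).mean())
```

The three kernel matrices each require a full pass over all sample pairs, which is the quadratic cost the text refers to; the KDE-based calibration estimator shares this pairwise structure.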

Tasks: Autonomous Driving, Density Estimation (+1 more)

Machine Guides, Human Supervises: Interactive Learning with Global Explanations

no code implementations • 21 Sep 2020 • Teodora Popordanoska, Mohit Kumar, Stefano Teso

Compared to other explanatory interactive learning strategies, which are machine-initiated and rely on local explanations, XGL is designed to be robust against cases in which the explanations supplied by the machine oversell the classifier's quality.

Toward Machine-Guided, Human-Initiated Explanatory Interactive Learning

no code implementations • 20 Jul 2020 • Teodora Popordanoska, Mohit Kumar, Stefano Teso

This biases the "narrative" presented by the machine to the user. We address this narrative bias by introducing explanatory guided learning, a novel interactive learning strategy in which: i) the supervisor is in charge of choosing the query instances, while ii) the machine uses global explanations to illustrate its overall behavior and to guide the supervisor toward choosing challenging, informative instances.

Tasks: Active Learning, Clustering
