Search Results for author: Léo Dreyfus-Schmidt

Found 6 papers, 3 papers with code

Transferability Metrics for Object Detection

1 code implementation • 27 Jun 2023 • Louis Fouquet, Simona Maggio, Léo Dreyfus-Schmidt

In our experiments, we compare TLogME to state-of-the-art metrics for estimating the transfer performance of the Faster-RCNN object detector.

Object Detection +2

Towards Clear Expectations for Uncertainty Estimation

no code implementations • 27 Jul 2022 • Victor Bouvier, Simona Maggio, Alexandre Abraham, Léo Dreyfus-Schmidt

While Uncertainty Quantification (UQ) is crucial to achieving trustworthy Machine Learning (ML), most UQ methods suffer from disparate and inconsistent evaluation protocols.

Uncertainty Quantification

Performance Prediction Under Dataset Shift

1 code implementation • 21 Jun 2022 • Simona Maggio, Victor Bouvier, Léo Dreyfus-Schmidt

ML models deployed in production often face unknown domain changes that are fundamentally different from their training settings.

Sample Noise Impact on Active Learning

2 code implementations • 3 Sep 2021 • Alexandre Abraham, Léo Dreyfus-Schmidt

This work explores the effect of noisy sample selection in active learning strategies.

Active Learning

Ensembling Shift Detectors: an Extensive Empirical Evaluation

no code implementations • 28 Jun 2021 • Simona Maggio, Léo Dreyfus-Schmidt

The term dataset shift refers to the situation in which the data used to train a machine learning model differs from the data the model encounters in operation.

Rebuilding Trust in Active Learning with Actionable Metrics

no code implementations • 18 Dec 2020 • Alexandre Abraham, Léo Dreyfus-Schmidt

Active Learning (AL) is an active domain of research, but it is seldom used in industry despite pressing needs.

Active Learning
