1 code implementation • 14 Sep 2023 • Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
This paper addresses the challenge of generating Counterfactual Explanations (CEs), involving the identification and modification of the fewest necessary features to alter a classifier's prediction for a given image.
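As a toy illustration of the counterfactual objective (smallest feature change that flips a prediction), the sketch below computes the closed-form minimal L2 counterfactual for a linear classifier. This is purely illustrative and is not the method of the paper; all names and values are hypothetical.

```python
# Hedged sketch: a counterfactual explanation seeks the smallest input change
# that flips the classifier's decision. For a linear classifier sign(w.x + b),
# the minimal-L2 counterfactual is the projection of x onto the decision
# boundary, stepped slightly past it. Toy example only, not the paper's method.

def linear_counterfactual(x, w, b, margin=1e-3):
    """Return the closest point (in L2) whose score w.x + b has flipped sign."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    # Move along w by the signed distance to the boundary, plus a tiny margin.
    step = (score / norm_sq) * (1 + margin)
    return [xi - step * wi for xi, wi in zip(x, w)]

w, b = [1.0, 2.0], -1.0          # hypothetical linear classifier
x = [2.0, 1.0]                   # original input, score = 3.0 (positive class)
x_cf = linear_counterfactual(x, w, b)
new_score = sum(wi * xi for wi, xi in zip(w, x_cf)) + b  # now negative
```

The same principle (flip the label with a minimal, semantically meaningful edit) is what counterfactual methods pursue on images, where the search is far harder than this closed form.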
1 code implementation • CVPR 2023 • Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
Counterfactual explanations and adversarial attacks have a related goal: flipping output labels with minimal perturbations, regardless of the perturbations' characteristics.
1 code implementation • 29 Mar 2022 • Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
Counterfactual explanations have shown promising results as a post-hoc framework to make image classifiers more explainable.
no code implementations • 16 Sep 2021 • Rodrigue Siry, Louis Hémadou, Loïc Simon, Frédéric Jurie
Domain alignment is currently the most prevalent solution to unsupervised domain-adaptation tasks and is often presented as a minimizer of theoretical upper bounds on the risk in the target domain.
no code implementations • 20 Aug 2019 • Michel Moukari, Loïc Simon, Sylvaine Picard, Frédéric Jurie
As deep learning applications become increasingly pervasive in robotics, evaluating the reliability of their inferences has become a central question for the robotics community.
1 code implementation • 14 May 2019 • Loïc Simon, Ryan Webster, Julien Rabin
In this article we revisit the definition of Precision-Recall (PR) curves for generative models proposed by Sajjadi et al. (arXiv:1806.00035).
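For discrete distributions, the PR curve of Sajjadi et al. admits a direct computation: each slope lambda yields one (precision, recall) pair via alpha(lambda) = sum_i min(lambda * p_i, q_i) and beta(lambda) = alpha(lambda) / lambda. The sketch below implements that formula on histograms; the angular grid and variable names are illustrative choices, not the paper's reference code.

```python
# Hedged sketch of the discrete precision-recall curve for generative models
# (Sajjadi et al., arXiv:1806.00035). Given histograms p (real data) and
# q (generated data) over the same bins, sweep slopes lambda = tan(theta):
#   alpha(lambda) = sum_i min(lambda * p_i, q_i)   (precision)
#   beta(lambda)  = alpha(lambda) / lambda          (recall)
import math

def prd_curve(p, q, num_angles=201):
    """Return a list of (precision, recall) pairs over an angular grid."""
    pairs = []
    for j in range(1, num_angles):
        lam = math.tan(j / num_angles * math.pi / 2)
        alpha = sum(min(lam * pi, qi) for pi, qi in zip(p, q))
        pairs.append((alpha, alpha / lam))
    return pairs

# Identical distributions reach precision ~= recall ~= 1 on the curve;
# disjoint supports collapse the whole curve to (0, 0).
curve_same = prd_curve([0.5, 0.5], [0.5, 0.5])
curve_disjoint = prd_curve([1.0, 0.0], [0.0, 1.0])
```

Note that unlike a single scalar score (e.g. FID), this curve separates mode dropping (low recall) from poor sample quality (low precision).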
no code implementations • 27 Sep 2018 • Michel Moukari, Loïc Simon, Sylvaine Picard, Frédéric Jurie
One contribution of this article is to draw attention on existing metrics developed in the forecast community, designed to evaluate both the sharpness and the calibration of predictive uncertainty.
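To make the sharpness/calibration distinction concrete, here is a minimal sketch under the assumption of Gaussian predictive distributions: calibration is measured by how uniform the probability integral transform (PIT) values are, and sharpness by the mean predictive standard deviation. The function names and the choice of levels are illustrative, not metrics defined in the article.

```python
# Hedged sketch: the forecasting literature evaluates predictive uncertainty
# along two axes. Calibration: stated probabilities should match observed
# frequencies (for a Gaussian forecast, PIT values Phi((y - mu)/sigma) should
# be uniform). Sharpness: predictions should be as concentrated as possible,
# subject to calibration. Illustrative implementation, assumed names.
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def calibration_error(y, mu, sigma, levels=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Mean absolute gap between each nominal level p and the empirical
    frequency of PIT values <= p (0 for a perfectly calibrated forecast)."""
    pit = [phi((yi - mi) / si) for yi, mi, si in zip(y, mu, sigma)]
    gaps = [abs(sum(u <= p for u in pit) / len(pit) - p) for p in levels]
    return sum(gaps) / len(gaps)

def sharpness(sigma):
    """Mean predictive standard deviation (smaller = sharper)."""
    return sum(sigma) / len(sigma)
```

A forecast can be sharp yet badly calibrated (confident and wrong), which is why both quantities must be reported together.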
no code implementations • 8 Feb 2017 • Mateusz Koziński, Loïc Simon, Frédéric Jurie
We propose a method for semi-supervised training of structured-output neural networks.