no code implementations • 25 Oct 2023 • Romain Xu-Darme, Jenny Benois-Pineau, Romain Giot, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset, Alexey Zhukov
In the field of Explainable AI, multiple evaluation metrics have been proposed in order to assess the quality of explanation methods w.r.t.
no code implementations • 25 Oct 2023 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
In this work, we perform an analysis of the visualisation methods implemented in ProtoPNet and ProtoTree, two self-explaining visual classifiers based on prototypes.
no code implementations • 24 Oct 2023 • Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani
In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers.
Out-of-Distribution (OoD) Detection
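The idea sketched in the abstract above, detecting OoD inputs by checking whether a sample matches class-specific recurring patterns, can be illustrated with a minimal toy example. This is not the paper's CODE method: here the "recurring pattern" of a class is crudely approximated by its mean feature vector, and the OoD score is one minus the best cosine similarity to any class prototype. All names and the scoring rule are illustrative assumptions.

```python
import numpy as np

def class_prototypes(features, labels):
    """Mean feature vector per class (a crude stand-in for 'recurring patterns')."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def ood_score(x, prototypes):
    """1 - max cosine similarity to any class prototype; higher => more likely OoD."""
    sims = [np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12)
            for p in prototypes.values()]
    return 1.0 - max(sims)

rng = np.random.default_rng(0)
# Two synthetic classes clustered around +1 and -1 in an 8-d feature space.
feats = np.vstack([rng.normal(0, 0.1, (50, 8)) + 1.0,
                   rng.normal(0, 0.1, (50, 8)) - 1.0])
labels = np.array([0] * 50 + [1] * 50)
protos = class_prototypes(feats, labels)

in_dist = np.ones(8)                                    # resembles class 0
ood = np.array([1., -1., 1., -1., 1., -1., 1., -1.])    # orthogonal to both classes
print(ood_score(in_dist, protos) < ood_score(ood, protos))  # prints True
```

An in-distribution sample aligns closely with one prototype (score near 0), while the orthogonal sample matches neither (score near 1), so a simple threshold on the score separates them.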
no code implementations • 24 Jan 2023 • Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani
Out-of-distribution (OoD) detection for data-based programs is a goal of paramount importance.
Out-of-Distribution (OoD) Detection
no code implementations • 20 Jan 2023 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
In this work, we perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes: ProtoPNet and ProtoTree.
no code implementations • 27 Jun 2022 • Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset
We apply our method on two public fine-grained datasets (Caltech-UCSD Birds 200 and Stanford Cars) and show that our detectors can consistently highlight parts of the object while providing a good measure of the confidence in their prediction.
no code implementations • 7 Jun 2022 • Julien Girard-Satabin, Michele Alberti, François Bobot, Zakaria Chihani, Augustin Lemesle
We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety.
no code implementations • 17 May 2021 • Julien Girard-Satabin, Aymeric Varasse, Marc Schoenauer, Guillaume Charpiat, Zakaria Chihani
The impressive results of modern neural networks partly come from their non-linear behaviour.
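The non-linear behaviour mentioned above can be made concrete for ReLU networks, which are piecewise linear: within a region where the set of active units is fixed, the network is exactly one affine map. The tiny hand-written network below is purely illustrative (the weights and helper names are assumptions, not anything from the paper).

```python
import numpy as np

# A one-hidden-layer ReLU network with fixed toy weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -1.0])

def relu_net(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return w2 @ h

def activation_pattern(x):
    """Which hidden units are active; fixes the affine piece the input falls in."""
    return tuple((W1 @ x + b1) > 0)

x1, x2 = np.array([2.0, 1.0]), np.array([2.1, 0.9])
# Same activation pattern => the network acts as a single affine map between them,
# so the output at the midpoint equals the average of the endpoint outputs.
print(activation_pattern(x1) == activation_pattern(x2))  # prints True
mid = relu_net(0.5 * (x1 + x2))
print(np.isclose(mid, 0.5 * (relu_net(x1) + relu_net(x2))))  # prints True
```

Crossing into a region with a different activation pattern switches to a different affine map, which is where the network's non-linearity, and the difficulty of verifying it, comes from.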
no code implementations • 25 Nov 2019 • Julien Girard-Satabin, Guillaume Charpiat, Zakaria Chihani, Marc Schoenauer
We propose to take advantage of the simulators often used either to train machine learning models or to check them with statistical tests, a growing trend in industry.