no code implementations • 24 Oct 2023 • Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani
In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers.
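The abstract does not detail CODE's scoring rule, but the general shape of an OoD detector for a visual classifier can be illustrated with the standard maximum-softmax-probability baseline; this is a generic sketch, not the CODE method itself, and all names here are illustrative.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_ood_score(logits):
    # Maximum softmax probability: a confident (peaked) prediction
    # yields a high score; near-uniform logits suggest an OoD input.
    return softmax(logits).max(axis=-1)

in_dist = np.array([[8.0, 0.5, 0.2]])  # confident, in-distribution-like logits
ood     = np.array([[1.1, 1.0, 0.9]])  # near-uniform, OoD-like logits
print(msp_ood_score(in_dist) > msp_ood_score(ood))
```

A pattern-based method such as CODE would replace this confidence score with a measure of how well class-specific recurring patterns are matched in the input.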
Out-of-Distribution Detection
no code implementations • 24 Jan 2023 • Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani
Out-of-distribution (OoD) detection for data-based programs is a goal of paramount importance.
Out-of-Distribution Detection
no code implementations • 7 Jun 2022 • Julien Girard-Satabin, Michele Alberti, François Bobot, Zakaria Chihani, Augustin Lemesle
We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety.
no code implementations • 17 May 2021 • Julien Girard-Satabin, Aymeric Varasse, Marc Schoenauer, Guillaume Charpiat, Zakaria Chihani
The impressive results of modern neural networks partly come from their non-linear behaviour.
no code implementations • 25 Nov 2019 • Julien Girard-Satabin, Guillaume Charpiat, Zakaria Chihani, Marc Schoenauer
We propose to take advantage of the simulators that are increasingly used in industry either to train machine learning models or to check them with statistical tests.