Search Results for author: Zakaria Chihani

Found 9 papers, 0 papers with code

Sanity checks for patch visualisation in prototype-based image classification

no code implementations 25 Oct 2023 Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset

In this work, we perform an analysis of the visualisation methods implemented in ProtoPNet and ProtoTree, two self-explaining visual classifiers based on prototypes.

Image Classification

Contextualised Out-of-Distribution Detection using Pattern Identification

no code implementations 24 Oct 2023 Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani

In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers.

Out-of-Distribution (OOD) Detection

Sanity checks and improvements for patch visualisation in prototype-based image classification

no code implementations 20 Jan 2023 Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset

In this work, we perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes - ProtoPNet and ProtoTree.

Image Classification

PARTICUL: Part Identification with Confidence measure using Unsupervised Learning

no code implementations 27 Jun 2022 Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset

We apply our method to two public fine-grained datasets (Caltech-UCSD Birds 200 and Stanford Cars) and show that our detectors consistently highlight parts of the object while providing a good measure of confidence in their predictions.

CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness

no code implementations 7 Jun 2022 Julien Girard-Satabin, Michele Alberti, François Bobot, Zakaria Chihani, Augustin Lemesle

We present CAISAR, an open-source platform under active development for the characterization of AI systems' robustness and safety.

CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators

no code implementations 25 Nov 2019 Julien Girard-Satabin, Guillaume Charpiat, Zakaria Chihani, Marc Schoenauer

We propose to take advantage of the simulators often used either to train machine learning models or to check them with statistical tests, a growing trend in industry.

Adversarial Robustness
