Search Results for author: David Vigouroux

Found 9 papers, 5 papers with code

DP-SGD Without Clipping: The Lipschitz Neural Network Way

1 code implementation • 25 May 2023 • Louis Bethune, Thomas Massena, Thibaut Boissin, Yannick Prudent, Corentin Friedrich, Franck Mamalet, Aurelien Bellet, Mathieu Serrurier, David Vigouroux

To provide sensitivity bounds and bypass the drawbacks of the clipping process, we propose to rely on Lipschitz constrained networks.
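This is not the authors' implementation; as a rough sketch of the underlying idea, one standard way to constrain a layer's Lipschitz constant is spectral normalization, which rescales a weight matrix so its largest singular value is at most 1. Composing 1-Lipschitz layers with 1-Lipschitz activations (e.g. ReLU) bounds per-sample gradient sensitivity a priori, without clipping. The function name and power-iteration details below are illustrative assumptions:

```python
import numpy as np

def spectral_normalize(W, n_iter=100, rng=None):
    """Rescale W so its spectral norm (largest singular value) is ~1,
    making the corresponding linear layer approximately 1-Lipschitz.
    Power iteration estimates the top singular value of W."""
    rng = rng or np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = float(u @ W @ v)  # estimated largest singular value
    return W / sigma
```

A network built from such layers is globally 1-Lipschitz, which is the property the paper exploits to derive sensitivity bounds for DP-SGD.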

CRAFT: Concept Recursive Activation FacTorization for Explainability

1 code implementation • CVPR 2023 • Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre

However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image -- revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas.

Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure

1 code implementation • 13 Jun 2022 • Paul Novello, Thomas Fel, David Vigouroux

HSIC measures the dependence between regions of an input image and the output of a model based on kernel embeddings of distributions.
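The paper's exact estimator is not reproduced here; purely as an illustration of the quantity involved, a standard biased empirical HSIC between two sample sets (the RBF kernel choice and bandwidth are assumptions) can be written as:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF (Gaussian) Gram matrix over the rows of X."""
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n-1)^2, where H centers the
    Gram matrices K and L. Values near zero indicate (empirical)
    independence between the two samples."""
    n = X.shape[0]
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```

In the paper's setting, one sample would correspond to perturbed image regions and the other to the model's outputs; above it is just a generic dependence score between two datasets.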

Object Detection

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks

no code implementations • 7 Sep 2020 • Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre

A plethora of methods have been proposed to explain how deep neural networks reach their decisions, but comparatively little effort has been made to ensure that the explanations these methods produce are objectively relevant.

Image Classification

FUNN: Flexible Unsupervised Neural Network

no code implementations • 5 Nov 2018 • David Vigouroux, Sylvain Picard

We propose a method to obtain features that are robust to adversarial attacks in unsupervised learning tasks.

Classification, General Classification, +1
