Search Results for author: Thibaut Boissin

Found 9 papers, 7 papers with code

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization

1 code implementation • 11 Jun 2023 Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre

However, its widespread adoption has been limited due to a reliance on tricks to generate interpretable images, and corresponding challenges in scaling it to deeper neural networks.
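For context, this line of work synthesizes an input that maximizes a chosen activation, here with the magnitude of the image's Fourier spectrum held fixed so that only the phase is optimized. Below is a minimal sketch of that idea, assuming a torchvision classifier, a simple 1/f magnitude prior, and an arbitrary target class, none of which are the paper's exact choices:

```python
import torch
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()

size = 224
# Fixed 1/f magnitude prior: an assumption, not the paper's exact spectrum.
fy = torch.fft.fftfreq(size).reshape(-1, 1)
fx = torch.fft.rfftfreq(size).reshape(1, -1)
magnitude = (1.0 / (fy ** 2 + fx ** 2).sqrt().clamp(min=1.0 / size)).expand(3, -1, -1)

# Only the phase of the spectrum is a free parameter.
phase = torch.randn(3, size, size // 2 + 1, requires_grad=True)
optimizer = torch.optim.Adam([phase], lr=0.05)
target_class = 309  # arbitrary ImageNet class, for illustration only

for _ in range(256):
    optimizer.zero_grad()
    spectrum = magnitude * torch.exp(1j * phase)        # fixed magnitude, free phase
    image = torch.sigmoid(torch.fft.irfft2(spectrum, s=(size, size)))
    loss = -model(image.unsqueeze(0))[0, target_class]  # maximize the target logit
    loss.backward()
    optimizer.step()
```

Constraining the magnitude keeps the optimized image's frequency content in a natural regime, which is what replaces the usual bag of regularization tricks the snippet alludes to.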

Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception

no code implementations • 5 Jun 2023 Drew Linsley, Pinyuan Feng, Thibaut Boissin, Alekh Karkada Ashok, Thomas Fel, Stephanie Olaiya, Thomas Serre

Harmonized DNNs achieve the best of both worlds and experience attacks that are detectable and affect features that humans find diagnostic for recognition, meaning that attacks on these models are more likely to be rendered ineffective by inducing similar effects on human perception.

Tasks: Adversarial Attack, Adversarial Robustness, +2

DP-SGD Without Clipping: The Lipschitz Neural Network Way

1 code implementation • 25 May 2023 Louis Bethune, Thomas Massena, Thibaut Boissin, Yannick Prudent, Corentin Friedrich, Franck Mamalet, Aurelien Bellet, Mathieu Serrurier, David Vigouroux

To provide sensitivity bounds and bypass the drawbacks of the clipping process, we propose to rely on Lipschitz constrained networks.
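The mechanism is that a Lipschitz-constrained network bounds each per-sample gradient norm a priori, so the Gaussian noise of DP-SGD can be calibrated to that bound instead of to a clipping threshold. A minimal sketch of the idea, where the toy model, `grad_norm_bound`, and `noise_multiplier` are illustrative assumptions rather than the paper's construction:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

# Toy Lipschitz-constrained network: spectral normalization keeps each linear
# layer approximately 1-Lipschitz, and ReLU is 1-Lipschitz.
model = nn.Sequential(
    spectral_norm(nn.Linear(32, 64)),
    nn.ReLU(),
    spectral_norm(nn.Linear(64, 1)),
)

grad_norm_bound = 1.0   # assumed per-sample sensitivity; the paper derives real bounds
noise_multiplier = 1.0  # sigma of the Gaussian mechanism

x, y = torch.randn(128, 32), torch.randn(128, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

with torch.no_grad():
    for p in model.parameters():
        # No per-sample clipping: the noise scale comes from the a-priori bound.
        p.grad += torch.randn_like(p) * noise_multiplier * grad_norm_bound / len(x)
        p -= 0.1 * p.grad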

Robust One-Class Classification with Signed Distance Function using 1-Lipschitz Neural Networks

1 code implementation • 26 Jan 2023 Louis Bethune, Paul Novello, Thibaut Boissin, Guillaume Coiffier, Mathieu Serrurier, Quentin Vincenot, Andres Troya-Galvis

The distance to the support can be interpreted as a normality score, and its approximation using 1-Lipschitz neural networks provides robustness bounds against $\ell_2$ adversarial attacks, an under-explored weakness of deep learning-based OCC algorithms.
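The bound itself is immediate: if the score $f$ is 1-Lipschitz, then $|f(x) - f(x + \delta)| \leq \|\delta\|_2$, so the sign of $f(x)$ cannot flip under any $\ell_2$ perturbation smaller than $|f(x)|$. A minimal sketch with a stand-in spectrally-normalized scorer, not the paper's SDF architecture or training procedure:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

# Stand-in 1-Lipschitz scorer built from approximately 1-Lipschitz pieces.
f = nn.Sequential(
    spectral_norm(nn.Linear(16, 32)),
    nn.ReLU(),
    spectral_norm(nn.Linear(32, 1)),
)

x = torch.randn(8, 16)
with torch.no_grad():
    score = f(x).squeeze(-1)        # normality score: sign decides in/out of support
    certified_radius = score.abs()  # decision cannot flip inside this l2 ball
```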

Tasks: Image Generation, One-Class Classification

CRAFT: Concept Recursive Activation FacTorization for Explainability

1 code implementation • CVPR 2023 Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre

However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image -- revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas.

On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective

no code implementations • NeurIPS 2023 Mathieu Serrurier, Franck Mamalet, Thomas Fel, Louis Béthune, Thibaut Boissin

Input gradients have a pivotal role in a variety of applications, including adversarial attack algorithms for evaluating model robustness, explainable AI techniques for generating Saliency Maps, and counterfactual explanations. However, Saliency Maps generated by traditional neural networks are often noisy and provide limited insights.
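A minimal sketch of the input-gradient saliency map the snippet refers to, assuming a torchvision classifier and a random stand-in input:

```python
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in input

logits = model(image)
logits[0, logits.argmax()].backward()  # gradient of the top-class logit

# Saliency map: per-pixel magnitude of the input gradient.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
```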

Tasks: Adversarial Attack, Counterfactual, +1
