Search Results for author: Sylvestre-Alvise Rebuffi

Found 17 papers, 12 papers with code

Fixing Data Augmentation to Improve Adversarial Robustness

6 code implementations · 2 Mar 2021 · Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A. Calian, Florian Stimberg, Olivia Wiles, Timothy Mann

In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 64.20% robust accuracy without using any external data, beating most prior works that use external data.

Adversarial Robustness · Data Augmentation
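The robust-accuracy figure above is measured against an $\ell_\infty$-bounded adversary. A minimal numpy sketch of that threat model, using projected gradient descent; the names `pgd_linf` and `grad_fn` are illustrative, not taken from the paper's code:

```python
import numpy as np

def pgd_linf(grad_fn, x, eps=8/255, step=2/255, steps=10):
    """Projected gradient descent inside an l_inf ball of radius eps.

    grad_fn maps a perturbed input to the gradient of the loss w.r.t.
    that input; the perturbation is projected back into [-eps, eps]
    after every ascent step.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)
        delta = delta + step * np.sign(g)          # ascent step on the loss
        delta = np.clip(delta, -eps, eps)          # project onto the l_inf ball
        delta = np.clip(x + delta, 0.0, 1.0) - x   # keep pixels in [0, 1]
    return x + delta

# Toy example: loss = w . x, so the gradient w.r.t. the input is just w.
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_linf(lambda z: w, x, eps=8/255)
```

Under this ℓ∞ constraint every pixel moves by at most ε = 8/255, which is exactly the perturbation budget the reported robust accuracies refer to.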

Data Augmentation Can Improve Robustness

1 code implementation · NeurIPS 2021 · Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A. Calian, Florian Stimberg, Olivia Wiles, Timothy Mann

Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training.

Data Augmentation

iCaRL: Incremental Classifier and Representation Learning

9 code implementations · CVPR 2017 · Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, Christoph H. Lampert

A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.

Class Incremental Learning · Incremental Learning +1
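iCaRL classifies with a nearest-mean-of-exemplars rule: each old class is represented by a small stored exemplar set, and a sample is assigned to the class whose exemplar mean is closest in feature space. A simplified numpy sketch (function name and array shapes are assumptions; features are taken as already L2-normalised):

```python
import numpy as np

def nearest_mean_classify(feats, exemplar_sets):
    """Assign each feature vector to the class with the nearest
    (renormalised) mean of its stored exemplar features."""
    means = []
    for ex in exemplar_sets:                 # one (n_i, dim) array per class
        m = ex.mean(axis=0)
        means.append(m / np.linalg.norm(m))  # renormalise the class mean
    means = np.stack(means)                  # (num_classes, dim)
    dists = np.linalg.norm(feats[:, None] - means[None], axis=-1)
    return dists.argmin(axis=1)              # predicted class per sample
```

Because the class means are recomputed from the exemplars under the current feature extractor, the rule adapts as the representation is incrementally updated.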

Automatically Discovering and Learning New Visual Categories with Ranking Statistics

1 code implementation · ICLR 2020 · Kai Han, Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Andrea Vedaldi, Andrew Zisserman

In this work we address this problem by combining three ideas: (1) we suggest that the common approach of bootstrapping an image representation using the labelled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data; (2) we use rank statistics to transfer the model's knowledge of the labelled classes to the problem of clustering the unlabelled images; and, (3) we train the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of the data, improving both the supervised classification of the labelled data, and the clustering of the unlabelled data.

Clustering · General Classification +1
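Idea (2) above, transferring knowledge via rank statistics, can be sketched as a pairwise pseudo-labelling rule: two samples are treated as the same class when the sets of their top-k feature dimensions (ranked by magnitude) coincide. A hedged numpy illustration; `rank_stat_pair_label` and the choice of k are assumptions for this sketch:

```python
import numpy as np

def rank_stat_pair_label(z_i, z_j, k=3):
    """Pairwise pseudo-label from ranking statistics: return 1 when the
    two feature vectors share the same set of top-k dimensions by
    magnitude, 0 otherwise."""
    top_i = set(np.argsort(-np.abs(z_i))[:k])
    top_j = set(np.argsort(-np.abs(z_j))[:k])
    return int(top_i == top_j)
```

Comparing which dimensions fire, rather than how strongly, makes the pseudo-labels less sensitive to the feature magnitudes of unseen classes.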

AutoNovel: Automatically Discovering and Learning Novel Visual Categories

1 code implementation · 29 Jun 2021 · Kai Han, Sylvestre-Alvise Rebuffi, Sébastien Ehrhardt, Andrea Vedaldi, Andrew Zisserman

We present a new approach called AutoNovel to address this problem by combining three ideas: (1) we suggest that the common approach of bootstrapping an image representation using the labelled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data; (2) we use ranking statistics to transfer the model's knowledge of the labelled classes to the problem of clustering the unlabelled images; and, (3) we train the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of the data, improving both the supervised classification of the labelled data, and the clustering of the unlabelled data.

Clustering · Image Clustering +2

Improving Robustness using Generated Data

1 code implementation · NeurIPS 2021 · Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, Timothy Mann

Against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our models achieve 66.10% and 33.49% robust accuracy on CIFAR-10 and CIFAR-100, respectively (improving upon the state of the art by +8.96% and +3.29%).

Adversarial Robustness

There and Back Again: Revisiting Backpropagation Saliency Methods

1 code implementation · CVPR 2020 · Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi

Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.

Meta-Learning
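The "importance map" above can be illustrated with the simplest backpropagation saliency baseline, gradient × input, shown here for a linear scorer. This is a hedged sketch of the family of methods the paper revisits, not the paper's own NormGrad formulation:

```python
import numpy as np

def gradient_input_saliency(x, w):
    """Gradient x input saliency for a linear scorer s(x) = w . x.

    For a linear model the gradient of the score w.r.t. the input is w
    itself; weighting it by the input gives a per-element importance map.
    """
    grad = w                     # d(w . x)/dx = w
    return np.abs(grad * x)     # elementwise importance
```

Elements with zero input get zero saliency regardless of their weight, which is exactly the kind of behaviour backpropagation saliency methods are designed to analyse.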

Semi-Supervised Learning with Scarce Annotations

1 code implementation · 21 May 2019 · Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Kai Han, Andrea Vedaldi, Andrew Zisserman

The first is a simple but effective one: we leverage the power of transfer learning among different tasks and self-supervision to initialize a good representation of the data without making use of any label.

Multi-class Classification · Self-Supervised Learning +1

NormGrad: Finding the Pixels that Matter for Training

no code implementations · 19 Oct 2019 · Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Hakan Bilen, Andrea Vedaldi

In this paper, we are instead interested in the locations of an image that contribute to the model's training.

Meta-Learning

Defending Against Image Corruptions Through Adversarial Augmentations

no code implementations · ICLR 2022 · Dan A. Calian, Florian Stimberg, Olivia Wiles, Sylvestre-Alvise Rebuffi, Andras Gyorgy, Timothy Mann, Sven Gowal

Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise or fog.

Image Classification

Revisiting adapters with adversarial training

no code implementations · 10 Oct 2022 · Sylvestre-Alvise Rebuffi, Francesco Croce, Sven Gowal

By co-training a neural network on clean and adversarial inputs, it is possible to improve classification accuracy on the clean, non-adversarial inputs.
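Co-training on clean and adversarial inputs amounts to minimising a weighted sum of the two losses. A minimal numpy sketch; the `alpha` trade-off weight and function names are assumptions, not values from the paper:

```python
import numpy as np

def softmax_xent(logits, label):
    """Cross-entropy of a single softmax prediction (numerically stable)."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def cotrain_loss(logits_clean, logits_adv, label, alpha=0.5):
    """Co-training objective: a convex combination of the losses on a
    clean input and its adversarial counterpart."""
    return (alpha * softmax_xent(logits_clean, label)
            + (1.0 - alpha) * softmax_xent(logits_adv, label))
```

Setting `alpha` closer to 1 favours clean accuracy; closer to 0, adversarial robustness; the adapter mechanism in the paper is a separate architectural choice not sketched here.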

Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts

no code implementations · CVPR 2023 · Francesco Croce, Sylvestre-Alvise Rebuffi, Evan Shelhamer, Sven Gowal

Adversarial training is widely used to make classifiers robust to a specific threat or adversary, such as $\ell_p$-norm bounded perturbations of a given $p$-norm.
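A "model soup" in this setting interpolates the parameters of several trained models, each robust to a different threat, into a single network. A minimal numpy sketch under the assumption that all models share one architecture; `weight_soup` is an illustrative name:

```python
import numpy as np

def weight_soup(param_lists, coeffs):
    """Linearly interpolate the parameters of several trained models.

    param_lists: one list of parameter arrays per model (same shapes);
    coeffs: mixing weights, expected to sum to 1.
    """
    assert abs(sum(coeffs) - 1.0) < 1e-8, "mixing weights must sum to 1"
    return [sum(c * p for c, p in zip(coeffs, params))
            for params in zip(*param_lists)]
```

Adjusting the mixing weights trades off robustness between the constituent threats without retraining, which is the practical appeal of souping over ensembling.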
