Search Results for author: Sylvestre-Alvise Rebuffi

Found 11 papers, 8 papers with code

AutoNovel: Automatically Discovering and Learning Novel Visual Categories

no code implementations 29 Jun 2021 Kai Han, Sylvestre-Alvise Rebuffi, Sébastien Ehrhardt, Andrea Vedaldi, Andrew Zisserman

We present a new approach called AutoNovel to address this problem by combining three ideas: (1) we suggest that the common approach of bootstrapping an image representation using the labelled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data; (2) we use ranking statistics to transfer the model's knowledge of the labelled classes to the problem of clustering the unlabelled images; and, (3) we train the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of the data, improving both the supervised classification of the labelled data, and the clustering of the unlabelled data.
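As a hedged illustration of idea (2), ranking statistics amount to comparing the top-k feature dimensions of two unlabelled images and treating the pair as a positive when those index sets coincide. The sketch below is only a toy version of that idea; the value of k and all names are chosen for illustration rather than taken from the paper's code.

```python
import numpy as np

def same_class_pseudo_label(feat_a, feat_b, k=5):
    """Pairwise pseudo-label via ranking statistics: compare the sets of
    top-k feature dimensions of two descriptors; if the sets match, treat
    the two images as belonging to the same (novel) class."""
    top_a = set(np.argsort(-feat_a)[:k])   # indices of the k largest activations
    top_b = set(np.argsort(-feat_b)[:k])
    return top_a == top_b                  # True -> positive pair

# Toy usage on random descriptors standing in for CNN features.
rng = np.random.default_rng(0)
f1, f2 = rng.random(128), rng.random(128)
print(same_class_pseudo_label(f1, f2, k=5))
```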

Image Clustering Self-Supervised Learning

Defending Against Image Corruptions Through Adversarial Augmentations

no code implementations 2 Apr 2021 Dan A. Calian, Florian Stimberg, Olivia Wiles, Sylvestre-Alvise Rebuffi, Andras Gyorgy, Timothy Mann, Sven Gowal

Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise or fog.

Image Classification

Fixing Data Augmentation to Improve Adversarial Robustness

1 code implementation 2 Mar 2021 Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A. Calian, Florian Stimberg, Olivia Wiles, Timothy Mann

In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 64.20% robust accuracy without using any external data, beating most prior works that use external data.
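For context, the robust-accuracy figure refers to perturbations $\delta$ with $\|\delta\|_\infty \le 8/255$. Below is a minimal sketch of probing a classifier under that budget with a standard PGD attack; the `model`, step size and step count are placeholders, and the paper's evaluation relies on stronger attack suites.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Probe robustness to l_inf-bounded perturbations: every pixel of the
    adversarial image stays within +/- eps of the clean image."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image
    return x_adv.detach()
```

Robust accuracy is then the fraction of test images still classified correctly after such an attack.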

Data Augmentation

There and Back Again: Revisiting Backpropagation Saliency Methods

1 code implementation CVPR 2020 Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi

Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.
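As a generic example of such an importance map (not the paper's proposed method), plain input-gradient saliency scores each pixel by how strongly the target logit responds to it; `model` below is an assumed differentiable classifier.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Importance map from the gradient of the target logit w.r.t. the input:
    large absolute gradients mark pixels whose change most affects the score."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values   # collapse channels -> HxW map
```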

Meta-Learning

Automatically Discovering and Learning New Visual Categories with Ranking Statistics

1 code implementation ICLR 2020 Kai Han, Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Andrea Vedaldi, Andrew Zisserman

In this work we address this problem by combining three ideas: (1) we suggest that the common approach of bootstrapping an image representation using the labeled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data; (2) we use rank statistics to transfer the model's knowledge of the labelled classes to the problem of clustering the unlabelled images; and, (3) we train the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of the data, improving both the supervised classification of the labelled data, and the clustering of the unlabelled data.
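A hedged sketch of the joint objective in idea (3): cross-entropy on the labelled subset plus a pairwise binary cross-entropy on the unlabelled subset, where the pairwise targets come from rank statistics as above. The weighting, tensor names and exact form are assumptions for illustration, not the released code.

```python
import torch
import torch.nn.functional as F

def joint_loss(logits_lab, targets_lab, probs_unlab, pair_labels, weight=1.0):
    """Joint objective: standard cross-entropy on the labelled subset plus a
    pairwise binary cross-entropy on the unlabelled subset, where pair_labels
    (float, shape NxN) mark pairs judged to share a class (1) or not (0)."""
    ce = F.cross_entropy(logits_lab, targets_lab)
    # Probability that two unlabelled images share a cluster: inner product
    # of their predicted class distributions.
    p = probs_unlab                                  # (N, C) softmax outputs
    pair_prob = (p.unsqueeze(1) * p.unsqueeze(0)).sum(-1).clamp(1e-7, 1 - 1e-7)
    bce = F.binary_cross_entropy(pair_prob, pair_labels)
    return ce + weight * bce
```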

General Classification Self-Supervised Learning

NormGrad: Finding the Pixels that Matter for Training

no code implementations 19 Oct 2019 Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Hakan Bilen, Andrea Vedaldi

In this paper, we are instead interested in the locations of an image that contribute to the model's training.
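One plausible way to score such locations, sketched below under the assumption of a PyTorch-style model, is to weight each spatial position of an intermediate layer by the product of its activation norm and gradient norm. This is an illustrative approximation of the idea, not the paper's exact formulation or released implementation.

```python
import torch

def location_importance(model, layer, image, target_class):
    """Score each spatial position by ||activation|| * ||gradient|| at a chosen
    intermediate layer: large products mark locations that matter most."""
    store = {}
    def save_act(module, inp, out):
        store["act"] = out
    def save_grad(module, grad_in, grad_out):
        store["grad"] = grad_out[0]
    h1 = layer.register_forward_hook(save_act)
    h2 = layer.register_full_backward_hook(save_grad)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    h1.remove(); h2.remove()
    a, g = store["act"][0], store["grad"][0]     # each (C, H, W)
    return a.norm(dim=0) * g.norm(dim=0)         # (H, W) importance map
```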

Meta-Learning

Semi-Supervised Learning with Scarce Annotations

1 code implementation 21 May 2019 Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Kai Han, Andrea Vedaldi, Andrew Zisserman

The first is simple but effective: we leverage the power of transfer learning among different tasks and self-supervision to initialize a good representation of the data without making use of any labels.
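A hedged sketch of that first ingredient, using rotation prediction as one common self-supervised pretext task; the pretext choice, module names and optimiser handling are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn.functional as F

def rotation_pretrain_step(backbone, rot_head, images, optimizer):
    """Self-supervised step: predict which of 4 rotations was applied,
    so the backbone learns features without any class labels."""
    angles = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, angles)])
    loss = F.cross_entropy(rot_head(backbone(rotated)), angles)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def finetune_step(backbone, cls_head, images, labels, optimizer):
    """Supervised fine-tuning on the scarce labelled subset."""
    loss = F.cross_entropy(cls_head(backbone(images)), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```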

Multi-class Classification Self-Supervised Learning +1

iCaRL: Incremental Classifier and Representation Learning

5 code implementations CVPR 2017 Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, Christoph H. Lampert

A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.
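For reference, the classification rule most associated with iCaRL is nearest-mean-of-exemplars: a test feature is assigned to the class whose stored exemplars have the closest normalised mean. The sketch below illustrates that rule on pre-extracted features and is not the released implementation.

```python
import numpy as np

def nearest_mean_of_exemplars(feature, exemplar_features_per_class):
    """Classify a feature by the closest class mean, where each mean is
    computed from the small exemplar set stored for that class."""
    feature = feature / np.linalg.norm(feature)
    best_class, best_dist = None, np.inf
    for cls, feats in exemplar_features_per_class.items():
        mean = feats.mean(axis=0)
        mean = mean / np.linalg.norm(mean)
        dist = np.linalg.norm(feature - mean)
        if dist < best_dist:
            best_class, best_dist = cls, dist
    return best_class
```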

Incremental Learning Representation Learning
