Search Results for author: Raphael Gontijo-Lopes

Found 8 papers, 3 papers with code

Affinity and Diversity: Quantifying Mechanisms of Data Augmentation

no code implementations • 20 Feb 2020 • Raphael Gontijo-Lopes, Sylvia J. Smullin, Ekin D. Cubuk, Ethan Dyer

Though data augmentation has become a standard component of deep neural network training, the underlying mechanism behind the effectiveness of these techniques remains poorly understood.

Data Augmentation

Tradeoffs in Data Augmentation: An Empirical Study

no code implementations • ICLR 2021 • Raphael Gontijo-Lopes, Sylvia Smullin, Ekin Dogus Cubuk, Ethan Dyer

Though data augmentation has become a standard component of deep neural network training, the underlying mechanism behind the effectiveness of these techniques remains poorly understood.

Data Augmentation

Robust fine-tuning of zero-shot models

3 code implementations • CVPR 2022 • Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt

Compared to standard fine-tuning, WiSE-FT provides large accuracy improvements under distribution shift, while preserving high accuracy on the target distribution.

Ranked #12 on Image Classification on ObjectNet (using extra training data)

Image Classification, Transfer Learning
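
The method behind this entry, WiSE-FT, ensembles in weight space: it linearly interpolates the parameters of the zero-shot model and its fine-tuned counterpart. Below is a minimal PyTorch sketch of that idea, assuming the two models share an architecture; the `wise_ft` name, the toy models, and `alpha=0.5` are illustrative, not the paper's released code:

```python
# Minimal weight-space ensembling sketch in the spirit of WiSE-FT.
# Assumes `zero_shot` and `fine_tuned` have identical architectures;
# the function name and alpha default are illustrative choices.
import copy
import torch.nn as nn

def wise_ft(zero_shot: nn.Module, fine_tuned: nn.Module, alpha: float = 0.5) -> nn.Module:
    """Return a model whose weights interpolate between the two inputs."""
    ft_state = fine_tuned.state_dict()
    merged = {
        key: (1 - alpha) * value + alpha * ft_state[key]
        for key, value in zero_shot.state_dict().items()
    }
    ensembled = copy.deepcopy(fine_tuned)  # reuse the architecture, swap in weights
    ensembled.load_state_dict(merged)
    return ensembled

# Toy usage: two small networks standing in for CLIP-style encoders.
model = wise_ft(nn.Linear(4, 2), nn.Linear(4, 2), alpha=0.5)
```

Setting alpha to 0 recovers the zero-shot model and alpha to 1 the fine-tuned one; intermediate values trade accuracy on the target distribution against robustness under distribution shift.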

Spectral Bias in Practice: The Role of Function Frequency in Generalization

no code implementations • 6 Oct 2021 • Sara Fridovich-Keil, Raphael Gontijo-Lopes, Rebecca Roelofs

We also explore the connections between function frequency and image frequency and find that spectral bias is sensitive to the low frequencies prevalent in natural images.

Data Augmentation, Image Classification

No One Representation to Rule Them All: Overlapping Features of Training Methods

no code implementations • ICLR 2022 • Raphael Gontijo-Lopes, Yann Dauphin, Ekin D. Cubuk

Despite being able to capture a range of features of the data, high-accuracy models trained with supervision tend to make similar predictions.

Contrastive Learning

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

5 code implementations • 10 Mar 2022 • Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, Ludwig Schmidt

The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder.

Ranked #1 on Image Classification on ImageNet V2 (using extra training data)

Domain Generalization, Image Classification, +2
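
The "uniform soup" in this paper replaces step (2) above with an elementwise average of the fine-tuned weights. Below is a minimal PyTorch sketch, assuming all runs share one architecture; the `uniform_soup` name and toy models are illustrative, not the authors' released implementation:

```python
# Minimal "uniform soup" sketch: elementwise average of the weights of
# several fine-tuned models that share an architecture. Illustrative only.
import copy
import torch
import torch.nn as nn

def uniform_soup(models: list[nn.Module]) -> nn.Module:
    """Average parameters across models and load them into a fresh copy."""
    state_dicts = [m.state_dict() for m in models]
    soup_state = {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }
    soup = copy.deepcopy(models[0])
    soup.load_state_dict(soup_state)
    return soup

# Toy usage: three fine-tuning runs represented by small random networks.
averaged = uniform_soup([nn.Linear(8, 3) for _ in range(3)])
```

The paper also reports a greedy variant that adds a model to the soup only when doing so improves held-out validation accuracy.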

When does dough become a bagel? Analyzing the remaining mistakes on ImageNet

1 code implementation • 9 May 2022 • Vijay Vasudevan, Benjamin Caine, Raphael Gontijo-Lopes, Sara Fridovich-Keil, Rebecca Roelofs

To help contextualize progress on ImageNet and provide a more meaningful evaluation for today's state-of-the-art models, we manually review and categorize every remaining mistake made by a few top models, providing insight into the long tail of errors on one of the most benchmarked datasets in computer vision.

Image Classification
