Search Results for author: Quentin Garrido

Found 10 papers, 4 papers with code

Learning and Leveraging World Models in Visual Representation Learning

no code implementations • 1 Mar 2024 • Quentin Garrido, Mahmoud Assran, Nicolas Ballas, Adrien Bardes, Laurent Najman, Yann LeCun

Joint-Embedding Predictive Architecture (JEPA) has emerged as a promising self-supervised approach that learns by leveraging a world model.

Representation Learning
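
As a rough illustration of the JEPA idea mentioned above (predicting target embeddings from context embeddings in latent space), here is a minimal PyTorch sketch. The module sizes, the frozen EMA-style target encoder, and the squared-error loss are illustrative assumptions, not the specific world model studied in the paper.

```python
# Minimal sketch of a generic JEPA-style objective: predict the target view's
# embedding from the context view's embedding, entirely in latent space.
# Architecture sizes are illustrative assumptions, not the paper's model.
import copy
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
predictor = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
target_encoder = copy.deepcopy(encoder)  # typically updated by EMA, not by gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)

def jepa_loss(context_view: torch.Tensor, target_view: torch.Tensor) -> torch.Tensor:
    """Predict the target view's embedding from the context view's embedding."""
    pred = predictor(encoder(context_view))
    with torch.no_grad():
        target = target_encoder(target_view)
    return ((pred - target) ** 2).mean()

x_ctx, x_tgt = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
print(jepa_loss(x_ctx, x_tgt).item())
```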

Revisiting Feature Prediction for Learning Visual Representations from Video

1 code implementation • arXiv preprint 2024 • Adrien Bardes, Quentin Garrido, Jean Ponce, Xinlei Chen, Michael Rabbat, Yann LeCun, Mahmoud Assran, Nicolas Ballas

This paper explores feature prediction as a stand-alone objective for unsupervised learning from video and introduces V-JEPA, a collection of vision models trained solely using a feature prediction objective, without the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision.
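
A loose sketch of a masked feature-prediction objective of the kind this abstract describes: predict the features of hidden space-time patches from the visible ones, with no pixel reconstruction. The tensor shapes and linear token encoders below are placeholder assumptions, not V-JEPA's actual architecture.

```python
# Toy masked feature-prediction loss over video patch tokens; all modules
# are placeholders standing in for a video transformer and predictor.
import copy
import torch
import torch.nn as nn

dim = 64
context_encoder = nn.Linear(dim, dim)            # stands in for a video transformer
predictor = nn.Linear(dim, dim)
target_encoder = copy.deepcopy(context_encoder)  # EMA-style copy, no gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)

def masked_feature_prediction_loss(tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """tokens: (batch, patches, dim) space-time patch embeddings;
    mask: (batch, patches) bool, True where a patch is hidden from the context."""
    with torch.no_grad():
        targets = target_encoder(tokens)                   # features of the full clip
    context = tokens.masked_fill(mask.unsqueeze(-1), 0.0)  # drop masked patches
    preds = predictor(context_encoder(context))
    return ((preds - targets) ** 2)[mask].mean()           # score only the masked patches

tokens, mask = torch.randn(2, 16, dim), torch.rand(2, 16) < 0.5
print(masked_feature_prediction_loss(tokens, mask).item())
```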

Self-Supervised Learning with Lie Symmetries for Partial Differential Equations

1 code implementation • NeurIPS 2023 • Grégoire Mialon, Quentin Garrido, Hannah Lawrence, Danyal Rehman, Yann LeCun, Bobak T. Kiani

Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and engineering.

Representation Learning • Self-Supervised Learning

Self-supervised learning of Split Invariant Equivariant representations

1 code implementation • 14 Feb 2023 • Quentin Garrido, Laurent Najman, Yann LeCun

We hope that both the dataset and the approach we introduce will enable learning richer representations without supervision in more complex scenarios.

Self-Supervised Learning

The Robustness Limits of SoTA Vision Models to Natural Variation

no code implementations • 24 Oct 2022 • Mark Ibrahim, Quentin Garrido, Ari Morcos, Diane Bouchacourt

We study not only how robust recent state-of-the-art models are, but also the extent to which models can generalize to variation in factors when those factors are present during training.

RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank

no code implementations • 5 Oct 2022 • Quentin Garrido, Randall Balestriero, Laurent Najman, Yann LeCun

Joint-Embedding Self-Supervised Learning (JE-SSL) has seen rapid development, with the emergence of many method variations but only a few principled guidelines to help practitioners deploy them successfully.

Self-Supervised Learning
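
A minimal sketch of a RankMe-style score, assuming the metric is the effective rank of the embedding matrix computed from the entropy of its normalized singular values; this is a reading of the paper's description, not its official implementation.

```python
# Effective-rank score of an (n_samples, dim) embedding matrix, computed from
# the entropy of its normalized singular values (an assumption based on the
# paper's description, not the official code).
import numpy as np

def rankme(embeddings: np.ndarray, eps: float = 1e-7) -> float:
    sigma = np.linalg.svd(embeddings, compute_uv=False)   # singular values
    p = sigma / (sigma.sum() + eps) + eps                  # normalized spectrum
    return float(np.exp(-np.sum(p * np.log(p))))           # exp of spectral entropy

# Example: a higher score suggests the representation spreads over more dimensions.
feats = np.random.randn(1024, 256)
print(rankme(feats))
```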

Guillotine Regularization: Why removing layers is needed to improve generalization in Self-Supervised Learning

no code implementations • 27 Jun 2022 • Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, Pascal Vincent

This is a little vexing, as one would hope that the network layer at which invariance is explicitly enforced by the SSL criterion during training (the last projector layer) should be the one to use for best generalization performance downstream.

Self-Supervised Learning • Transfer Learning
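
A tiny sketch of the practical takeaway suggested by the abstract: keep features from the backbone, before the projector head, for downstream use, even though the SSL criterion is applied at the projector output during pretraining. Layer sizes here are illustrative assumptions.

```python
# Backbone features (h) are what is typically kept after "guillotining" the
# projector head; the SSL loss is applied on the projector output (z) during
# pretraining. Sizes are illustrative only.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
projector = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 128))

x = torch.randn(4, 3, 32, 32)
h = backbone(x)    # features kept for downstream evaluation
z = projector(h)   # features where the SSL criterion enforces invariance
print(h.shape, z.shape)
```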

On the duality between contrastive and non-contrastive self-supervised learning

no code implementations • 3 Jun 2022 • Quentin Garrido, Yubei Chen, Adrien Bardes, Laurent Najman, Yann LeCun

Recent approaches in self-supervised learning of image representations can be categorized into different families of methods and, in particular, can be divided into contrastive and non-contrastive approaches.

Self-Supervised Learning
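
To make the contrastive / non-contrastive split concrete, here is a toy sketch with simplified stand-ins for each family (an InfoNCE-like loss vs. a VICReg-flavoured one); these are not the exact objectives analyzed in the paper.

```python
# Two toy losses over paired embeddings z1, z2 of shape (batch, dim), standing
# in for the contrastive and non-contrastive families; both are simplified.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-like: matched pairs attract, other pairs in the batch repel."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

def non_contrastive_loss(z1, z2, var_weight=1.0):
    """VICReg-flavoured: pull matched pairs together while keeping per-dimension variance up."""
    invariance = ((z1 - z2) ** 2).mean()
    std = torch.sqrt(z1.var(dim=0) + 1e-4)
    variance = F.relu(1.0 - std).mean()
    return invariance + var_weight * variance

z1, z2 = torch.randn(32, 16), torch.randn(32, 16)
print(contrastive_loss(z1, z2).item(), non_contrastive_loss(z1, z2).item())
```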
