Search Results for author: Quentin Garrido

Found 13 papers, 6 papers with code

Intuitive physics understanding emerges from self-supervised pretraining on natural videos

1 code implementation • 17 Feb 2025 • Quentin Garrido, Nicolas Ballas, Mahmoud Assran, Adrien Bardes, Laurent Najman, Michael Rabbat, Emmanuel Dupoux, Yann LeCun

We investigate the emergence of intuitive physics understanding in general-purpose deep neural network models trained to predict masked regions in natural videos.

Video Prediction
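For orientation, a minimal sketch of a violation-of-expectation style readout often used to probe intuitive physics in video models: the prediction error on masked regions is treated as a surprise signal, and a physically impossible clip should register as more surprising than a matched possible one. The `model` interface and clip names below are hypothetical, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def surprise(model, video: torch.Tensor) -> torch.Tensor:
    """Per-frame prediction error used as a 'surprise' signal.

    `model` is assumed to return (predicted features, target features)
    for the masked regions of `video`; this interface is illustrative.
    """
    pred, target = model(video)
    return F.mse_loss(pred, target, reduction="none").mean(dim=-1)

def violates_expectation(model, possible_clip, impossible_clip) -> bool:
    """True if the physically impossible clip is the more surprising one,
    i.e. the model's predictions encode the violated physical regularity."""
    return surprise(model, impossible_clip).max() > surprise(model, possible_clip).max()
```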

UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling

1 code implementation • 9 Aug 2024 • Haider Al-Tahan, Quentin Garrido, Randall Balestriero, Diane Bouchacourt, Caner Hazirbas, Mark Ibrahim

To facilitate a systematic evaluation of VLM progress, we introduce UniBench: a unified implementation of 50+ VLM benchmarks spanning a comprehensive range of carefully categorized capabilities from object recognition to spatial awareness, counting, and much more.

Language Modeling • Language Modelling +2

Learning and Leveraging World Models in Visual Representation Learning

no code implementations • 1 Mar 2024 • Quentin Garrido, Mahmoud Assran, Nicolas Ballas, Adrien Bardes, Laurent Najman, Yann LeCun

Joint-Embedding Predictive Architecture (JEPA) has emerged as a promising self-supervised approach that learns by leveraging a world model.

Representation Learning
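As background, a minimal JEPA-style training step, sketched under generic assumptions rather than the paper's actual architecture: a context encoder and a predictor are trained to match the output of an EMA target encoder in representation space, with no pixel-level reconstruction. The `nn.Linear` placeholders stand in for real backbones.

```python
import copy
import torch
import torch.nn.functional as F

context_encoder = torch.nn.Linear(256, 128)        # placeholder for a real backbone
predictor = torch.nn.Linear(128, 128)              # maps context features to target features
target_encoder = copy.deepcopy(context_encoder)    # EMA copy, never updated by gradients

def jepa_step(context: torch.Tensor, target: torch.Tensor, momentum: float = 0.996):
    # Predict the target's *embedding* from the context (no reconstruction in pixel space)
    pred = predictor(context_encoder(context))
    with torch.no_grad():
        tgt = target_encoder(target)               # stop-gradient target features
    loss = F.smooth_l1_loss(pred, tgt)
    # Exponential-moving-average update of the target encoder
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(momentum).add_(p_c, alpha=1.0 - momentum)
    return loss
```

Calling `loss.backward()` and stepping an optimizer on the context encoder and predictor would complete the update.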

Revisiting Feature Prediction for Learning Visual Representations from Video

1 code implementation • arXiv preprint 2024 • Adrien Bardes, Quentin Garrido, Jean Ponce, Xinlei Chen, Michael Rabbat, Yann LeCun, Mahmoud Assran, Nicolas Ballas

This paper explores feature prediction as a stand-alone objective for unsupervised learning from video and introduces V-JEPA, a collection of vision models trained solely using a feature prediction objective, without the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision.

Prediction

Self-Supervised Learning with Lie Symmetries for Partial Differential Equations

1 code implementation • NeurIPS 2023 • Grégoire Mialon, Quentin Garrido, Hannah Lawrence, Danyal Rehman, Yann LeCun, Bobak T. Kiani

Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and engineering.

Representation Learning • Self-Supervised Learning

Self-supervised learning of Split Invariant Equivariant representations

1 code implementation • 14 Feb 2023 • Quentin Garrido, Laurent Najman, Yann LeCun

We hope that both our introduced dataset and approach will enable learning richer representations without supervision in more complex scenarios.

Self-Supervised Learning

The Robustness Limits of SoTA Vision Models to Natural Variation

no code implementations • 24 Oct 2022 • Mark Ibrahim, Quentin Garrido, Ari Morcos, Diane Bouchacourt

We study not only how robust recent state-of-the-art models are, but also the extent to which models can generalize variation in factors when they're present during training.

Diversity

RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank

no code implementations • 5 Oct 2022 • Quentin Garrido, Randall Balestriero, Laurent Najman, Yann LeCun

Joint-Embedding Self Supervised Learning (JE-SSL) has seen rapid development, with the emergence of many method variations but only a few principled guidelines that would help practitioners successfully deploy them.

Self-Supervised Learning
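For context, a minimal sketch of the rank-based score the title refers to, as I read it: the effective rank of an (N, D) matrix of representations, computed from the entropy of its normalized singular values. Constants and preprocessing here are illustrative, not the authors' exact implementation.

```python
import torch

def rankme(embeddings: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Effective-rank score of an (N, D) embedding matrix:
    the exponential of the Shannon entropy of its normalized singular values."""
    s = torch.linalg.svdvals(embeddings)   # singular values
    p = s / s.sum() + eps                  # normalize to a (near-)distribution
    return torch.exp(-(p * torch.log(p)).sum())
```

A higher score means more embedding dimensions are effectively used, which the paper relates to downstream performance without requiring labels.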

Guillotine Regularization: Why removing layers is needed to improve generalization in Self-Supervised Learning

no code implementations • 27 Jun 2022 • Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, Pascal Vincent

This is a little vexing, as one would hope that the network layer at which invariance is explicitly enforced by the SSL criterion during training (the last projector layer) should be the one to use for best generalization performance downstream.

Self-Supervised Learning • Transfer Learning
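To make the setup concrete, a hedged sketch of what "removing layers" means at evaluation time: downstream probes are trained on features taken before the last few projector layers rather than on the projector output where the SSL invariance is enforced. The architecture below is a placeholder, not the paper's.

```python
import torch
import torch.nn as nn

backbone = nn.Linear(512, 256)                       # placeholder encoder
projector = nn.ModuleList([nn.Linear(256, 256),      # placeholder projector head
                           nn.Linear(256, 256),
                           nn.Linear(256, 128)])

def representation(x: torch.Tensor, cut: int) -> torch.Tensor:
    """Features after 'guillotining' the last `cut` projector layers:
    cut=0 keeps the full head, cut=len(projector) returns backbone features."""
    h = backbone(x)
    for layer in list(projector)[: len(projector) - cut]:
        h = torch.relu(layer(h))                     # nonlinearity kept uniform for simplicity
    return h
```

Downstream probes are then compared across values of `cut`; per the title, dropping projector layers tends to improve generalization.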

On the duality between contrastive and non-contrastive self-supervised learning

no code implementations • 3 Jun 2022 • Quentin Garrido, Yubei Chen, Adrien Bardes, Laurent Najman, Yann LeCun

Recent approaches in self-supervised learning of image representations can be categorized into different families of methods and, in particular, can be divided into contrastive and non-contrastive approaches.

Self-Supervised Learning
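For readers new to this split, a minimal sketch contrasting the two families under simplified assumptions: an InfoNCE-style sample-contrastive loss versus a VICReg-style variance/covariance criterion. Weights and details are illustrative, and this is not the paper's derivation of the duality.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature: float = 0.1):
    """Sample-contrastive (InfoNCE-style): matched rows of z1/z2 are positives,
    every other sample in the batch serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def non_contrastive_loss(z1, z2, var_weight: float = 25.0, cov_weight: float = 1.0):
    """Non-contrastive (VICReg-style sketch): an invariance term plus variance and
    covariance regularization over embedding dimensions, with no explicit negatives."""
    invariance = F.mse_loss(z1, z2)
    std = torch.sqrt(z1.var(dim=0) + 1e-4)
    variance = torch.relu(1.0 - std).mean()
    centered = z1 - z1.mean(dim=0)
    cov = (centered.T @ centered) / (z1.size(0) - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    covariance = (off_diag ** 2).sum() / z1.size(1)
    return invariance + var_weight * variance + cov_weight * covariance
```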
