Search Results for author: Adrien Bardes

Found 12 papers, 4 papers with code

Learning and Leveraging World Models in Visual Representation Learning

no code implementations • 1 Mar 2024 • Quentin Garrido, Mahmoud Assran, Nicolas Ballas, Adrien Bardes, Laurent Najman, Yann LeCun

Joint-Embedding Predictive Architecture (JEPA) has emerged as a promising self-supervised approach that learns by leveraging a world model.

Representation Learning

Revisiting Feature Prediction for Learning Visual Representations from Video

1 code implementation • arXiv preprint 2024 • Adrien Bardes, Quentin Garrido, Jean Ponce, Xinlei Chen, Michael Rabbat, Yann LeCun, Mahmoud Assran, Nicolas Ballas

This paper explores feature prediction as a stand-alone objective for unsupervised learning from video and introduces V-JEPA, a collection of vision models trained solely using a feature prediction objective, without the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision.

Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks

2 code implementations • NeurIPS 2023 • Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein

Battle of the Backbones (BoB) makes the choice of a pretrained backbone easier by benchmarking a diverse suite of pretrained models, including vision-language models, those trained via self-supervised learning, and the Stable Diffusion backbone, across a diverse set of computer vision tasks ranging from classification to object detection to OOD generalization and more.

Benchmarking • Object Detection +2

MC-JEPA: A Joint-Embedding Predictive Architecture for Self-Supervised Learning of Motion and Content Features

no code implementations • 24 Jul 2023 • Adrien Bardes, Jean Ponce, Yann LeCun

Self-supervised learning of visual representations has focused on learning content features, which identify and differentiate objects in images and videos but do not capture object motion or location.

Optical Flow Estimation • Self-Supervised Learning +1

No Free Lunch in Self Supervised Representation Learning

no code implementations • 23 Apr 2023 • Ihab Bendidi, Adrien Bardes, Ethan Cohen, Alexis Lamiable, Guillaume Bollot, Auguste Genovesio

In this work, we explore the relationship between the choice of image transformations and the learned representations, examine its impact on a domain other than natural images, and show that designing the transformations can be viewed as a form of supervision.

Representation Learning

VICRegL: Self-Supervised Learning of Local Visual Features

3 code implementations • 4 Oct 2022 • Adrien Bardes, Jean Ponce, Yann LeCun

Most recent self-supervised methods for learning image representations focus on either producing a global feature with invariance properties, or producing a set of local features.

Segmentation • Self-Supervised Learning

Guillotine Regularization: Why removing layers is needed to improve generalization in Self-Supervised Learning

no code implementations • 27 Jun 2022 • Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, Pascal Vincent

The need to remove these final layers is a little vexing, as one would hope that the network layer at which invariance is explicitly enforced by the SSL criterion during training (the last projector layer) would be the one to use for the best generalization performance downstream.

Self-Supervised Learning • Transfer Learning

On the duality between contrastive and non-contrastive self-supervised learning

no code implementations • 3 Jun 2022 • Quentin Garrido, Yubei Chen, Adrien Bardes, Laurent Najman, Yann LeCun

Recent approaches in self-supervised learning of image representations can be categorized into different families of methods and, in particular, can be divided into contrastive and non-contrastive approaches.

Self-Supervised Learning