no code implementations • 20 Feb 2025 • Vlad Sobal, Wancong Zhang, Kyunghyun Cho, Randall Balestriero, Tim G. J. Rudner, Yann LeCun
On the control side, we train a latent dynamics model using the Joint Embedding Predictive Architecture (JEPA) and use it for planning.
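For intuition, here is a minimal PyTorch sketch of a JEPA-style latent dynamics step, not the paper's implementation: an encoder maps observations to latents, a predictor rolls the latent forward given an action, and the prediction is regressed onto the encoder's embedding of the next observation. The network sizes and the stop-gradient target are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw observations to latent embeddings."""
    def __init__(self, obs_dim=64, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )

    def forward(self, obs):
        return self.net(obs)

class Predictor(nn.Module):
    """Predicts the next latent from the current latent and an action."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

encoder, predictor = Encoder(), Predictor()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

# One training step on a dummy (obs, action, next_obs) batch.
obs, act, next_obs = torch.randn(8, 64), torch.randn(8, 4), torch.randn(8, 64)
z_pred = predictor(encoder(obs), act)
with torch.no_grad():  # stop-gradient target, a common anti-collapse choice (assumption)
    z_target = encoder(next_obs)
loss = nn.functional.mse_loss(z_pred, z_target)
opt.zero_grad(); loss.backward(); opt.step()
```

Planning can then search over action sequences entirely in latent space, scoring predictor rollouts against a latent encoding of the goal.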
no code implementations • 25 Jul 2024 • Vlad Sobal, Mark Ibrahim, Randall Balestriero, Vivien Cabannes, Diane Bouchacourt, Pietro Astolfi, Kyunghyun Cho, Yann LeCun
Based on this observation, we revise the standard contrastive loss to explicitly encode how a sample relates to others.
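For context, here is the standard contrastive (InfoNCE-style) loss that such a revision starts from, sketched in PyTorch; the paper's relational modification is not reproduced here, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views; positives are matching rows."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature   # (N, N) pairwise similarity matrix
    labels = torch.arange(z1.size(0))    # i-th row's positive is the i-th column
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
```

Note that this baseline treats all non-matching pairs as uniformly negative; the revision described above departs from exactly that assumption.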
no code implementations • 28 May 2024 • Nicklas Hansen, Jyothir S V, Vlad Sobal, Yann LeCun, Xiaolong Wang, Hao Su
Whole-body control for humanoids is challenging due to the high-dimensional nature of the problem, coupled with the inherent instability of a bipedal morphology.
no code implementations • 28 Dec 2023 • Jyothir S V, Siddhartha Jalagam, Yann LeCun, Vlad Sobal
The enduring challenge in the field of artificial intelligence has been the control of systems to achieve desired behaviours.
no code implementations • 24 Apr 2023 • Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, Micah Goldblum
Self-supervised learning, dubbed the dark matter of intelligence, is a promising path to advance machine learning.
1 code implementation • 20 Nov 2022 • Vlad Sobal, Jyothir S V, Siddhartha Jalagam, Nicolas Carion, Kyunghyun Cho, Yann LeCun
Many common methods for learning a world model for pixel-based environments use generative architectures trained with pixel-level reconstruction objectives.
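As an illustration of that common approach (not the paper's proposed method), here is a minimal autoencoder-style PyTorch sketch in which latents are decoded back to pixels and trained with a pixel-level reconstruction loss; shapes and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Encoder: 64x64 RGB frames -> 256-d latent.
enc = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(64 * 16 * 16, 256))
# Decoder: latent -> reconstructed 64x64 RGB frames.
dec = nn.Sequential(nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
                    nn.Unflatten(1, (64, 16, 16)),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))

frames = torch.rand(8, 3, 64, 64)             # dummy pixel observations
recon = dec(enc(frames))
loss = nn.functional.mse_loss(recon, frames)  # pixel-level reconstruction objective
```

The objection raised against this recipe is that the reconstruction term spends capacity on task-irrelevant pixel detail, which motivates reconstruction-free latent objectives like the one above.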
1 code implementation • 25 Aug 2022 • Wancong Zhang, Anthony GX-Chen, Vlad Sobal, Yann LeCun, Nicolas Carion
Unsupervised visual representation learning offers the opportunity to leverage large corpora of unlabeled trajectories to form useful visual representations, which can benefit the training of reinforcement learning (RL) algorithms.
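A minimal sketch of that pretrain-then-reuse recipe, assuming a PyTorch setup; the self-supervised objective, shapes, and policy head are illustrative assumptions rather than the paper's method.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

# Stage 1: self-supervised pretraining on unlabeled trajectory frames, here
# by regressing a frame's embedding onto that of the temporally next frame.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
frames, next_frames = torch.randn(32, 64), torch.randn(32, 64)
loss = nn.functional.mse_loss(encoder(frames), encoder(next_frames).detach())
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the pretrained encoder and reuse its features as the
# state representation for a downstream RL policy.
for p in encoder.parameters():
    p.requires_grad_(False)
policy = nn.Linear(32, 4)                      # maps features to action logits
action_logits = policy(encoder(frames))
```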