no code implementations • 1 Oct 2023 • Vincent Leroy, Jerome Revaud, Thomas Lucas, Philippe Weinzaepfel
In this paper, we propose a novel strategy for efficient training and inference of high-resolution vision transformers: the key principle is to mask out most of the high-resolution inputs during training, keeping only N random windows.
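A minimal sketch of that window-masking idea in PyTorch (the function and parameter names are my own, not the paper's code): given the patch-token grid of a high-resolution image, keep only the tokens falling inside N random square windows and feed just those to the transformer.

```python
import torch

def sample_random_windows(tokens, grid_h, grid_w, n_windows=4, win=8):
    """Keep only patch tokens inside n_windows random (win x win)
    windows of the (grid_h x grid_w) token grid.
    tokens: (B, grid_h * grid_w, C) patch embeddings."""
    keep = torch.zeros(grid_h, grid_w, dtype=torch.bool)
    for _ in range(n_windows):
        y = torch.randint(0, grid_h - win + 1, (1,)).item()
        x = torch.randint(0, grid_w - win + 1, (1,)).item()
        keep[y:y + win, x:x + win] = True           # windows may overlap
    idx = keep.flatten().nonzero(as_tuple=True)[0]  # indices of kept tokens
    return tokens[:, idx], idx                      # subset fed to the ViT
```

Since self-attention is quadratic in the number of tokens, training on this small subset rather than the full high-resolution grid is where the savings come from.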
1 code implementation • ICCV 2023 • Philippe Weinzaepfel, Thomas Lucas, Vincent Leroy, Yohann Cabon, Vaibhav Arora, Romain Brégier, Gabriela Csurka, Leonid Antsfeld, Boris Chidlovskii, Jérôme Revaud
Despite impressive performance for high-level downstream tasks, self-supervised pre-training methods have not yet fully delivered on dense geometric vision tasks such as stereo matching or optical flow.
Ranked #1 on Optical Flow Estimation on KITTI 2012
1 code implementation • 21 Oct 2022 • Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-Noguer, Grégory Rogez
This process extracts low-level pose information (the posecodes) using a set of simple but generic rules on the 3D keypoints.
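A toy example of such a rule (the joint names and threshold are illustrative, not taken from the paper): classify an elbow as bent or straight from the angle formed by three 3D keypoints.

```python
import numpy as np

def elbow_posecode(shoulder, elbow, wrist, bent_thresh=110.0):
    """Toy posecode rule: label the elbow 'bent' or 'straight' from
    the angle (in degrees) at three 3D keypoints."""
    u, v = shoulder - elbow, wrist - elbow
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return "bent" if angle < bent_thresh else "straight"
```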
1 code implementation • 19 Oct 2022 • Philippe Weinzaepfel, Vincent Leroy, Thomas Lucas, Romain Brégier, Yohann Cabon, Vaibhav Arora, Leonid Antsfeld, Boris Chidlovskii, Gabriela Csurka, Jérôme Revaud
More precisely, we propose the pretext task of cross-view completion where the first input image is partially masked, and this masked content has to be reconstructed from the visible content and the second image.
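A schematic of that pretext task, assuming hypothetical `encoder` and `decoder` callables (a sketch of the objective, not the released implementation):

```python
import torch
import torch.nn.functional as F

def cross_view_completion_loss(encoder, decoder, patches1, patches2,
                               mask_ratio=0.9):
    """patches1/patches2: (B, N, D) patchified views of the same scene."""
    B, N, D = patches1.shape
    n_keep = max(1, int(N * (1 - mask_ratio)))
    perm = torch.randperm(N)
    vis, masked = perm[:n_keep], perm[n_keep:]
    feat1 = encoder(patches1[:, vis])     # visible tokens of the masked view
    feat2 = encoder(patches2)             # the full second view
    pred = decoder(feat1, feat2, masked)  # predict the masked patches
    return F.mse_loss(pred, patches1[:, masked])
```

The intuition is that the decoder can only succeed by relating the two views of the scene, which is what makes the pre-training relevant to stereo matching and optical flow.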
1 code implementation • 19 Oct 2022 • Thomas Lucas, Fabien Baradel, Philippe Weinzaepfel, Grégory Rogez
The discrete and compressed nature of the latent space allows the GPT-like model to focus on long-range structure, as it removes low-level redundancy from the input signal.
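The two-stage idea in brief, with illustrative sizes and names (not the paper's code): a VQ autoencoder turns continuous motion features into discrete code indices, and a GPT-like model then does next-token prediction over those indices.

```python
import torch
import torch.nn as nn

K, D = 512, 256                        # codebook size and code dimension
codebook = nn.Embedding(K, D)

def quantize(z):                       # z: (B, T, D) encoder outputs
    d = ((z.unsqueeze(-2) - codebook.weight) ** 2).sum(-1)  # (B, T, K)
    return d.argmin(-1)                # nearest-code indices, (B, T)

def next_token_loss(gpt, codes):       # codes: (B, T) from quantize()
    logits = gpt(codes[:, :-1])        # causal model predicts code t+1
    return nn.functional.cross_entropy(
        logits.reshape(-1, K), codes[:, 1:].reshape(-1))
```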
1 code implementation • ICLR 2022 • Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis
Second, they are typically trained with a global loss that only acts on top of an aggregation of local features; by contrast, testing is based on local feature matching, which creates a discrepancy between training and testing.
Ranked #3 on Image Retrieval on ROxford (Medium)
no code implementations • 22 Dec 2021 • Thomas Lucas, Philippe Weinzaepfel, Grégory Rogez
We propose a method that leverages self-supervised learning to provide a training signal in the absence of confident pseudo-labels.
no code implementations • NeurIPS 2019 • Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek
We show that our model significantly improves over existing hybrid models: it offers GAN-like samples, IS and FID scores competitive with fully adversarial models, and improved likelihood scores.
no code implementations • 27 Sep 2018 • Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek
First, we propose a model that extends variational autoencoders by using deterministic invertible transformation layers to map samples from the decoder to the image space.
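One standard way to realize such a deterministic invertible map is a RealNVP-style affine coupling layer; the sketch below is illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible coupling layer: transforms half the dimensions
    conditioned on the other half, so the inverse is exact."""
    def __init__(self, dim):               # dim must be even
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)   # scale and shift
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)
```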
no code implementations • ICML 2018 • Thomas Lucas, Corentin Tallec, Jakob Verbeek, Yann Ollivier
We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch.
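A minimal sketch of that objective (the MSE regression target is my simplification; the paper derives the exact loss): build one shuffled batch with a random number of real samples and train the discriminator to predict that fraction.

```python
import torch
import torch.nn.functional as F

def mixed_batch_step(disc, real, fake):
    """real/fake: tensors with the same batch size B."""
    B = real.size(0)
    k = torch.randint(0, B + 1, (1,)).item()  # number of real samples
    batch = torch.cat([real[:k], fake[:B - k]])
    batch = batch[torch.randperm(B)]          # shuffle real and fake together
    pred = disc(batch)                        # scalar estimate of the ratio
    return F.mse_loss(pred.view(1), torch.tensor([k / B]))
```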
no code implementations • ICLR 2018 • Thomas Lucas, Jakob Verbeek
Our contribution is a training procedure relying on an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder.
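A sketch of how such an auxiliary loss can split the work (the term names, the MSE choice, and the weight lam are assumptions, not the paper's exact formulation): an auxiliary decoder must reconstruct the image from the latents alone, forcing them to carry global content, while the autoregressive decoder models the remaining local detail.

```python
import torch.nn.functional as F

def auxiliary_guided_objective(x, z, kl, ar_decoder, aux_decoder, lam=1.0):
    nll_ar = ar_decoder.neg_log_likelihood(x, z)  # hypothetical API
    x_aux = aux_decoder(z)                        # latent-only reconstruction
    return nll_ar + kl + lam * F.mse_loss(x_aux, x)
```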
no code implementations • ICCV 2017 • Marco Pedersoli, Thomas Lucas, Cordelia Schmid, Jakob Verbeek
We propose "Areas of Attention", a novel attention-based model for automatic image captioning.