Search Results for author: Thomas Lucas

Found 12 papers, 5 papers with code

Win-Win: Training High-Resolution Vision Transformers from Two Windows

no code implementations • 1 Oct 2023 • Vincent Leroy, Jerome Revaud, Thomas Lucas, Philippe Weinzaepfel

In this paper, we propose a novel strategy for efficient training and inference of high-resolution vision transformers: the key principle is to mask out most of the high-resolution inputs during training, keeping only N random windows.
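The windowed-masking idea in this excerpt can be sketched as follows; the grid size, window size, and window count below are illustrative choices, not the paper's settings:

```python
import numpy as np

def sample_window_mask(grid_h, grid_w, n_windows, win, rng):
    """Boolean mask over a patch grid: True = patch is kept.

    Keeps only `n_windows` random square windows of `win` x `win`
    patches; all other high-resolution patches are masked out
    during training.
    """
    keep = np.zeros((grid_h, grid_w), dtype=bool)
    for _ in range(n_windows):
        top = rng.integers(0, grid_h - win + 1)
        left = rng.integers(0, grid_w - win + 1)
        keep[top:top + win, left:left + win] = True
    return keep

rng = np.random.default_rng(0)
mask = sample_window_mask(14, 14, n_windows=2, win=4, rng=rng)
# At most 2 * 4 * 4 = 32 of the 196 patches are kept
# (fewer if the two windows overlap).
```

Because only the selected windows are processed, the attention cost during training scales with the window area rather than the full high-resolution input.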

PoseScript: 3D Human Poses from Natural Language

1 code implementation • 21 Oct 2022 • Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-Noguer, Grégory Rogez

This process extracts low-level pose information -- the posecodes -- using a set of simple but generic rules on the 3D keypoints.
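A toy version of one such rule on 3D keypoints might look like the following; the function name, joint choice, and angle thresholds are hypothetical examples, not the paper's actual posecodes:

```python
import numpy as np

def elbow_posecode(shoulder, elbow, wrist):
    """Toy posecode: classify elbow bend from three 3D keypoints.

    A real system applies many such simple rules; the thresholds
    here are illustrative only.
    """
    u = shoulder - elbow
    v = wrist - elbow
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    if angle < 75:
        return "completely bent"
    elif angle < 135:
        return "partially bent"
    return "straight"

code = elbow_posecode(np.array([0.0, 0.0, 0.0]),
                      np.array([0.0, -0.3, 0.0]),
                      np.array([0.0, -0.6, 0.0]))
# Fully extended arm: the joint angle is ~180 degrees -> "straight".
```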

Cross-Modal Retrieval • Image Captioning • +3

CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion

1 code implementation • 19 Oct 2022 • Philippe Weinzaepfel, Vincent Leroy, Thomas Lucas, Romain Brégier, Yohann Cabon, Vaibhav Arora, Leonid Antsfeld, Boris Chidlovskii, Gabriela Csurka, Jérôme Revaud

More precisely, we propose the pretext task of cross-view completion where the first input image is partially masked, and this masked content has to be reconstructed from the visible content and the second image.
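The input construction for this pretext task can be sketched as below, operating on patch embeddings; the function name, token dimensions, and masking ratio are illustrative assumptions:

```python
import numpy as np

def cross_view_inputs(tokens1, tokens2, mask_ratio, rng):
    """Sketch of cross-view completion inputs.

    tokens1 / tokens2: (n_patches, dim) patch embeddings of the two
    views of the same scene. A random subset of the first view is
    masked; the model must reconstruct those patches from the
    remaining visible patches plus the *full* second view.
    Returns the visible view-1 tokens, the full view-2 tokens, and
    the indices of the patches to reconstruct.
    """
    n = tokens1.shape[0]
    n_masked = int(round(mask_ratio * n))
    perm = rng.permutation(n)
    masked_idx = perm[:n_masked]
    visible_idx = perm[n_masked:]
    return tokens1[visible_idx], tokens2, masked_idx

rng = np.random.default_rng(0)
t1 = rng.normal(size=(196, 32))
t2 = rng.normal(size=(196, 32))
visible, context, to_predict = cross_view_inputs(t1, t2, mask_ratio=0.9, rng=rng)
# With a 0.9 ratio, 176 of 196 view-1 patches become reconstruction
# targets and only 20 stay visible; view 2 is never masked.
```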

Depth Estimation • Depth Prediction • +6

PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting

1 code implementation • 19 Oct 2022 • Thomas Lucas, Fabien Baradel, Philippe Weinzaepfel, Grégory Rogez

The discrete and compressed nature of the latent space allows the GPT-like model to focus on long-range signal, as it removes low-level redundancy in the input signal.
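The discretization step behind such a latent space can be sketched as standard nearest-neighbour vector quantization; the codebook size and dimensions below are illustrative, not the paper's configuration:

```python
import numpy as np

def quantize(latents, codebook):
    """Nearest-neighbour vector quantization: each continuous latent
    is replaced by the index of its closest codebook entry, giving a
    discrete sequence that a GPT-like model can predict
    autoregressively."""
    # (n, 1, d) - (1, k, d) -> pairwise squared distances of shape (n, k)
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # 512 discrete codes, dim 64
# Latents lying very close to entries 3, 41 and 7:
latents = codebook[[3, 41, 7]] + 0.01 * rng.normal(size=(3, 64))
indices = quantize(latents, codebook)
# -> indices [3, 41, 7]: the compressed, discrete representation.
```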

Human-Object Interaction Detection • Quantization

Learning Super-Features for Image Retrieval

1 code implementation • ICLR 2022 • Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis

Second, they are typically trained with a global loss that only acts on top of an aggregation of local features; by contrast, testing is based on local feature matching, which creates a discrepancy between training and testing.

Image Retrieval • Retrieval

Barely-Supervised Learning: Semi-Supervised Learning with very few labeled images

no code implementations • 22 Dec 2021 • Thomas Lucas, Philippe Weinzaepfel, Gregory Rogez

We propose a method that leverages self-supervised learning to provide a training signal in the absence of confident pseudo-labels.
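The routing logic implied by this excerpt can be sketched as a simple confidence gate; the threshold value and function name are illustrative assumptions, and the self-supervised loss itself is omitted:

```python
import numpy as np

def training_signal(probs, threshold=0.95):
    """Sketch of the barely-supervised idea: for each unlabeled
    sample, keep the pseudo-label only when the classifier is
    confident; otherwise the sample falls back to a self-supervised
    objective (not shown). Threshold is illustrative.
    """
    confidence = probs.max(axis=1)
    use_pseudo = confidence >= threshold
    pseudo_labels = probs.argmax(axis=1)
    return use_pseudo, pseudo_labels

probs = np.array([[0.98, 0.01, 0.01],   # confident -> pseudo-label 0
                  [0.40, 0.35, 0.25]])  # unconfident -> self-supervised loss
use_pseudo, labels = training_signal(probs)
```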

Pseudo Label

Adaptive Density Estimation for Generative Models

no code implementations • NeurIPS 2019 • Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek

We show that our model significantly improves over existing hybrid models: offering GAN-like samples, IS and FID scores that are competitive with fully adversarial models, and improved likelihood scores.

Density Estimation

Coverage and Quality Driven Training of Generative Image Models

no code implementations • 27 Sep 2018 • Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek

First, we propose a model that extends variational autoencoders by using deterministic invertible transformation layers to map samples from the decoder to the image space.

Mixed batches and symmetric discriminators for GAN training

no code implementations • ICML 2018 • Thomas Lucas, Corentin Tallec, Jakob Verbeek, Yann Ollivier

We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch.
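The batch construction for this scheme can be sketched as follows; the batch size and data stand-ins are illustrative, and the discriminator itself is omitted:

```python
import numpy as np

def mixed_batch(real, fake, rng):
    """Sketch of mixed-batch GAN training: build a discriminator
    batch from a random number of real and fake samples; the
    regression target is the *fraction* of real samples in the
    batch, not a per-sample real/fake label."""
    batch_size = real.shape[0]
    n_real = rng.integers(0, batch_size + 1)
    batch = np.concatenate([real[:n_real], fake[:batch_size - n_real]])
    batch = batch[rng.permutation(batch_size)]  # shuffle real/fake together
    target = n_real / batch_size
    return batch, target

rng = np.random.default_rng(0)
real = np.ones((8, 4))    # stand-ins for real samples
fake = np.zeros((8, 4))   # stand-ins for generator samples
batch, target = mixed_batch(real, fake, rng)
# `target` in [0, 1] is the ratio the discriminator must predict.
```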

Auxiliary Guided Autoregressive Variational Autoencoders

no code implementations • ICLR 2018 • Thomas Lucas, Jakob Verbeek

Our contribution is a training procedure relying on an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder.
