Search Results for author: Thomas Lucas

Found 17 papers, 6 papers with code

T2LM: Long-Term 3D Human Motion Generation from Multiple Sentences

no code implementations 2 Jun 2024 Taeryung Lee, Fabien Baradel, Thomas Lucas, Kyoung Mu Lee, Gregory Rogez

To address these issues, we introduce T2LM, a simple yet effective continuous long-term generation framework that can be trained without sequential data.

Action Generation · Decoder

Cross-view and Cross-pose Completion for 3D Human Understanding

no code implementations CVPR 2024 Matthieu Armando, Salma Galaaoui, Fabien Baradel, Thomas Lucas, Vincent Leroy, Romain Brégier, Philippe Weinzaepfel, Grégory Rogez

Human perception and understanding is a major domain of computer vision which, like many other vision subdomains, stands to gain from the use of large models pre-trained on large datasets.

Human Mesh Recovery · Self-Supervised Learning

Win-Win: Training High-Resolution Vision Transformers from Two Windows

no code implementations 1 Oct 2023 Vincent Leroy, Jerome Revaud, Thomas Lucas, Philippe Weinzaepfel

It is 4 times faster to train than a full-resolution network, and it is straightforward to use at test time compared to existing approaches.

Depth Estimation · Depth Prediction +2

CroCo v2: Improved Cross-view Completion Pre-training for Stereo Matching and Optical Flow

1 code implementation ICCV 2023 Philippe Weinzaepfel, Thomas Lucas, Vincent Leroy, Yohann Cabon, Vaibhav Arora, Romain Brégier, Gabriela Csurka, Leonid Antsfeld, Boris Chidlovskii, Jérôme Revaud

Despite impressive performance for high-level downstream tasks, self-supervised pre-training methods have not yet fully delivered on dense geometric vision tasks such as stereo matching or optical flow.

Optical Flow Estimation · Position +2

PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting

1 code implementation 19 Oct 2022 Thomas Lucas, Fabien Baradel, Philippe Weinzaepfel, Grégory Rogez

The discrete and compressed nature of the latent space allows the GPT-like model to focus on long-range signal, as it removes low-level redundancy in the input signal.

Human-Object Interaction Detection · Quantization
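
The excerpt above describes a discrete, compressed latent space over which a GPT-like model operates. A minimal sketch of that kind of quantization step (VQ-VAE-style nearest-codebook lookup, hypothetical and not the authors' implementation; `codebook` size and dimensions are invented for illustration):

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent vector to the index of its nearest
    codebook entry (L2 distance). A GPT-like prior can then model the
    resulting discrete index sequence instead of raw poses."""
    # (T, D) latents vs (K, D) codebook -> (T, K) squared distances
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d2.argmin(axis=1)      # discrete codes, shape (T,)
    quantized = codebook[indices]    # compressed reconstruction of latents
    return indices, quantized

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codes of dimension 4 (illustrative)
latents = rng.normal(size=(16, 4))   # T=16 motion latents (illustrative)
idx, q = quantize(latents, codebook)
```

Because each latent is replaced by a code index, low-level redundancy is removed and the sequence model only has to capture long-range structure, as the abstract notes.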

CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion

1 code implementation 19 Oct 2022 Philippe Weinzaepfel, Vincent Leroy, Thomas Lucas, Romain Brégier, Yohann Cabon, Vaibhav Arora, Leonid Antsfeld, Boris Chidlovskii, Gabriela Csurka, Jérôme Revaud

More precisely, we propose the pretext task of cross-view completion where the first input image is partially masked, and this masked content has to be reconstructed from the visible content and the second image.

Camera Pose Estimation · Depth Estimation +7
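
The cross-view completion pretext task described above can be sketched as a masking step: most patches of the first view are hidden, and a model would have to reconstruct them from the remaining visible patches plus the second view. The masking ratio and patch shapes below are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def mask_first_view(img1_patches, mask_ratio=0.9, rng=None):
    """Randomly hide a large fraction of the first view's patches.
    The pretext task is to reconstruct the hidden patches from the
    visible ones together with all patches of the second view."""
    rng = rng or np.random.default_rng(0)
    n = img1_patches.shape[0]
    n_masked = int(round(mask_ratio * n))
    perm = rng.permutation(n)
    masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]
    return img1_patches[visible_idx], masked_idx

# 100 toy "patches" of dimension 3, standing in for image tokens
patches1 = np.arange(100 * 3).reshape(100, 3).astype(float)
visible, masked_idx = mask_first_view(patches1, mask_ratio=0.9)
```

Since reconstructing the hidden content requires relating the two viewpoints, the model is pushed to learn the cross-view geometry that downstream 3D tasks rely on.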

Learning Super-Features for Image Retrieval

1 code implementation ICLR 2022 Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis

Second, they are typically trained with a global loss that only acts on top of an aggregation of local features; by contrast, testing is based on local feature matching, which creates a discrepancy between training and testing.

Image Retrieval · Retrieval

Barely-Supervised Learning: Semi-Supervised Learning with very few labeled images

no code implementations 22 Dec 2021 Thomas Lucas, Philippe Weinzaepfel, Gregory Rogez

We propose a method that leverages self-supervised learning to provide a training signal in the absence of confident pseudo-labels.

Pseudo Label
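
The idea of falling back to a self-supervised signal when pseudo-labels are unconfident can be illustrated as a simple routing decision. This is a hypothetical sketch (the `threshold` value and `training_signal` helper are invented), not the paper's method:

```python
import numpy as np

def training_signal(probs, threshold=0.95):
    """For each unlabeled sample, keep the pseudo-label only when the
    prediction is confident; samples below the threshold would instead
    receive a self-supervised objective (here just a routing mask)."""
    conf = probs.max(axis=1)              # per-sample confidence
    use_pseudo = conf >= threshold        # True -> pseudo-label loss
    pseudo_labels = probs.argmax(axis=1)  # hard labels for confident samples
    return use_pseudo, pseudo_labels

probs = np.array([[0.98, 0.02],   # confident -> pseudo-label
                  [0.60, 0.40],   # unconfident -> self-supervised signal
                  [0.10, 0.90]])  # unconfident -> self-supervised signal
use_pseudo, labels = training_signal(probs, threshold=0.95)
```

With very few labels, most samples fall below the threshold, which is precisely the regime where an auxiliary self-supervised loss keeps training from stalling.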

Adaptive Density Estimation for Generative Models

no code implementations NeurIPS 2019 Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek

We show that our model significantly improves over existing hybrid models: offering GAN-like samples, IS and FID scores that are competitive with fully adversarial models, and improved likelihood scores.

Decoder · Density Estimation

Coverage and Quality Driven Training of Generative Image Models

no code implementations 27 Sep 2018 Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek

First, we propose a model that extends variational autoencoders by using deterministic invertible transformation layers to map samples from the decoder to the image space.

Decoder

Mixed batches and symmetric discriminators for GAN training

no code implementations ICML 2018 Thomas Lucas, Corentin Tallec, Jakob Verbeek, Yann Ollivier

We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch.
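
The mixed-batch idea above can be sketched directly: assemble a batch containing a random number of true samples, shuffle it, and use the fraction of true samples as the regression target for the discriminator. A minimal sketch under invented shapes, not the authors' code:

```python
import numpy as np

def mixed_batch(real, fake, rng=None):
    """Build a shuffled batch mixing true and fake samples and return
    the ratio target the discriminator is trained to predict."""
    rng = rng or np.random.default_rng(0)
    n = real.shape[0]
    n_real = int(rng.integers(0, n + 1))       # random number of true samples
    batch = np.concatenate([real[:n_real], fake[:n - n_real]])
    batch = batch[rng.permutation(n)]          # shuffle: order must carry no signal
    target = n_real / n                        # fraction of true samples in the batch
    return batch, target

real = np.ones((8, 2))    # toy "real" samples
fake = np.zeros((8, 2))   # toy "fake" samples
batch, target = mixed_batch(real, fake)
```

Since the ratio is a property of the whole batch rather than of any single sample, this objective pairs naturally with the permutation-invariant (symmetric) discriminators in the paper's title.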

Auxiliary Guided Autoregressive Variational Autoencoders

no code implementations ICLR 2018 Thomas Lucas, Jakob Verbeek

Our contribution is a training procedure relying on an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder.

Decoder
