Search Results for author: Thomas Lucas

Found 7 papers, 1 paper with code

Learning Super-Features for Image Retrieval

1 code implementation ICLR 2022 Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis

Second, they are typically trained with a global loss that only acts on top of an aggregation of local features; by contrast, testing is based on local feature matching, which creates a discrepancy between training and testing.

Image Retrieval
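
As a toy illustration of the discrepancy described above (not code from the paper), the sketch below contrasts a training-time score computed on mean-pooled global descriptors with a test-time score based on matching individual local features; the pooling choice and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def global_score(feats_a, feats_b):
    # Training-time objective: aggregate local features into one
    # global descriptor per image, then compare the aggregates.
    ga = F.normalize(feats_a.mean(dim=0), dim=0)
    gb = F.normalize(feats_b.mean(dim=0), dim=0)
    return ga @ gb  # a single cosine similarity

def local_match_score(feats_a, feats_b):
    # Test-time retrieval: match individual local features and
    # accumulate the best correspondence per feature.
    sim = F.normalize(feats_a, dim=1) @ F.normalize(feats_b, dim=1).T
    return sim.max(dim=1).values.sum()  # sum of best matches

# Two images with 100 local features of dimension 128 each.
fa, fb = torch.randn(100, 128), torch.randn(100, 128)
print(global_score(fa, fb), local_match_score(fa, fb))
```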

Barely-Supervised Learning: Semi-Supervised Learning with very few labeled images

no code implementations 22 Dec 2021 Thomas Lucas, Philippe Weinzaepfel, Gregory Rogez

We propose a method that leverages self-supervised learning to provide a training signal in the absence of confident pseudo-labels.
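
A minimal sketch of this idea, assuming a FixMatch-style confidence threshold and using rotation prediction as an illustrative stand-in for the self-supervised signal; `classifier` and `rotation_head` are hypothetical modules, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def unlabeled_loss(classifier, rotation_head, x_weak, x_strong,
                   threshold=0.95):
    # Confident pseudo-labels drive a standard cross-entropy term;
    # the rest of the batch gets a self-supervised signal instead
    # (rotation prediction, as an illustrative stand-in).
    with torch.no_grad():
        probs = F.softmax(classifier(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
    mask = conf >= threshold

    loss = x_weak.new_zeros(())
    if mask.any():
        loss = loss + F.cross_entropy(classifier(x_strong[mask]),
                                      pseudo[mask])
    if (~mask).any():
        k = int(torch.randint(0, 4, (1,)))           # rotation class
        rotated = torch.rot90(x_weak[~mask], k, dims=(2, 3))
        target = torch.full((rotated.size(0),), k, dtype=torch.long)
        loss = loss + F.cross_entropy(rotation_head(rotated), target)
    return loss
```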

Adaptive Density Estimation for Generative Models

no code implementations NeurIPS 2019 Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek

We show that our model significantly improves over existing hybrid models: it offers GAN-like samples, IS and FID scores competitive with fully adversarial models, and improved likelihood scores.

Density Estimation

Coverage and Quality Driven Training of Generative Image Models

no code implementations 27 Sep 2018 Thomas Lucas, Konstantin Shmelkov, Karteek Alahari, Cordelia Schmid, Jakob Verbeek

First, we propose a model that extends variational autoencoders by using deterministic invertible transformation layers to map samples from the decoder to the image space.
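
The sketch below shows one such deterministic invertible layer, a RealNVP-style affine coupling, to illustrate how decoder outputs can be mapped to image space while keeping the change-of-variables log-determinant tractable; the paper's actual layers may differ.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One deterministic invertible layer (RealNVP-style coupling),
    mapping decoder outputs toward image space with a tractable
    change-of-variables correction."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Linear(dim // 2, dim)  # predicts scale and shift

    def forward(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = y2 * log_s.exp() + t      # transform half, conditioned on the other
        log_det = log_s.sum(dim=1)     # change-of-variables term
        return torch.cat([y1, x2], dim=1), log_det

    def inverse(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, (x2 - t) * (-log_s).exp()], dim=1)
```

Because the layer is invertible, the decoder's density and the accumulated log-determinant combine into an exact likelihood in image space.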

Mixed batches and symmetric discriminators for GAN training

no code implementations ICML 2018 Thomas Lucas, Corentin Tallec, Jakob Verbeek, Yann Ollivier

We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch.
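
A minimal sketch of the mixed-batch objective, assuming a hypothetical discriminator with a per-sample `feature` extractor and a `head` readout; mean-pooling the features makes the prediction permutation-invariant, echoing the symmetric discriminators in the title.

```python
import torch
import torch.nn as nn

def mixed_batch_step(D, real, fake):
    # Mix real and fake samples in one batch with a random ratio,
    # then train D to regress that ratio rather than per-sample labels.
    n = real.size(0)
    n_real = int(torch.randint(0, n + 1, (1,)))
    batch = torch.cat([real[:n_real], fake[: n - n_real]], dim=0)
    batch = batch[torch.randperm(n)]    # shuffle so order carries no signal

    # Symmetric (permutation-invariant) readout: per-sample features
    # are mean-pooled over the batch before the ratio prediction.
    feats = D.feature(batch)                         # (n, d), hypothetical API
    ratio_pred = torch.sigmoid(D.head(feats.mean(dim=0)))
    target = torch.full_like(ratio_pred, n_real / n)
    return nn.functional.binary_cross_entropy(ratio_pred, target)
```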

Auxiliary Guided Autoregressive Variational Autoencoders

no code implementations ICLR 2018 Thomas Lucas, Jakob Verbeek

Our contribution is a training procedure relying on an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder.
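
A rough sketch of such a training objective, with `encoder`, `ar_decoder`, and `aux_decoder` as hypothetical modules: an ELBO whose reconstruction term comes from the autoregressive decoder, plus a weighted auxiliary reconstruction from the latent alone.

```python
import torch
import torch.nn.functional as F

def loss(encoder, ar_decoder, aux_decoder, x, lam=1.0):
    # Standard ELBO with an autoregressive decoder, plus an auxiliary
    # reconstruction term that forces the latent z to carry global
    # image content on its own (all module names are illustrative).
    mu, log_var = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterize

    nll = -ar_decoder.log_prob(x, z)                         # AR likelihood term
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var).sum(dim=1)
    aux = F.mse_loss(aux_decoder(z), x, reduction="none").flatten(1).sum(dim=1)

    # The weight lam trades off what z must explain (via aux) against
    # what the powerful autoregressive decoder is left to model.
    return (nll + kl + lam * aux).mean()
```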
