1 code implementation • 24 May 2024 • Huy V. Vo, Vasil Khalidov, Timothée Darcet, Théo Moutakanni, Nikita Smetanin, Marc Szafraniec, Hugo Touvron, Camille Couprie, Maxime Oquab, Armand Joulin, Hervé Jégou, Patrick Labatut, Piotr Bojanowski
This manual process has limitations similar to those encountered in supervised learning, e.g., crowd-sourced data selection is costly and time-consuming, which prevents scaling up the dataset size.
no code implementations • 18 Mar 2024 • François Porcher, Camille Couprie, Marc Szafraniec, Jakob Verbeek
Despite the availability of large datasets for tasks like image classification and image-text alignment, labeled data for more complex recognition tasks, such as detection and segmentation, is less abundant.
no code implementations • CVPR 2024 • Tariq Berrada, Jakob Verbeek, Camille Couprie, Karteek Alahari
Semantic image synthesis, i.e., generating images from user-provided semantic label maps, is an important conditional image generation task as it allows controlling both the content and the spatial layout of generated images.
1 code implementation • 3 Aug 2023 • Tariq Berrada, Camille Couprie, Karteek Alahari, Jakob Verbeek
Although instance segmentation methods have improved considerably, the dominant paradigm is to rely on fully-annotated training images, which are tedious to obtain.
1 code implementation • 14 Apr 2023 • Jamie Tolan, Hung-I Yang, Ben Nosarzewski, Guillaume Couairon, Huy Vo, John Brandt, Justine Spore, Sayantan Majumdar, Daniel Haziza, Janaki Vamaraju, Theo Moutakanni, Piotr Bojanowski, Tracy Johns, Brian White, Tobias Tiecke, Camille Couprie
The maps are generated by extracting features from a self-supervised model trained on Maxar imagery from 2017 to 2020, and training a dense prediction decoder against aerial lidar maps.
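As a rough illustration of that pipeline, the sketch below trains a small dense decoder on frozen self-supervised patch features to regress per-pixel canopy height against lidar targets; names such as `HeightDecoder` and the `backbone` callable are illustrative assumptions, not the released code.

```python
import torch
import torch.nn as nn

class HeightDecoder(nn.Module):
    """Toy dense-prediction decoder: maps frozen patch features to a per-pixel height map."""
    def __init__(self, feat_dim=768, patch=16):
        super().__init__()
        self.patch = patch
        self.head = nn.Sequential(
            nn.Conv2d(feat_dim, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 1, 1),                                # one channel: canopy height
        )

    def forward(self, patch_feats, image_hw):
        h, w = image_hw
        gh, gw = h // self.patch, w // self.patch                # patch grid size
        x = patch_feats.transpose(1, 2).reshape(-1, patch_feats.shape[-1], gh, gw)
        return nn.functional.interpolate(self.head(x), size=(h, w),
                                         mode="bilinear", align_corners=False)

decoder = HeightDecoder()
opt = torch.optim.AdamW(decoder.parameters(), lr=1e-4)

def train_step(image, lidar_height, backbone):
    with torch.no_grad():                                        # the self-supervised backbone stays frozen
        feats = backbone(image)                                  # (B, num_patches, feat_dim) patch tokens
    pred = decoder(feats, image.shape[-2:]).squeeze(1)           # (B, H, W) predicted heights
    loss = nn.functional.l1_loss(pred, lidar_height)             # regress against aerial lidar targets
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```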
no code implementations • 25 Nov 2022 • Marlène Careil, Stéphane Lathuilière, Camille Couprie, Jakob Verbeek
To allow for more control, image synthesis can be conditioned on semantic segmentation maps that tell the generator where objects should appear in the image.
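A minimal sketch of this kind of conditioning, assuming a toy generator that simply concatenates a one-hot label map with spatial noise; the models studied in the paper use far richer conditioning mechanisms.

```python
import torch
import torch.nn as nn

class LabelMapGenerator(nn.Module):
    """Toy conditional generator: the one-hot label map is concatenated with spatial noise,
    so the layout of the output follows the layout of the input map."""
    def __init__(self, n_classes=20, noise_ch=16):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Conv2d(n_classes + noise_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1), nn.Tanh(),                      # RGB output in [-1, 1]
        )

    def forward(self, label_map, noise):
        onehot = nn.functional.one_hot(label_map, self.n_classes).permute(0, 3, 1, 2).float()
        return self.net(torch.cat([onehot, noise], dim=1))

G = LabelMapGenerator()
labels = torch.randint(0, 20, (1, 128, 128))                     # per-pixel class indices
z = torch.randn(1, 16, 128, 128)                                 # spatial noise for appearance variation
fake = G(labels, z)                                              # (1, 3, 128, 128)
```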
no code implementations • 16 Mar 2022 • Maxime Oquab, Daniel Haziza, Ludovic Schwartz, Tao Xu, Katayoun Zand, Rui Wang, Peirong Liu, Camille Couprie
As the quality of few-shot facial animation from landmarks increases, new applications become possible, such as ultra-low-bandwidth video chat compression with a high degree of realism.
no code implementations • 1 Dec 2020 • Maxime Oquab, Pierre Stock, Oran Gafni, Daniel Haziza, Tao Xu, Peizhao Zhang, Onur Celebi, Yana Hasson, Patrick Labatut, Bobo Bose-Kolanu, Thibault Peyronel, Camille Couprie
To unlock video chat for hundreds of millions of people hindered by poor connectivity or unaffordable data costs, we propose to authentically reconstruct faces on the receiver's device using facial landmarks extracted at the sender's side and transmitted over the network.
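A toy sketch of the sender/receiver split described here, with hypothetical `encode_frame`/`decode_frame` helpers and a placeholder face generator; it is only meant to show how little bandwidth a landmark stream requires.

```python
import numpy as np

# Sender side: a face-landmark detector yields a small set of (x, y) points per frame.
def encode_frame(landmarks_xy, frame_w, frame_h):
    """Quantize normalized landmark coordinates to one byte per coordinate."""
    pts = np.asarray(landmarks_xy, dtype=np.float32).copy()
    pts[:, 0] /= frame_w
    pts[:, 1] /= frame_h
    return np.clip(pts * 255, 0, 255).astype(np.uint8).tobytes()

# Receiver side: the landmarks drive a generative face model (placeholder here)
# that re-animates a previously shared reference image of the sender.
def decode_frame(payload, generator, frame_w, frame_h):
    pts = np.frombuffer(payload, dtype=np.uint8).reshape(-1, 2).astype(np.float32) / 255.0
    pts[:, 0] *= frame_w
    pts[:, 1] *= frame_h
    return generator(pts)

# 70 landmarks * 2 coordinates * 1 byte * 25 fps ~ 3.5 kB/s,
# orders of magnitude below typical video-call bitrates.
```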
1 code implementation • 25 Sep 2020 • Baptiste Roziere, Nathanal Carraz Rakotonirina, Vlad Hosu, Andry Rasoanaivo, Hanhe Lin, Camille Couprie, Olivier Teytaud
More generally, our approach can be used to optimize any method based on noise injection.
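As an illustration of optimizing a method based on noise injection, here is a minimal sketch using a simple (1+1) evolution strategy; the helper names and the optimizer choice are assumptions, not the paper's exact procedure.

```python
import numpy as np

def optimize_injected_noise(generate, score, dim, budget=200, sigma=0.5, seed=0):
    """Simple (1+1) evolution strategy over an injected noise vector:
    `generate(z)` produces an output from noise z, `score(x)` returns a quality to maximize."""
    rng = np.random.default_rng(seed)
    best_z = rng.standard_normal(dim)
    best_score = score(generate(best_z))
    for _ in range(budget):
        candidate = best_z + sigma * rng.standard_normal(dim)
        s = score(generate(candidate))
        if s > best_score:                   # keep the candidate only if it improves quality
            best_z, best_score = candidate, s
    return best_z, best_score
```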
no code implementations • ECCV 2020 • Othman Sbai, Camille Couprie, Mathieu Aubry
In this paper, we systematically study the effect of variations in the training data by evaluating deep features trained on different image sets in a few-shot classification setting.
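A minimal sketch of one common way to run such a few-shot evaluation of frozen deep features, using a nearest-centroid classifier; the exact protocol in the paper may differ.

```python
import numpy as np

def nearest_centroid_few_shot(support_feats, support_labels, query_feats):
    """Classify query images from frozen deep features:
    one centroid per class is built from the few labeled support images,
    and queries are assigned to the most similar centroid (cosine similarity)."""
    support_feats = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    query_feats = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    classes = np.unique(support_labels)
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0) for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    return classes[np.argmax(query_feats @ centroids.T, axis=1)]
```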
1 code implementation • 17 Jun 2019 • Baptiste Rozière, Morgane Riviere, Olivier Teytaud, Jérémy Rapin, Yann Lecun, Camille Couprie
We design a simple optimization method to find the latent parameters whose generation is closest to any input inspirational image.
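A gradient-based sketch of this idea, assuming a generator `G` and a plain pixel distance; the paper's actual optimizer and distance may differ.

```python
import torch

def invert_to_image(G, target, z_dim=512, steps=500, lr=0.05):
    """Search for latent parameters whose generation is closest to a target image.
    `G` maps a latent vector to an image with the same shape as `target`."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.l1_loss(G(z), target)   # pixel distance; a perceptual loss also works
        loss.backward()
        opt.step()
    return z.detach()
```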
no code implementations • ICLR 2019 • Mohamed Elfeki, Camille Couprie, Mohamed Elhoseiny
Embedded in adversarial training and a variational autoencoder, our Generative DPP approach shows consistent resistance to mode collapse on a wide variety of synthetic data and natural image datasets, including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods in data efficiency, convergence time, and generation quality.
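A minimal sketch of the core log-determinant diversity idea behind DPP-based objectives; this is not the paper's exact formulation, only the ingredient that penalizes near-duplicate samples.

```python
import torch

def dpp_diversity(features, eps=1e-4):
    """Determinantal-point-process style diversity score for a batch of feature vectors:
    the log-determinant of their similarity kernel is large when the samples are diverse
    and drops sharply when samples become near-duplicates."""
    f = torch.nn.functional.normalize(features, dim=1)
    K = f @ f.t()                                        # cosine-similarity kernel
    K = K + eps * torch.eye(K.shape[0])                  # small jitter for numerical stability
    return torch.logdet(K)

generated_feats = torch.randn(8, 128)                    # e.g. features of 8 generated samples
loss_diversity = -dpp_diversity(generated_feats)         # add to the generator loss with a small weight
```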
no code implementations • 13 Dec 2018 • Othman Sbai, Camille Couprie, Mathieu Aubry
Deep image generation is becoming a tool to enhance artists' and designers' creative potential.
4 code implementations • 30 Nov 2018 • Mohamed Elfeki, Camille Couprie, Morgane Riviere, Mohamed Elhoseiny
Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic-looking images.
no code implementations • CVPR 2018 • Siddhartha Chandra, Camille Couprie, Iasonas Kokkinos
In this work we introduce a time- and memory-efficient method for structured prediction that couples neuron decisions across both space and time.
Ranked #9 on Semantic Segmentation on CamVid
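A toy illustration of coupling per-pixel decisions across space and time; the actual method solves a structured prediction problem rather than this heuristic smoothing loop.

```python
import torch
import torch.nn.functional as F

def couple_space_time(logits_t, logits_prev, warp=None, iters=3, w_space=0.5, w_time=0.3):
    """Heuristic coupling of per-pixel class scores across space and time:
    each iteration mixes a pixel's scores with its 3x3 spatial neighbourhood and with
    the (optionally warped) scores of the previous frame."""
    prev = warp(logits_prev) if warp is not None else logits_prev
    c = logits_t.shape[1]
    blur = torch.ones(c, 1, 3, 3, device=logits_t.device) / 9.0   # depthwise 3x3 average
    x = logits_t
    for _ in range(iters):
        neigh = F.conv2d(x, blur, padding=1, groups=c)
        x = (1 - w_space - w_time) * x + w_space * neigh + w_time * prev
    return x
```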
1 code implementation • 3 Apr 2018 • Othman Sbai, Mohamed Elhoseiny, Antoine Bordes, Yann Lecun, Camille Couprie
Can an algorithm create original and compelling fashion designs to serve as an inspirational assistant?
1 code implementation • ECCV 2018 • Pauline Luc, Camille Couprie, Yann Lecun, Jakob Verbeek
We apply the "detection head'" of Mask R-CNN on the predicted features to produce the instance segmentation of future frames.
2 code implementations • ICCV 2017 • Pauline Luc, Natalia Neverova, Camille Couprie, Jakob Verbeek, Yann Lecun
The ability to predict and therefore to anticipate the future is an important attribute of intelligence.
no code implementations • 25 Feb 2017 • Camille Couprie, Laurent Duval, Maxime Moreaud, Sophie Hénon, Mélinda Tebib, Vincent Souchon
Comprehensive two-dimensional gas chromatography (GC×GC) plays a central role in the elucidation of complex samples.
1 code implementation • 25 Nov 2016 • Pauline Luc, Camille Couprie, Soumith Chintala, Jakob Verbeek
Adversarial training has been shown to produce state-of-the-art results for generative image modeling.
5 code implementations • 17 Nov 2015 • Michael Mathieu, Camille Couprie, Yann Lecun
Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics.
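A minimal sketch of such a frame-prediction objective, keeping only an L1 pixel term plus an adversarial term; the paper's multi-scale architecture and gradient-difference loss are omitted, and `G`/`D` are placeholder networks.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def generator_loss(G, D, past_frames, true_next, lambda_adv=0.05):
    """Pixel reconstruction plus adversarial realism for the predicted next frame."""
    pred = G(past_frames)
    pixel = nn.functional.l1_loss(pred, true_next)        # keep the prediction close to the real frame
    logits = D(past_frames, pred)
    adv = bce(logits, torch.ones_like(logits))            # push the prediction toward realistic frames
    return pixel + lambda_adv * adv

def discriminator_loss(G, D, past_frames, true_next):
    """Train D to separate real future frames from predicted ones."""
    with torch.no_grad():
        pred = G(past_frames)
    real_logits = D(past_frames, true_next)
    fake_logits = D(past_frames, pred)
    return (bce(real_logits, torch.ones_like(real_logits))
            + bce(fake_logits, torch.zeros_like(fake_logits)))
```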