Search Results for author: Stéphane Lathuilière

Found 23 papers, 10 papers with code

Click to Move: Controlling Video Generation with Sparse Motion

no code implementations 19 Aug 2021 Pierfrancesco Ardino, Marco De Nadai, Bruno Lepri, Elisa Ricci, Stéphane Lathuilière

This paper introduces Click to Move (C2M), a novel framework for video generation where the user can control the motion of the synthesized video through mouse clicks that specify simple trajectories for the key objects in the scene.

Video Generation

HEMP: High-order Entropy Minimization for neural network comPression

no code implementations 12 Jul 2021 Enzo Tartaglione, Stéphane Lathuilière, Attilio Fiandrotti, Marco Cagnazzo, Marco Grangetto

We formulate the entropy of a quantized artificial neural network as a differentiable function that can be plugged as a regularization term into the cost function minimized by gradient descent.

Neural Network Compression · Quantization
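
The HEMP abstract above describes the entropy of a quantized network used as a differentiable regularization term. Below is a minimal PyTorch-style sketch of that general idea, assuming a soft assignment of weights to quantization bins; the bin centers `levels`, temperature `tau`, and weight `lambda_h` are illustrative placeholders, not the paper's exact formulation.

```python
import torch

def soft_entropy(weights, levels, tau=0.1):
    """Differentiable estimate of the entropy of quantized weights.

    Weights are softly assigned to the quantization levels via a softmax
    over negative squared distances; the resulting bin probabilities are
    plugged into the usual entropy formula, keeping everything differentiable.
    """
    w = weights.view(-1, 1)                        # (N, 1)
    d = -(w - levels.view(1, -1)).pow(2) / tau     # (N, L) negative sq. distances
    assign = torch.softmax(d, dim=1)               # soft one-hot per weight
    p = assign.mean(dim=0)                         # empirical bin probabilities
    return -(p * torch.log(p + 1e-12)).sum()

def training_step(model, criterion, optimizer, x, y, levels, lambda_h=1e-3):
    """Task loss plus entropy regularization (model/criterion are placeholders)."""
    loss = criterion(model(x), y)
    for param in model.parameters():
        loss = loss + lambda_h * soft_entropy(param, levels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```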

Ultra-low bitrate video conferencing using deep image animation

no code implementations 1 Dec 2020 Goluck Konuko, Giuseppe Valenzise, Stéphane Lathuilière

In this work we propose a novel deep learning approach for ultra-low bitrate video compression for video conferencing applications.

Image Animation · Video Compression

DR2S: Deep Regression with Region Selection for Camera Quality Evaluation

no code implementations 21 Sep 2020 Marcelin Tworski, Stéphane Lathuilière, Salim Belkarfa, Attilio Fiandrotti, Marco Cagnazzo

In this work, we tackle the problem of estimating a camera's capability to preserve fine texture details under a given lighting condition.

Learning to Cluster under Domain Shift

1 code implementation ECCV 2020 Willi Menapace, Stéphane Lathuilière, Elisa Ricci

While unsupervised domain adaptation methods based on deep architectures have achieved remarkable success in many computer vision tasks, they rely on a strong assumption, i.e., that labeled source data must be available.

Deep Clustering · Unsupervised Domain Adaptation

Online Continual Learning under Extreme Memory Constraints

1 code implementation ECCV 2020 Enrico Fini, Stéphane Lathuilière, Enver Sangineto, Moin Nabi, Elisa Ricci

Continual Learning (CL) aims to develop agents emulating the human ability to sequentially learn new tasks while being able to retain knowledge obtained from past experiences.

Continual Learning

Motion-supervised Co-Part Segmentation

1 code implementation 7 Apr 2020 Aliaksandr Siarohin, Subhankar Roy, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe

To overcome this limitation, we propose a self-supervised deep learning method for co-part segmentation.

Progressive Fusion for Unsupervised Binocular Depth Estimation using Cycled Networks

1 code implementation 17 Sep 2019 Andrea Pilzer, Stéphane Lathuilière, Dan Xu, Mihai Marian Puscas, Elisa Ricci, Nicu Sebe

Extensive experiments on the publicly available KITTI, Cityscapes and ApolloScape datasets demonstrate the effectiveness of the proposed model, which is competitive with other unsupervised deep learning methods for depth prediction.

Data Augmentation · Monocular Depth Estimation · +1

Budget-Aware Adapters for Multi-Domain Learning

no code implementations ICCV 2019 Rodrigo Berriel, Stéphane Lathuilière, Moin Nabi, Tassilo Klein, Thiago Oliveira-Santos, Nicu Sebe, Elisa Ricci

To implement this idea, we derive specialized deep models for each domain by adapting a pre-trained architecture but, unlike other methods, we propose a novel strategy to automatically adjust the computational complexity of the network.
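
As an illustration of the adapter idea described above (per-domain modules attached to a shared pre-trained backbone, with a knob that trades accuracy for computation), here is a hedged sketch; the 1x1 adapter, the channel gate and the `budget` parameter are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class BudgetedAdapter(nn.Module):
    """Domain-specific 1x1 adapter applied on top of a frozen backbone layer.

    A learnable per-channel gate is hard-thresholded so that only roughly a
    fraction `budget` of channels stays active, reducing computation.
    Illustrative design only, not the exact method of the paper.
    """
    def __init__(self, channels, budget=0.5):
        super().__init__()
        self.adapter = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.gate = nn.Parameter(torch.zeros(channels))
        self.budget = budget

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        k = max(1, int(self.budget * g.numel()))        # channels allowed by the budget
        thresh = torch.topk(g, k).values.min()
        mask = (g >= thresh).float() + g - g.detach()   # hard mask forward, gradients flow through g
        return x + self.adapter(x) * mask.view(1, -1, 1, 1)
```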

Attention-based Fusion for Multi-source Human Image Generation

no code implementations 7 May 2019 Stéphane Lathuilière, Enver Sangineto, Aliaksandr Siarohin, Nicu Sebe

We present a generalization of the person-image generation task, in which a human image is generated conditioned on a target pose and a set X of source appearance images.

Image Generation
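
To make the multi-source conditioning above concrete, here is a hedged sketch of attention-weighted fusion of features extracted from a set X of source images; the scaled dot-product attention and the feature shapes are generic placeholders, not the paper's architecture.

```python
import torch

def fuse_sources(source_feats, pose_query):
    """Attention-weighted fusion of K source appearance features.

    source_feats: (K, C) features, one per source image in the set X.
    pose_query:   (C,) feature describing the target pose.
    Returns a single (C,) fused appearance feature.
    """
    scores = source_feats @ pose_query / pose_query.shape[0] ** 0.5  # (K,)
    weights = torch.softmax(scores, dim=0)                           # attention over sources
    return (weights.unsqueeze(1) * source_feats).sum(dim=0)

# Example with 3 source images and 128-dimensional features.
feats = torch.randn(3, 128)
query = torch.randn(128)
fused = fuse_sources(feats, query)   # shape: (128,)
```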

Appearance and Pose-Conditioned Human Image Generation using Deformable GANs

1 code implementation 30 Apr 2019 Aliaksandr Siarohin, Stéphane Lathuilière, Enver Sangineto, Nicu Sebe

Specifically, given an image xa of a person and a target pose P(xb), extracted from a different image xb, we synthesize a new image of that person in pose P(xb), while preserving the visual details in xa.

Data Augmentation · Image Generation · +1
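
Written out, the task described above amounts to learning a conditional generator. The following is a hedged, simplified formulation; the deformable skip connections that give the paper its name, and the exact loss terms, are omitted.

```latex
\hat{x}_b = G\big(x_a,\; P(x_b)\big),
\qquad
\mathcal{L}(G, D) = \mathcal{L}_{\mathrm{GAN}}(G, D)
  + \lambda\, \mathcal{L}_{\mathrm{rec}}\big(\hat{x}_b,\, x_b\big)
```

Here the generator receives the appearance of x_a and the target pose P(x_b), and the reconstruction term encourages the output to match the pose of x_b while preserving the visual details of x_a.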

Online Adaptation through Meta-Learning for Stereo Depth Estimation

no code implementations 17 Apr 2019 Zhen-Yu Zhang, Stéphane Lathuilière, Andrea Pilzer, Nicu Sebe, Elisa Ricci, Jian Yang

Our proposal is evaluated on the well-established KITTI dataset, where we show that our online method is competitive with state-of-the-art algorithms trained in a batch setting.

Meta-Learning · Stereo Depth Estimation

Refine and Distill: Exploiting Cycle-Inconsistency and Knowledge Distillation for Unsupervised Monocular Depth Estimation

no code implementations CVPR 2019 Andrea Pilzer, Stéphane Lathuilière, Nicu Sebe, Elisa Ricci

Therefore, recent works have proposed deep architectures that address the monocular depth prediction task as a reconstruction problem, thus avoiding the need to collect ground-truth depth.

Knowledge Distillation · Monocular Depth Estimation
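
The "reconstruction problem" framing above is commonly implemented as view synthesis: predict a disparity map, warp one view of a stereo pair onto the other, and penalize the photometric error. Below is a minimal sketch of that general recipe; it illustrates the self-supervised signal only, not the paper's specific cycle-inconsistency and distillation scheme.

```python
import torch
import torch.nn.functional as F

def photometric_reconstruction_loss(left, right, disparity):
    """Self-supervised depth signal from a rectified stereo pair.

    left, right: (B, 3, H, W) images in [0, 1].
    disparity:   (B, 1, H, W) predicted horizontal disparity, expressed in the
                 normalized [-1, 1] coordinates used by grid_sample.
    The right image is warped toward the left view with the predicted
    disparity; the L1 difference to the real left image is the loss.
    """
    b, _, h, w = left.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1).clone()
    grid[..., 0] = grid[..., 0] - disparity.squeeze(1)       # shift x-coordinates
    warped = F.grid_sample(right, grid, align_corners=True)  # right view resampled at left pixels
    return (warped - left).abs().mean()
```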

Extended Gaze Following: Detecting Objects in Videos Beyond the Camera Field of View

no code implementations 28 Feb 2019 Benoit Massé, Stéphane Lathuilière, Pablo Mesejo, Radu Horaud

In this paper we address the problems of detecting objects of interest in a video and of estimating their locations, solely from the gaze directions of people present in the video.
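
As a toy illustration of recovering a location purely from gaze directions, here is a hedged sketch of the underlying geometric intuition, not the paper's probabilistic model: with several observers, the point that best agrees with all gaze rays can be found by least squares.

```python
import numpy as np

def triangulate_from_gaze(origins, directions):
    """Least-squares point closest to a set of gaze rays (2D or 3D).

    origins:    (N, D) head positions of the observers.
    directions: (N, D) unit gaze direction vectors.
    Minimizes sum_i || (I - d_i d_i^T)(p - o_i) ||^2 over p.
    """
    D = origins.shape[1]
    A = np.zeros((D, D))
    b = np.zeros(D)
    for o, d in zip(origins, directions):
        P = np.eye(D) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two observers whose gaze rays intersect at (2, 2).
origins = np.array([[0.0, 0.0], [4.0, 0.0]])
dirs = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
print(triangulate_from_gaze(origins, dirs))   # ~ [2., 2.]
```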

A Comprehensive Analysis of Deep Regression

2 code implementations 22 Mar 2018 Stéphane Lathuilière, Pablo Mesejo, Xavier Alameda-Pineda, Radu Horaud

Deep learning has revolutionized data science, and its popularity has recently grown exponentially, as has the number of papers employing deep networks.

Pose Estimation
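
For context on what "deep regression" refers to here, below is a minimal sketch of the vanilla setup typically analyzed in such studies: a convolutional backbone with a fully connected regression head trained with an L2 loss. The backbone and dimensions are placeholders, not the networks studied in the paper.

```python
import torch
import torch.nn as nn

# Vanilla deep regression: backbone features -> linear layer -> continuous targets.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 2)            # e.g. regress a 2-D pose target
model = nn.Sequential(backbone, head)

x = torch.randn(8, 3, 64, 64)      # a batch of images
y = torch.randn(8, 2)              # continuous ground-truth targets
loss = nn.functional.mse_loss(model(x), y)   # L2 regression loss
loss.backward()
```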

Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction

no code implementations 18 Nov 2017 Stéphane Lathuilière, Benoit Massé, Pablo Mesejo, Radu Horaud

Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision.

Human-Robot Interaction · Q-Learning
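
Since the entry above is tagged with Q-Learning, here is a hedged sketch of a neural Q-learning update of the kind such a gaze controller could use; the state and action dimensions, reward, and network are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

# Q-network mapping an audio-visual state vector to one value per gaze action.
state_dim, n_actions, gamma = 16, 5, 0.99
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def q_learning_step(state, action, reward, next_state):
    """One temporal-difference update: Q(s, a) -> r + gamma * max_a' Q(s', a')."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_sa - target).pow(2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy transition (in practice the reward would reflect, e.g., how well the
# robot keeps people in its field of view).
s, s_next = torch.randn(state_dim), torch.randn(state_dim)
q_learning_step(s, action=2, reward=1.0, next_state=s_next)
```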
