Search Results for author: Robin Kips

Found 6 papers, 1 paper with code

Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model

1 code implementation • CVPR 2023 • Yuming Du, Robin Kips, Albert Pumarola, Sebastian Starke, Ali Thabet, Artsiom Sanakoyeu

A particular challenge is that only a sparse tracking signal is available from standalone HMDs (Head Mounted Devices), often limited to tracking the user's head and wrists.

Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers

no code implementations • 12 May 2022 • Robin Kips, Ruowei Jiang, Sileye Ba, Brendan Duke, Matthieu Perrot, Pietro Gori, Isabelle Bloch

In this paper we propose a novel framework based on deep learning to build a real-time inverse graphics encoder that learns to map a single example image into the parameter space of a given augmented reality rendering engine.

Neural Rendering · Self-Supervised Learning +1
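As a rough illustration of the inverse-graphics-encoder idea described above (this is a toy sketch, not the paper's implementation): an encoder is trained so that feeding its output back through the rendering engine reproduces the example image. Here the renderer is a stand-in linear map from a 2-parameter appearance vector to a 4-pixel "image"; the paper instead uses a CNN encoder and a learned differentiable renderer.

```python
import numpy as np

# Toy "rendering engine": maps a 2-parameter appearance vector
# (think color, gloss) to a 4-pixel image.
BASIS = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [0.5, 0.5]])

def render(params):
    # Stand-in graphics engine: parameters -> image.
    return BASIS @ params

def encode(image, W):
    # Linear inverse-graphics encoder: image -> renderer parameters.
    return W @ image

# Train the encoder to invert the renderer, i.e. minimize
# ||render(encode(image)) - image||^2 over randomly rendered images.
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(2, 4))
lr = 0.05
for _ in range(5000):
    true_params = rng.uniform(size=2)
    image = render(true_params)
    residual = render(encode(image, W)) - image
    # Chain rule through the linear renderer gives this gradient w.r.t. W.
    W -= lr * np.outer(BASIS.T @ residual, image)

# A single example image now yields parameters that can drive
# real-time rendering in the graphics engine.
example = render(np.array([0.3, 0.7]))
recovered = encode(example, W)
```

The key property, as in the paper, is that the expensive optimization happens once at training time; at inference a single forward pass maps the example image to renderer parameters.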

Hair Color Digitization through Imaging and Deep Inverse Graphics

no code implementations • 8 Feb 2022 • Robin Kips, Panagiotis-Alexandros Bokaris, Matthieu Perrot, Pietro Gori, Isabelle Bloch

Since rendering realistic hair images requires path-tracing rendering, the conventional inverse graphics approach based on differentiable rendering is intractable.

Deep Graphics Encoder for Real-Time Video Makeup Synthesis from Example

no code implementations • 12 May 2021 • Robin Kips, Ruowei Jiang, Sileye Ba, Edmund Phung, Parham Aarabi, Pietro Gori, Matthieu Perrot, Isabelle Bloch

While makeup virtual-try-on is now widespread, parametrizing a computer graphics rendering engine for synthesizing images of a given cosmetics product remains a challenging task.

Virtual Try-on

Learning Long-Term Style-Preserving Blind Video Temporal Consistency

no code implementations • 12 Mar 2021 • Hugo Thimonier, Julien Despois, Robin Kips, Matthieu Perrot

When trying to independently apply image-trained algorithms to successive frames in videos, noxious flickering tends to appear.

Image Manipulation · Style Transfer +2
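The flicker problem described above can be made concrete with a small sketch (illustrative only; the metric, the toy "stylization", and the smoothing scheme are assumptions, not the paper's method): measure flicker as the mean frame-to-frame difference of the stylized video, and note that even a simple temporal blend reduces it.

```python
import numpy as np

def stylize(frame):
    # Toy per-frame "image-trained algorithm": a gamma curve applied
    # independently to each frame, which is what lets flicker appear.
    return frame ** 0.8

def temporal_flicker(frames):
    # Mean |f_t - f_{t-1}| over the clip: lower means more stable.
    diffs = [np.abs(frames[t] - frames[t - 1]).mean()
             for t in range(1, len(frames))]
    return float(np.mean(diffs))

# A static 8x8 scene with small sensor noise: ideal output is constant.
rng = np.random.default_rng(0)
base = rng.uniform(size=(8, 8))
frames = [np.clip(base + rng.normal(scale=0.01, size=base.shape), 0, 1)
          for _ in range(10)]

# Independently stylized frames flicker.
raw = temporal_flicker([stylize(f) for f in frames])

# Naive stabilization: blend each stylized frame with the previous output
# (an exponential moving average), trading responsiveness for stability.
smoothed, prev = [], None
for f in frames:
    out = stylize(f) if prev is None else 0.2 * stylize(f) + 0.8 * prev
    smoothed.append(out)
    prev = out
stable = temporal_flicker(smoothed)
```

A plain moving average like this tends to wash out the per-frame style over time, which is the long-term style-preservation problem the paper targets.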

CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer

no code implementations • 24 Aug 2020 • Robin Kips, Pietro Gori, Matthieu Perrot, Isabelle Bloch

While existing makeup style transfer models perform an image synthesis whose results cannot be explicitly controlled, the ability to modify makeup color continuously is a desirable property for virtual try-on applications.

Attribute · Image Generation +3
