1 code implementation • CVPR 2023 • Yuming Du, Robin Kips, Albert Pumarola, Sebastian Starke, Ali Thabet, Artsiom Sanakoyeu
A particular challenge is that only a sparse tracking signal is available from standalone HMDs (Head Mounted Devices), often limited to tracking the user's head and wrists.
no code implementations • 12 May 2022 • Robin Kips, Ruowei Jiang, Sileye Ba, Brendan Duke, Matthieu Perrot, Pietro Gori, Isabelle Bloch
In this paper we propose a novel framework based on deep learning to build a real-time inverse graphics encoder that learns to map a single example image into the parameter space of a given augmented reality rendering engine.
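The paper's actual encoder is a deep network trained against a rendering engine; as a minimal, purely illustrative sketch of the idea (one example image in, a vector of rendering parameters out), here is a toy forward pass with hypothetical dimensions and randomly initialized weights — nothing here reflects the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened 32x32 RGB example image is mapped
# to a small vector of rendering-engine parameters (e.g. color, gloss).
IMG_DIM, HIDDEN, N_PARAMS = 32 * 32 * 3, 64, 4

# Toy encoder weights; in the paper's framework these would be learned so
# that rendering the predicted parameters reproduces the example image.
W1 = rng.standard_normal((IMG_DIM, HIDDEN)) * 0.01
W2 = rng.standard_normal((HIDDEN, N_PARAMS)) * 0.01

def encode(image: np.ndarray) -> np.ndarray:
    """Map one example image to rendering parameters squashed into [0, 1]."""
    h = np.maximum(image.reshape(-1) @ W1, 0.0)      # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))           # sigmoid keeps params valid

params = encode(rng.random((32, 32, 3)))
print(params.shape)
```

Once trained, such an encoder runs as a single forward pass, which is what makes the real-time mapping from image to renderer parameters feasible.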
no code implementations • 8 Feb 2022 • Robin Kips, Panagiotis-Alexandros Bokaris, Matthieu Perrot, Pietro Gori, Isabelle Bloch
Since rendering realistic hair images requires path tracing, the conventional inverse graphics approach based on differentiable rendering is intractable.
no code implementations • 12 May 2021 • Robin Kips, Ruowei Jiang, Sileye Ba, Edmund Phung, Parham Aarabi, Pietro Gori, Matthieu Perrot, Isabelle Bloch
While makeup virtual-try-on is now widespread, parametrizing a computer graphics rendering engine for synthesizing images of a given cosmetics product remains a challenging task.
no code implementations • 12 Mar 2021 • Hugo Thimonier, Julien Despois, Robin Kips, Matthieu Perrot
When image-trained algorithms are applied independently to successive video frames, noxious flickering artifacts tend to appear.
no code implementations • 24 Aug 2020 • Robin Kips, Pietro Gori, Matthieu Perrot, Isabelle Bloch
While existing makeup style transfer models perform an image synthesis whose results cannot be explicitly controlled, the ability to modify makeup color continuously is a desirable property for virtual try-on applications.
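The paper achieves this control with a color-conditioned generative model; purely to illustrate what "modifying makeup color continuously" means as a user-facing knob (and not the authors' method), here is a hypothetical linear blend between a base lip tone and a target shade driven by a scalar intensity:

```python
import numpy as np

def blend_makeup_color(base_rgb, target_rgb, alpha):
    """Continuously interpolate a makeup color: alpha=0 keeps the base tone,
    alpha=1 applies the full target shade (hypothetical control parameter)."""
    base = np.asarray(base_rgb, dtype=float)
    target = np.asarray(target_rgb, dtype=float)
    return (1.0 - alpha) * base + alpha * target

# Half-strength red lipstick over a neutral lip tone.
print(blend_makeup_color([200, 150, 140], [220, 30, 60], 0.5))
```

Sliding `alpha` through [0, 1] yields the kind of smooth, explicitly controllable color change that a plain style-transfer model cannot expose.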