no code implementations • 4 Apr 2022 • Liqian Ma, Lingjie Liu, Christian Theobalt, Luc van Gool
In addition, DDP is computationally more efficient than previous dense pose estimation methods, and it reduces jitter when applied to a video sequence, a problem that plagues previous methods.
no code implementations • 20 Jan 2022 • Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Diogo Luvizon, Christian Theobalt
Specifically, we first generate pseudo labels for the EgoPW dataset with a spatio-temporal optimization method by incorporating the external-view supervision.
no code implementations • 9 Dec 2021 • Viktor Rudnev, Mohamed Elgharib, William Smith, Lingjie Liu, Vladislav Golyanik, Christian Theobalt
Photorealistic editing of outdoor scenes from photographs requires a profound understanding of the image formation process and an accurate estimation of the scene geometry, reflectance and illumination.
no code implementations • ICCV 2021 • Tao Hu, Kripasindhu Sarkar, Lingjie Liu, Matthias Zwicker, Christian Theobalt
We next combine the target pose image and the textures into a combined feature image, which is transformed into the output color image using a neural image translation network.
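A minimal sketch of this kind of pipeline, under my own assumptions: the pose map, the texture-feature channels, and the toy UNet-free translator below are illustrative stand-ins, not the authors' architecture.

```python
# Sketch (assumptions): pose map + texture features are concatenated into a
# combined feature image and translated to RGB by a small convolutional net.
import torch
import torch.nn as nn

class TranslationNet(nn.Module):
    """Toy image-translation network mapping a combined feature image to RGB."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

pose_image = torch.rand(1, 3, 256, 256)          # rendering of the target pose
texture_features = torch.rand(1, 16, 256, 256)   # per-pixel texture features
combined = torch.cat([pose_image, texture_features], dim=1)  # feature image
rgb = TranslationNet(in_ch=3 + 16)(combined)     # (1, 3, 256, 256) output color
```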
1 code implementation • ICLR 2022 • Jiatao Gu, Lingjie Liu, Peng Wang, Christian Theobalt
To address the first issue, we perform volume rendering only to produce a low-resolution feature map and progressively apply upsampling in 2D.
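As a rough illustration of that idea, here is a hedged sketch: `render_features`, the feature dimension, and the upsampling blocks are hypothetical placeholders, not the paper's renderer.

```python
# Sketch (assumptions): expensive volume rendering produces only a 32x32
# feature map; cheap 2D convolutional upsampling brings it to image resolution.
import torch
import torch.nn as nn

def render_features(rays, res=32, feat_dim=32):
    # Stand-in for volume rendering that returns a low-resolution feature map
    # instead of a full-resolution RGB image.
    return torch.rand(1, feat_dim, res, res)

class Upsampler2D(nn.Module):
    """Progressively doubles spatial resolution in 2D (e.g. 32 -> 256)."""
    def __init__(self, feat_dim=32, num_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.LeakyReLU(0.2),
            )
            for _ in range(num_stages)
        )
        self.to_rgb = nn.Conv2d(feat_dim, 3, 1)

    def forward(self, feats):
        for stage in self.stages:
            feats = stage(feats)
        return self.to_rgb(feats)

low_res = render_features(rays=None)   # expensive 3D step, only 32x32
image = Upsampler2D()(low_res)         # cheap 2D steps up to 256x256
```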
1 code implementation • 28 Jul 2021 • Yuan Liu, Sida Peng, Lingjie Liu, Qianqian Wang, Peng Wang, Christian Theobalt, Xiaowei Zhou, Wenping Wang
On such a 3D point, these generalization methods will include inconsistent image features from invisible views, which interfere with the radiance field construction.
1 code implementation • 21 Jul 2021 • Runnan Chen, Yuexin Ma, Nenglun Chen, Lingjie Liu, Zhiming Cui, Yanhong Lin, Wenping Wang
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial for assessing and quantifying anatomical abnormalities in 3D cephalometric analysis.
4 code implementations • NeurIPS 2021 • Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, Wenping Wang
In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.
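A simplified sketch of the underlying idea (my paraphrase under stated assumptions, not the official NeuS code): SDF samples along a ray are mapped through a logistic CDF to opacities, so the rendering weights concentrate near the zero-level set of the SDF.

```python
# Sketch (assumptions): single-ray weighting with a logistic CDF of the SDF;
# sample spacing, the scale s, and the final blend are illustrative.
import torch

def sdf_to_weights(sdf, s=64.0):
    """sdf: (n_samples,) signed distances at ordered samples along one ray."""
    cdf = torch.sigmoid(s * sdf)                               # logistic CDF
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-6)).clamp(min=0.0)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha]), dim=0)[:-1]
    return trans * alpha                                       # rendering weights

sdf = torch.linspace(0.5, -0.5, 64)   # ray crossing the surface (SDF sign flip)
weights = sdf_to_weights(sdf)         # peaks near the zero crossing
color = (weights[:, None] * torch.rand(63, 3)).sum(dim=0)  # weighted color blend
```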
no code implementations • 3 Jun 2021 • Lingjie Liu, Marc Habermann, Viktor Rudnev, Kripasindhu Sarkar, Jiatao Gu, Christian Theobalt
To address this problem, we utilize a coarse body model as the proxy to unwarp the surrounding 3D space into a canonical pose.
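A hedged sketch of such an unwarping step, assuming a simplified nearest-vertex inverse mapping (the paper's actual deformation model may differ): a query point near the posed body is carried into the canonical pose via its closest proxy vertex.

```python
# Sketch (assumption): nearest-vertex unwarping with a coarse body proxy.
import numpy as np

def unwarp_to_canonical(query, posed_verts, canon_verts):
    """query: (3,) point in posed space; *_verts: (V, 3) coarse body model."""
    idx = np.argmin(np.linalg.norm(posed_verts - query, axis=1))  # nearest vertex
    offset = query - posed_verts[idx]          # local offset in posed space
    return canon_verts[idx] + offset           # reattach offset in canonical space

posed_verts = np.random.rand(6890, 3)          # e.g. a coarse SMPL-like proxy
canon_verts = np.random.rand(6890, 3)
x_canon = unwarp_to_canonical(np.array([0.1, 0.2, 0.3]), posed_verts, canon_verts)
```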
no code implementations • 28 May 2021 • Runnan Chen, Yuexin Ma, Lingjie Liu, Nenglun Chen, Zhiming Cui, Guodong Wei, Wenping Wang
The global shape constraint is an inherent property of anatomical landmarks that provides valuable guidance for more consistent pseudo-labelling of the unlabeled data, yet it is ignored by previous semi-supervised methods.
no code implementations • 4 May 2021 • Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery.
no code implementations • ICCV 2021 • Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Christian Theobalt
Furthermore, these methods suffer from limited accuracy and temporal instability due to ambiguities caused by the monocular setup and the severe occlusion in a strongly distorted egocentric perspective.
no code implementations • ICCV 2021 • Linjie Lyu, Marc Habermann, Lingjie Liu, Mallikarjun B R, Ayush Tewari, Christian Theobalt
Differentiable rendering has received increasing interest for image-based inverse problems.
1 code implementation • ICCV 2021 • Xiaoxiao Long, Cheng Lin, Lingjie Liu, Wei Li, Christian Theobalt, Ruigang Yang, Wenping Wang
We present a novel method for single image depth estimation using surface normal constraints.
no code implementations • 11 Mar 2021 • Kripasindhu Sarkar, Lingjie Liu, Vladislav Golyanik, Christian Theobalt
We address these limitations and present a generative model for images of dressed humans offering control over pose, local body part appearance and garment style.
no code implementations • 22 Feb 2021 • Kripasindhu Sarkar, Vladislav Golyanik, Lingjie Liu, Christian Theobalt
Photo-realistic re-rendering of a human from a single image with explicit control over body pose, shape and appearance enables a wide range of applications, such as human appearance transfer, virtual try-on, motion imitation, and novel view synthesis.
no code implementations • 13 Feb 2021 • Ikhsanul Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel, Gerard Pons-Moll, Mohamed Elgharib, Christian Theobalt
We propose the first approach to automatically and jointly synthesize both the synchronous 3D conversational body and hand gestures, as well as 3D face and head animations, of a virtual character from speech input.
no code implementations • CVPR 2021 • Jae Shin Yoon, Lingjie Liu, Vladislav Golyanik, Kripasindhu Sarkar, Hyun Soo Park, Christian Theobalt
We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.
no code implementations • CVPR 2021 • Yuan Liu, Lingjie Liu, Cheng Lin, Zhen Dong, Wenping Wang
We propose a novel formulation that fits coherent motions with a smooth function on a graph of correspondences, and show that this formulation admits a closed-form solution via the graph Laplacian.
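For intuition, a minimal sketch of a generic graph-Laplacian-regularized least-squares fit with a closed-form solution; the objective, weights, and variable names here are assumptions, not the paper's exact formulation.

```python
# Sketch (assumptions): smooth motions x are fit to noisy observations y on a
# correspondence graph with affinities W, solved in closed form.
import numpy as np

def fit_smooth_on_graph(W, y, lam=1.0):
    """W: (n, n) symmetric affinities between correspondences,
    y: (n, d) observed motions, lam: smoothness weight."""
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian L = D - W
    # minimize ||x - y||^2 + lam * tr(x^T L x)  =>  (I + lam * L) x = y
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # tiny correspondence graph
y = np.array([[1.0, 0.0], [0.5, 0.1], [0.0, 0.2]])        # noisy 2D motions
x_smooth = fit_smooth_on_graph(W, y)                      # coherent motions
```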
1 code implementation • CVPR 2021 • Xiaoxiao Long, Lingjie Liu, Wei Li, Christian Theobalt, Wenping Wang
We present a novel method for multi-view depth estimation from a single video, which is a critical task in various applications, such as perception, reconstruction and robot navigation.
1 code implementation • 22 Oct 2020 • Cheng Lin, Lingjie Liu, Changjian Li, Leif Kobbelt, Bin Wang, Shiqing Xin, Wenping Wang
Segmenting arbitrary 3D objects into constituent parts that are structurally meaningful is a fundamental problem encountered in a wide range of computer graphics applications.
1 code implementation • NeurIPS 2020 • Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, Christian Theobalt
We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering.
no code implementations • 7 May 2020 • Peng Wang, Lingjie Liu, Nenglun Chen, Hung-Kuo Chu, Christian Theobalt, Wenping Wang
We propose the first approach that simultaneously estimates camera motion and reconstructs the geometry of complex 3D thin structures in high quality from a color video captured by a handheld camera.
no code implementations • 13 Apr 2020 • Zhaoqi Su, Weilin Wan, Tao Yu, Lingjie Liu, Lu Fang, Wenping Wang, Yebin Liu
We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning.
1 code implementation • ECCV 2020 • Xiaoxiao Long, Lingjie Liu, Christian Theobalt, Wenping Wang
We present a new learning-based method for multi-frame depth estimation from a color video, which is a fundamental problem in scene understanding, robot navigation or handheld 3D reconstruction.
1 code implementation • CVPR 2020 • Nenglun Chen, Lingjie Liu, Zhiming Cui, Runnan Chen, Duygu Ceylan, Changhe Tu, Wenping Wang
The 3D structure points produced by our method encode the shape structure intrinsically and exhibit semantic consistency across all the shape instances with similar structures.
no code implementations • 14 Jan 2020 • Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt
In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.
no code implementations • 11 Sep 2018 • Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt
In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person.