Search Results for author: Anqi Pang

Found 5 papers, 3 papers with code

Neural Free-Viewpoint Performance Rendering under Complex Human-object Interactions

no code implementations • 1 Aug 2021 • Guoxing Sun, Xin Chen, Yizhang Chen, Anqi Pang, Pei Lin, Yuheng Jiang, Lan Xu, Jingya Wang, Jingyi Yu

In this paper, we propose a neural human performance capture and rendering system that generates both high-quality geometry and photo-realistic texture of humans and objects under challenging interaction scenarios, in arbitrary novel views, from only sparse RGB streams.

Tasks: 4D Reconstruction, Dynamic Reconstruction (+5 more)

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos

no code implementations • 14 Jul 2021 • Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu

To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy to generate photo-realistic free-view output of human activities.

Tasks: Neural Rendering

SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos

1 code implementation • 23 Apr 2021 • Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu

In this paper, we propose SportsCap, the first approach for simultaneously capturing 3D human motions and understanding fine-grained actions from challenging monocular sports video input.

Tasks: Action Assessment, Attribute (+1 more)
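
The "simultaneously" in the SportsCap abstract implies a multi-task design. The sketch below shows one generic way such a network could be wired: a shared temporal encoder feeding a per-frame 3D pose head and a clip-level action head. All module names, dimensions, and the architecture itself are illustrative assumptions, not the authors' implementation.

```python
# Illustrative multi-task sketch (NOT the official SportsCap code): a shared
# temporal encoder feeding a 3D motion head and a fine-grained action head.
import torch
import torch.nn as nn

class MultiTaskSportsNet(nn.Module):
    def __init__(self, feat_dim=512, num_joints=24, num_actions=20):
        super().__init__()
        # Shared temporal encoder over per-frame features (assumed input:
        # [batch, time, feat_dim] image features from any 2D backbone).
        self.temporal = nn.GRU(feat_dim, 256, batch_first=True)
        # Head 1: per-frame 3D joint positions (num_joints x 3).
        self.pose_head = nn.Linear(256, num_joints * 3)
        # Head 2: clip-level fine-grained action logits.
        self.action_head = nn.Linear(256, num_actions)

    def forward(self, feats):
        h, _ = self.temporal(feats)           # [B, T, 256]
        pose = self.pose_head(h)              # [B, T, J*3] per-frame 3D pose
        action = self.action_head(h.mean(1))  # [B, num_actions] clip label
        return pose, action

net = MultiTaskSportsNet()
dummy = torch.randn(2, 16, 512)               # 2 clips, 16 frames each
pose, action = net(dummy)
print(pose.shape, action.shape)               # [2, 16, 72], [2, 20]
```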

ChallenCap: Monocular 3D Capture of Challenging Human Performances using Multi-Modal References

2 code implementations • CVPR 2021 • Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, Lan Xu

We propose a hybrid motion inference stage built on a generation network: a temporal encoder-decoder extracts motion details from the pair-wise sparse-view references, while a motion discriminator leverages the unpaired marker-based references to capture challenging motion characteristics in a data-driven manner.
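
To make the generator/discriminator split above concrete, here is a minimal PyTorch sketch of a temporal encoder-decoder that refines a motion sequence plus a motion discriminator that scores sequence realism. The pose dimension, layer choices, and names are assumptions for illustration only, not the ChallenCap implementation.

```python
# Hedged sketch of the hybrid idea: encoder-decoder refines motion details,
# discriminator judges realism against marker-based motion data.
import torch
import torch.nn as nn

POSE_DIM = 72  # assumed per-frame pose parameters (e.g. SMPL-like)

class MotionEncoderDecoder(nn.Module):
    """Encodes a noisy motion sequence and decodes a refined sequence."""
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(POSE_DIM, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, POSE_DIM)

    def forward(self, motion):                 # [B, T, POSE_DIM]
        z, _ = self.encoder(motion)
        h, _ = self.decoder(z)
        return self.out(h)                     # refined motion, same shape

class MotionDiscriminator(nn.Module):
    """Scores whether a motion sequence looks like real motion data."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(POSE_DIM, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, motion):
        h, _ = self.rnn(motion)
        return torch.sigmoid(self.score(h[:, -1]))  # [B, 1] realism score

gen, disc = MotionEncoderDecoder(), MotionDiscriminator()
noisy = torch.randn(4, 30, POSE_DIM)           # 4 sequences, 30 frames each
refined = gen(noisy)
print(refined.shape, disc(refined).shape)      # [4, 30, 72], [4, 1]
```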

TightCap: 3D Human Shape Capture with Clothing Tightness Field

1 code implementation • 4 Apr 2019 • Xin Chen, Anqi Pang, Wei Yang, Lan Xu, Jingyi Yu

In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation.

Tasks: Virtual Try-on
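
One intuitive reading of a "clothing tightness field" is a per-vertex scalar measuring how far the dressed scan sits from the underlying body surface. The sketch below computes such a field via nearest-neighbor distances; this is an interpretation for intuition under that assumption, not the paper's actual formulation or code.

```python
# Hedged illustration of a per-vertex tightness field: distance from each
# dressed-scan vertex to the nearest point on an (assumed known) body surface.
import numpy as np
from scipy.spatial import cKDTree

def tightness_field(scan_vertices, body_vertices):
    """Per-vertex distance from the clothed scan to the nearest body point.

    scan_vertices: (N, 3) vertices of the dressed 3D human scan.
    body_vertices: (M, 3) densely sampled points on the underlying body.
    Returns (N,) non-negative tightness values (0 = skin-tight).
    """
    tree = cKDTree(body_vertices)
    dist, _ = tree.query(scan_vertices)  # nearest-neighbor distances
    return dist

# Toy usage with random point clouds standing in for real meshes.
scan = np.random.rand(1000, 3)
body = np.random.rand(5000, 3) * 0.95
print(tightness_field(scan, body).shape)  # (1000,)
```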
