Search Results for author: Shenhan Qian

Found 8 papers, 5 papers with code

GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians

1 code implementation • 4 Dec 2023 • Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, Matthias Nießner

We introduce GaussianAvatars, a new method to create photorealistic head avatars that are fully controllable in terms of expression, pose, and viewpoint.

Face Model

NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads

no code implementations • 4 May 2023 • Tobias Kirschstein, Shenhan Qian, Simon Giebenhain, Tim Walter, Matthias Nießner

We focus on reconstructing high-fidelity radiance fields of human heads, capturing their animations over time, and synthesizing re-renderings from novel viewpoints at arbitrary time steps.

Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces

1 code implementation • 31 Aug 2022 • YiHao Zhi, Shenhan Qian, Xinhao Yan, Shenghua Gao

Previous methods alleviate the inconsistency of lighting by learning a per-frame embedding, but this operation does not generalize to unseen poses.

UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation

1 code implementation • 20 Jul 2022 • Shenhan Qian, Jiale Xu, Ziwei Liu, Liqian Ma, Shenghua Gao

We propose united implicit functions (UNIF), a part-based method for clothed human reconstruction and animation that takes raw scans and skeletons as input.

Position

Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates

1 code implementation • ICCV 2021 • Shenhan Qian, Zhi Tu, YiHao Zhi, Wen Liu, Shenghua Gao

Co-speech gesture generation aims to synthesize a gesture sequence that not only looks realistic but also matches the input speech audio.

Gesture Generation

Learning to Recommend Frame for Interactive Video Object Segmentation in the Wild

1 code implementation • CVPR 2021 • Zhaoyuan Yin, Jia Zheng, Weixin Luo, Shenhan Qian, Hanling Zhang, Shenghua Gao

This paper proposes a framework for interactive video object segmentation (VOS) in the wild, where users iteratively choose frames to annotate.

Interactive Video Object Segmentation
