Search Results for author: Qing Shuai

Found 16 papers, 8 papers with code

Motion Capture from Internet Videos

2 code implementations ECCV 2020 Junting Dong, Qing Shuai, Yuanqing Zhang, Xian Liu, Xiaowei Zhou, Hujun Bao

We propose to capture human motion by jointly analyzing multiple Internet videos of the same person, rather than processing each video separately.

Pose Estimation

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

3 code implementations CVPR 2021 Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou

To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.

Novel View Synthesis · Representation Learning
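
The mechanism described above, latent codes anchored to a deformable mesh and shared by all frames, can be sketched in a few lines. This is not the released Neural Body code: the nearest-vertex lookup stands in for the paper's sparse-convolution code diffusion, and the class name, code dimension, and MLP size are illustrative assumptions (6890 is the SMPL vertex count).

```python
import torch
import torch.nn as nn

class NeuralBodySketch(nn.Module):
    """Minimal sketch: one latent code per mesh vertex, shared across frames.

    The posed mesh vertices (different every frame) anchor the same codes,
    so observations from all frames supervise a single set of parameters.
    """

    def __init__(self, num_vertices=6890, code_dim=16):
        super().__init__()
        # One learnable code per SMPL vertex, shared across all frames.
        self.codes = nn.Embedding(num_vertices, code_dim)
        # Tiny MLP mapping an interpolated code to density and RGB.
        self.mlp = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, 4),  # (sigma, r, g, b)
        )

    def forward(self, query_pts, posed_vertices):
        # query_pts: (N, 3) sample points along camera rays for this frame
        # posed_vertices: (V, 3) mesh vertices deformed to this frame's pose
        d = torch.cdist(query_pts, posed_vertices)   # (N, V) point-to-vertex distances
        nearest = d.argmin(dim=1)                    # index of closest vertex per sample
        feats = self.codes(nearest)                  # (N, code_dim) anchored codes
        out = self.mlp(feats)
        sigma, rgb = out[:, :1], torch.sigmoid(out[:, 1:])
        return sigma, rgb
```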

Reconstructing 3D Human Pose by Watching Humans in the Mirror

1 code implementation CVPR 2021 Qi Fang, Qing Shuai, Junting Dong, Hujun Bao, Xiaowei Zhou

In this paper, we introduce the new task of reconstructing 3D human pose from a single image in which we can see the person and the person's image through a mirror.

3D Pose Estimation
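
The underlying geometric constraint is compact enough to write out: a mirror plane with unit normal n and offset d maps a point p to p − 2(nᵀp + d)·n, so the real person and the person seen in the mirror must be reflections of each other. The sketch below only illustrates that reflection, not the paper's reconstruction pipeline.

```python
import numpy as np

def reflect_points(points, normal, d):
    """Reflect 3D points across the mirror plane {x : n.x + d = 0}.

    points: (N, 3) array, normal: (3,) plane normal, d: plane offset.
    The real body and its mirrored image are related by this mapping,
    which is the constraint the single-image reconstruction exploits.
    """
    normal = normal / np.linalg.norm(normal)
    signed_dist = points @ normal + d              # (N,) signed distance to the plane
    return points - 2.0 * signed_dist[:, None] * normal

# Example: a point 1 m in front of the mirror plane z = 0 maps to z = -1.
p = np.array([[0.3, 1.2, 1.0]])
print(reflect_points(p, np.array([0.0, 0.0, 1.0]), 0.0))  # -> [[ 0.3  1.2 -1. ]]
```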

Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies

1 code implementation ICCV 2021 Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, Hujun Bao

Moreover, the learned blend weight fields can be combined with input skeletal motions to generate new deformation fields to animate the human model.
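
A minimal sketch (not the released code) of how a learned blend weight field combines with a new skeletal motion via linear blend skinning to form a deformation field; the function signature and shapes are assumptions for illustration.

```python
import torch

def animate_canonical_points(x_canonical, blend_weights, joint_transforms):
    """Linear blend skinning: blend weights predicted at canonical points are
    combined with a target pose's per-joint rigid transforms to deform the model.

    x_canonical:      (N, 3)    points in the canonical (rest) space
    blend_weights:    (N, K)    per-point weights over K joints (rows sum to 1)
    joint_transforms: (K, 4, 4) rigid transforms of the input skeletal motion
    """
    ones = torch.ones_like(x_canonical[:, :1])
    x_h = torch.cat([x_canonical, ones], dim=1)                         # (N, 4) homogeneous
    # Blend the joint transforms per point, then apply them.
    T = torch.einsum("nk,kij->nij", blend_weights, joint_transforms)    # (N, 4, 4)
    x_posed = torch.einsum("nij,nj->ni", T, x_h)[:, :3]
    return x_posed
```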

Efficient Neural Radiance Fields for Interactive Free-viewpoint Video

no code implementations 2 Dec 2021 Haotong Lin, Sida Peng, Zhen Xu, Yunzhi Yan, Qing Shuai, Hujun Bao, Xiaowei Zhou

We propose a novel scene representation, called ENeRF, for the fast creation of interactive free-viewpoint videos.

Depth Estimation · Depth Prediction · +1

Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos

1 code implementation 15 Mar 2022 Sida Peng, Zhen Xu, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Hujun Bao, Xiaowei Zhou

Some recent works have proposed to decompose a non-rigidly deforming scene into a canonical neural radiance field and a set of deformation fields that map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images.

Reconstructing Hand-Held Objects from Monocular Video

no code implementations 30 Nov 2022 Di Huang, Xiaopeng Ji, Xingyi He, Jiaming Sun, Tong He, Qing Shuai, Wanli Ouyang, Xiaowei Zhou

The key idea is that the hand motion naturally provides multiple views of the object and the motion can be reliably estimated by a hand pose tracker.

Hand Pose Estimation · Object
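
The "hand motion provides multiple views" idea reduces to a pose composition under a rigid-grasp assumption: per-frame hand poses from a tracker give per-frame object-to-camera poses, turning one monocular video into a virtual multi-view rig. The helper below is hypothetical, not the paper's formulation.

```python
import numpy as np

def object_to_camera_poses(hand_to_camera, object_to_hand):
    """Compose per-frame hand poses with a fixed object-in-hand transform.

    hand_to_camera: (T, 4, 4) per-frame hand poses from a hand pose tracker.
    object_to_hand: (4, 4)    constant transform (rigid grasp assumption).
    Returns (T, 4, 4) object-to-camera poses, i.e. one "virtual view" per frame,
    on which standard multi-view reconstruction of the object can be run.
    """
    return hand_to_camera @ object_to_hand  # matmul broadcasts over the T frames
```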

Learning Analytical Posterior Probability for Human Mesh Recovery

1 code implementation CVPR 2023 Qi Fang, Kang Chen, Yinghui Fan, Qing Shuai, Jiefeng Li, Weidong Zhang

Despite various probabilistic methods for modeling the uncertainty and ambiguity in human mesh recovery, their overall precision is limited because existing formulations for joint rotations are either not constrained to SO(3) or difficult to learn for neural networks.

Human Mesh Recovery
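
The paper's analytical posterior is not reproduced here; the sketch below only shows one standard way to keep predicted joint rotations on SO(3), the 6D rotation representation with Gram-Schmidt orthogonalization, which is the constraint issue the excerpt refers to.

```python
import torch

def rotation_from_6d(x6):
    """Map an unconstrained 6-vector to a valid rotation matrix in SO(3).

    x6: (..., 6) raw network output. Gram-Schmidt on the two 3-vectors,
    plus a cross product, always yields an orthonormal, right-handed frame.
    """
    a, b = x6[..., :3], x6[..., 3:]
    r1 = torch.nn.functional.normalize(a, dim=-1)
    b = b - (r1 * b).sum(dim=-1, keepdim=True) * r1   # remove the component along r1
    r2 = torch.nn.functional.normalize(b, dim=-1)
    r3 = torch.cross(r1, r2, dim=-1)
    return torch.stack([r1, r2, r3], dim=-1)          # (..., 3, 3), columns r1, r2, r3

R = rotation_from_6d(torch.randn(10, 6))
assert torch.allclose(R @ R.transpose(-1, -2), torch.eye(3).expand(10, 3, 3), atol=1e-5)
```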

iVS-Net: Learning Human View Synthesis from Internet Videos

no code implementations ICCV 2023 Junting Dong, Qi Fang, Tianshuo Yang, Qing Shuai, Chengyu Qiao, Sida Peng

However, these methods usually rely on limited multi-view images collected in a studio or on commercial high-quality 3D scans for training, which severely limits their generalization to in-the-wild images.

Representing Volumetric Videos as Dynamic MLP Maps

no code implementations CVPR 2023 Sida Peng, Yunzhi Yan, Qing Shuai, Hujun Bao, Xiaowei Zhou

This paper introduces a novel representation of volumetric videos for real-time view synthesis of dynamic scenes.

Learning Human Mesh Recovery in 3D Scenes

no code implementations CVPR 2023 Zehong Shen, Zhi Cen, Sida Peng, Qing Shuai, Hujun Bao, Xiaowei Zhou

We present a novel method for recovering the absolute pose and shape of a human in a pre-scanned scene given a single image.

Human Mesh Recovery

Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields

no code implementations 24 Jul 2023 Shangzhan Zhang, Sida Peng, Yinji ShenTu, Qing Shuai, Tianrun Chen, Kaicheng Yu, Hujun Bao, Xiaowei Zhou

We extensively evaluate our approach on various scenes and show that our approach achieves spatially and temporally consistent editing results.

EasyVolcap: Accelerating Neural Volumetric Video Research

1 code implementation 11 Dec 2023 Zhen Xu, Tao Xie, Sida Peng, Haotong Lin, Qing Shuai, Zhiyuan Yu, Guangzhao He, Jiaming Sun, Hujun Bao, Xiaowei Zhou

Volumetric video is a technology that digitally records dynamic events such as artistic performances, sporting events, and remote conversations.

AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model

no code implementations 27 Jan 2024 Beijia Chen, Yuefan Shen, Qing Shuai, Xiaowei Zhou, Kun Zhou, Youyi Zheng

In this paper, we introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos (4-8 in our setting).

Reconstructing Close Human Interactions from Multiple Views

1 code implementation 29 Jan 2024 Qing Shuai, Zhiyuan Yu, Zhize Zhou, Lixin Fan, Haijun Yang, Can Yang, Xiaowei Zhou

This paper addresses the challenging task of reconstructing the poses of multiple individuals engaged in close interactions, captured by multiple calibrated cameras.

Pose Estimation
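
As a point of reference for the multi-camera setting described above (and not the paper's learning-based method), classical DLT triangulation of a keypoint observed by several calibrated cameras looks like this:

```python
import numpy as np

def triangulate_point(projection_matrices, points_2d):
    """Linear (DLT) triangulation of one keypoint seen by several cameras.

    projection_matrices: list of (3, 4) calibrated camera matrices P_i.
    points_2d:           list of (2,)  pixel observations (u_i, v_i).
    Returns the 3D point minimizing the algebraic error ||A X_h||.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, points_2d):
        rows.append(u * P[2] - P[0])   # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)                 # (2 * num_views, 4)
    _, _, vt = np.linalg.svd(A)
    X_h = vt[-1]                       # null-space direction of A
    return X_h[:3] / X_h[3]
```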
