Search Results for author: YiHao Zhi

Found 7 papers, 4 papers with code

MVHumanNet++: A Large-scale Dataset of Multi-view Daily Dressing Human Captures with Richer Annotations for 3D Human Digitization

no code implementations • 3 May 2025 Chenghong Li, Hongjie Liao, YiHao Zhi, Xihe Yang, Zhengwentai Sun, Jiahao Chang, Shuguang Cui, Xiaoguang Han

In this era, the success of large language models and text-to-image models can be attributed to the driving force of large-scale datasets.

Surfel-based Gaussian Inverse Rendering for Fast and Relightable Dynamic Human Reconstruction from Monocular Video

no code implementations • 21 Jul 2024 Yiqun Zhao, Chenming Wu, Binbin Huang, YiHao Zhi, Chen Zhao, Jingdong Wang, Shenghua Gao

Efficient and accurate reconstruction of a relightable, dynamic clothed human avatar from a monocular video is crucial for the entertainment industry.

Tasks: Disentanglement, Inverse Rendering

GauStudio: A Modular Framework for 3D Gaussian Splatting and Beyond

1 code implementation • 28 Mar 2024 Chongjie Ye, Yinyu Nie, Jiahao Chang, Yuantao Chen, YiHao Zhi, Xiaoguang Han

We present GauStudio, a novel modular framework for 3D Gaussian Splatting (3DGS) that provides standardized, plug-and-play components, letting users easily customize and implement a 3DGS pipeline.

Tasks: 3DGS, Novel View Synthesis, +1
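
For readers curious what "plug-and-play components" means in practice, here is a minimal sketch of a modular pipeline in that spirit. The stage names and state layout are illustrative assumptions, not GauStudio's actual API.

```python
from typing import Callable, Dict, List

# A stage is any callable that transforms the pipeline state dict.
Stage = Callable[[Dict], Dict]

class Pipeline:
    """Chains interchangeable stages so any step can be swapped independently."""
    def __init__(self, stages: List[Stage]):
        self.stages = stages

    def run(self, state: Dict) -> Dict:
        for stage in self.stages:
            state = stage(state)
        return state

def sfm_init(state: Dict) -> Dict:
    # Placeholder: seed Gaussians from SfM points (hypothetical stage).
    state["gaussians"] = list(state.get("points", []))
    return state

def splat_render(state: Dict) -> Dict:
    # Placeholder: rasterize the Gaussians to an image (hypothetical stage).
    state["image"] = f"rendered {len(state['gaussians'])} gaussians"
    return state

# Swapping sfm_init for another initializer changes nothing downstream --
# that interchangeability is the point of a modular 3DGS framework.
pipeline = Pipeline([sfm_init, splat_render])
print(pipeline.run({"points": [(0.0, 0.0, 0.0)]})["image"])
```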

LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation

1 code implementation • ICCV 2023 YiHao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, Shenghua Gao

While previous methods are able to generate speech rhythm-synchronized gestures, the generated gestures generally lack the semantic context of the speech.

Tasks: Gesture Generation, Rhythm

Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces

1 code implementation • 31 Aug 2022 YiHao Zhi, Shenhan Qian, Xinhao Yan, Shenghua Gao

Previous methods alleviate the inconsistency of lighting by learning a per-frame embedding, but this operation does not generalize to unseen poses.

Tasks: NeRF
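
The per-frame embedding the abstract refers to (in the spirit of the per-image appearance codes used by NeRF-in-the-wild-style models) can be sketched as follows. The dimensions and architecture are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class PerFrameNeRF(nn.Module):
    """Toy radiance field conditioned on a learned code per training frame."""
    def __init__(self, num_frames: int, embed_dim: int = 16):
        super().__init__()
        # One learnable code per training frame, absorbing frame-specific lighting.
        self.frame_codes = nn.Embedding(num_frames, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 4),  # RGB + density
        )

    def forward(self, xyz: torch.Tensor, frame_idx: torch.Tensor) -> torch.Tensor:
        code = self.frame_codes(frame_idx)               # (N, embed_dim)
        return self.mlp(torch.cat([xyz, code], dim=-1))  # (N, 4)

# At test time an unseen pose has no frame index, hence no matching code --
# the generalization gap the paper addresses by separating avatar and lighting.
model = PerFrameNeRF(num_frames=100)
out = model(torch.rand(8, 3), torch.zeros(8, dtype=torch.long))
print(out.shape)  # torch.Size([8, 4])
```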

Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates

1 code implementation • ICCV 2021 Shenhan Qian, Zhi Tu, YiHao Zhi, Wen Liu, Shenghua Gao

Co-speech gesture generation aims to synthesize a gesture sequence that not only looks realistic but also matches the input speech audio.

Tasks: Gesture Generation
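
As a frame of reference for the task setup (not the paper's learned-template method), a bare-bones audio-to-pose model might look like this. The feature and pose dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AudioToGesture(nn.Module):
    """Maps a sequence of audio features to a sequence of body poses."""
    def __init__(self, audio_dim: int = 64, pose_dim: int = 48, hidden: int = 128):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, pose_dim)  # one pose per audio frame

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        h, _ = self.encoder(audio)   # (B, T, hidden)
        return self.decoder(h)       # (B, T, pose_dim), e.g. joint rotations

model = AudioToGesture()
poses = model(torch.rand(2, 100, 64))  # 2 clips, 100 audio frames each
print(poses.shape)  # torch.Size([2, 100, 48])
```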
