Search Results for author: Haozhe Wu

Found 8 papers, 6 papers with code

Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space

1 code implementation • 11 Aug 2023 • Haoyu Wang, Haozhe Wu, Junliang Xing, Jia Jia

Creating realistic 3D facial animation is crucial for various applications in the movie production and gaming industries, especially with the burgeoning demand in the metaverse.

Motion Retargeting • Optical Flow Estimation

Speech-Driven 3D Face Animation with Composite and Regional Facial Movements

1 code implementation • 10 Aug 2023 • Haozhe Wu, Songtao Zhou, Jia Jia, Junliang Xing, Qi Wen, Xiang Wen

This paper emphasizes the importance of considering both the composite and regional natures of facial movements in speech-driven 3D face animation.

3D Face Animation

Shuffled Autoregression For Motion Interpolation

no code implementations • 10 Jun 2023 • Shuo Huang, Jia Jia, Zongxin Yang, Wei Wang, Haozhe Wu, Yi Yang, Junliang Xing

However, motion interpolation is a more complex problem that takes isolated poses (e.g., only one start pose and one end pose) as input.

Motion Interpolation

MMFace4D: A Large-Scale Multi-Modal 4D Face Dataset for Audio-Driven 3D Face Animation

1 code implementation • 17 Mar 2023 • Haozhe Wu, Jia Jia, Junliang Xing, Hongwei Xu, Xiangyuan Wang, Jelo Wang

Audio-driven face animation is an eagerly anticipated technique for applications such as VR/AR, games, and movie making.

3D Face Animation

Imitating Arbitrary Talking Style for Realistic Audio-Driven Talking Face Synthesis

1 code implementation • 30 Oct 2021 • Haozhe Wu, Jia Jia, Haoyu Wang, Yishun Dou, Chao Duan, Qingshan Deng

Due to such substantial differences between styles, it is necessary to incorporate the talking style into the audio-driven talking face synthesis framework.

Face Generation

ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit

no code implementations • 16 Sep 2020 • Zijie Ye, Haozhe Wu, Jia Jia, Yaohua Bu, Wei Chen, Fanbo Meng, Yan-Feng Wang

Meanwhile, human choreographers design dance motions from music in a two-stage manner: they first devise multiple choreographic action units (CAUs), each with a series of dance motions, and then arrange the CAU sequence according to the rhythm, melody, and emotion of the music.

Mining Unfollow Behavior in Large-Scale Online Social Networks via Spatial-Temporal Interaction

1 code implementation • 17 Nov 2019 • Haozhe Wu, Zhiyuan Hu, Jia Jia, Yaohua Bu, Xiangnan He, Tat-Seng Chua

Next, we divide user attributes into two categories: spatial attributes (e.g., a user's social role) and temporal attributes (e.g., a user's post content).

