no code implementations • 18 Jan 2025 • Linrui Tian, Siqi Hu, Qi Wang, Bang Zhang, Liefeng Bo
In the first stage, we generate hand poses directly from audio input, leveraging the strong correlation between audio signals and hand movements.
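The excerpt only names the first stage, so the following is a minimal illustrative sketch of the audio-to-hand-pose idea, not the paper's actual architecture; the mel-spectrogram input, GRU encoder, and 21-joint output are all assumptions made here for illustration.

```python
# Hypothetical sketch of mapping audio to hand poses: NOT the paper's model.
# Assumes 80-bin mel-spectrogram frames as input and 21 hand joints x 3D
# coordinates per frame as output.
import torch
import torch.nn as nn

class AudioToHandPose(nn.Module):
    """Maps a sequence of audio features to per-frame hand-pose vectors."""

    def __init__(self, n_mels: int = 80, hidden: int = 256, n_joints: int = 21):
        super().__init__()
        # Temporal encoder over audio frames (placeholder for whatever
        # audio backbone the actual method uses).
        self.encoder = nn.GRU(n_mels, hidden, num_layers=2, batch_first=True)
        # Regress 3D coordinates for every hand joint at each frame.
        self.head = nn.Linear(hidden, n_joints * 3)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, frames, n_mels) -> poses: (batch, frames, n_joints * 3)
        feats, _ = self.encoder(mel)
        return self.head(feats)

# Usage: a batch of 2 clips, 100 audio frames each.
model = AudioToHandPose()
poses = model(torch.randn(2, 100, 80))
print(poses.shape)  # torch.Size([2, 100, 63])
```

The point of the sketch is simply that the audio-to-pose correlation the excerpt mentions can be exploited as a direct sequence-to-sequence regression, with pose generation decoupled from video synthesis.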
no code implementations • 23 Jul 2024 • Ke Sun, Jian Cao, Qi Wang, Linrui Tian, Xindi Zhang, Lian Zhuo, Bang Zhang, Liefeng Bo, Wenbo Zhou, Weiming Zhang, Daiheng Gao
Specifically, these models struggle to balance control and consistency when generating virtual try-on images.
no code implementations • 27 Feb 2024 • Linrui Tian, Qi Wang, Bang Zhang, Liefeng Bo
In this work, we tackle the challenge of enhancing the realism and expressiveness of talking head video generation by focusing on the dynamic and nuanced relationship between audio cues and facial movements.
1 code implementation • ICCV 2023 • Lijun Li, Linrui Tian, Xindi Zhang, Qi Wang, Bang Zhang, Mengyuan Liu, Chen Chen
Current interacting hand (IH) datasets are relatively simplistic in background and texture; their hand joints are labeled by a machine annotator, which can introduce inaccuracies, and their pose distributions lack diversity.