Search Results for author: Linrui Tian

Found 4 papers, 1 paper with code

EMO2: End-Effector Guided Audio-Driven Avatar Video Generation

no code implementations · 18 Jan 2025 · Linrui Tian, Siqi Hu, Qi Wang, Bang Zhang, Liefeng Bo

In the first stage, we generate hand poses directly from audio input, leveraging the strong correlation between audio signals and hand movements (a hedged sketch of this stage follows this entry).

Gesture Generation · Video Generation
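EMO2 releases no code, so the following is only a minimal sketch of what such a first stage could look like: a sequence model mapping per-frame audio embeddings to hand pose parameters. The GRU backbone, module names, and all dimensions (e.g., 768-dimensional wav2vec-style audio features, 48-dimensional pose vectors) are assumptions for illustration, not the authors' method.

```python
# Hypothetical stage-1 sketch: audio features -> per-frame hand poses.
# All names and shapes are illustrative assumptions; EMO2 has no public code.
import torch
import torch.nn as nn


class AudioToHandPose(nn.Module):
    """Map a sequence of audio embeddings to a sequence of hand pose vectors."""

    def __init__(self, audio_dim=768, pose_dim=48, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden, batch_first=True)  # temporal encoder
        self.head = nn.Linear(hidden, pose_dim)                 # per-frame pose regressor

    def forward(self, audio_feats):
        # audio_feats: (batch, frames, audio_dim)
        h, _ = self.rnn(audio_feats)
        return self.head(h)  # (batch, frames, pose_dim)


if __name__ == "__main__":
    model = AudioToHandPose()
    audio = torch.randn(2, 100, 768)  # 2 clips, 100 frames of audio features
    poses = model(audio)
    print(poses.shape)  # torch.Size([2, 100, 48])
```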

OutfitAnyone: Ultra-high Quality Virtual Try-On for Any Clothing and Any Person

no code implementations · 23 Jul 2024 · Ke Sun, Jian Cao, Qi Wang, Linrui Tian, Xindi Zhang, Lian Zhuo, Bang Zhang, Liefeng Bo, Wenbo Zhou, Weiming Zhang, Daiheng Gao

Specifically, these models struggle to maintain a balance between control and consistency when generating images for virtual clothing trials.

Virtual Try-on

EMO: Emote Portrait Alive -- Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions

no code implementations · 27 Feb 2024 · Linrui Tian, Qi Wang, Bang Zhang, Liefeng Bo

In this work, we tackle the challenge of enhancing the realism and expressiveness in talking head video generation by focusing on the dynamic and nuanced relationship between audio cues and facial movements.

Video Generation

RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation

1 code implementation · ICCV 2023 · Lijun Li, Linrui Tian, Xindi Zhang, Qi Wang, Bang Zhang, Mengyuan Liu, Chen Chen

Current interacting hand (IH) datasets are relatively simplistic in background and texture; their hand joints are annotated by a machine annotator, which may introduce inaccuracies; and the diversity of their pose distributions is limited.

3D Interacting Hand Pose Estimation · Diversity · +1
