Search Results for author: Chenxu Zhang

Found 7 papers, 2 papers with code

Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion

no code implementations • 9 Apr 2024 • Fan Yang, Jianfeng Zhang, Yichun Shi, Bowen Chen, Chenxu Zhang, Huichao Zhang, Xiaofeng Yang, Jiashi Feng, Guosheng Lin

Benefiting from the rapid development of 2D diffusion models, 3D content creation has made significant progress recently.

3D Generation

Robust Active Speaker Detection in Noisy Environments

no code implementations • 27 Mar 2024 • Siva Sai Nagender Vasireddy, Chenxu Zhang, Xiaohu Guo, Yapeng Tian

Experiments demonstrate that non-speech audio noises significantly impact ASD models, and our proposed approach improves ASD performance in noisy environments.

Speech Separation

Sora Generates Videos with Stunning Geometrical Consistency

no code implementations • 27 Feb 2024 • XuanYi Li, Daquan Zhou, Chenxu Zhang, Shaodong Wei, Qibin Hou, Ming-Ming Cheng

We employ a method that transforms the generated videos into 3D models, leveraging the premise that the accuracy of 3D reconstruction is heavily contingent on the video quality.

3D Reconstruction · Video Generation

AvatarStudio: High-fidelity and Animatable 3D Avatar Creation from Text

no code implementations • 29 Nov 2023 • Jianfeng Zhang, Xuanmeng Zhang, Huichao Zhang, Jun Hao Liew, Chenxu Zhang, Yi Yang, Jiashi Feng

We study the problem of creating high-fidelity and animatable 3D avatars from only textual descriptions.

MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model

2 code implementations • 27 Nov 2023 • Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Hanshu Yan, Jia-Wei Liu, Chenxu Zhang, Jiashi Feng, Mike Zheng Shou

Existing animation works typically employ the frame-warping technique to animate the reference image towards the target motion.

Image Animation

FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning

1 code implementation • ICCV 2021 • Chenxu Zhang, Yifan Zhao, Yifei Huang, Ming Zeng, Saifeng Ni, Madhukar Budagavi, Xiaohu Guo

In this paper, we propose a talking face generation method that takes an audio signal as input and a short target video clip as reference, and synthesizes a photo-realistic video of the target face with natural lip motions, head poses, and eye blinks that are in-sync with the input audio signal.

3D Face Animation · Attribute +2
