Search Results for author: Yu-Hui Wen

Found 9 papers, 3 papers with code

Exploring Text-to-Motion Generation with Human Preference

no code implementations • 15 Apr 2024 • Jenny Sheng, Matthieu Lin, Andrew Zhao, Kevin Pruvost, Yu-Hui Wen, Yangguang Li, Gao Huang, Yong-Jin Liu

This paper presents an exploration of preference learning in text-to-motion generation.

Text-Image Conditioned Diffusion for Consistent Text-to-3D Generation

no code implementations • 19 Dec 2023 • Yuze He, Yushi Bai, Matthieu Lin, Jenny Sheng, Yubin Hu, Qi Wang, Yu-Hui Wen, Yong-Jin Liu

By lifting pre-trained 2D diffusion models into Neural Radiance Fields (NeRFs), text-to-3D generation methods have made great progress.

3D Generation • Text to 3D

DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models

no code implementations • 30 Sep 2023 • Zhiyao Sun, Tian Lv, Sheng Ye, Matthieu Gaetan Lin, Jenny Sheng, Yu-Hui Wen, MinJing Yu, Yong-Jin Liu

The generation of stylistic 3D facial animations driven by speech poses a significant challenge as it requires learning a many-to-many mapping between speech, style, and the corresponding natural facial motion.

Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement

1 code implementation • 14 Sep 2023 • Sheng Ye, Yubin Hu, Matthieu Lin, Yu-Hui Wen, Wang Zhao, Yong-Jin Liu, Wenping Wang

To enhance the normal priors, we introduce a simple yet effective image sharpening and denoising technique, coupled with a network that estimates the pixel-wise uncertainty of the predicted surface normal vectors.

Denoising • Indoor Scene Reconstruction
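
The entry above only names the enhancement step; here is a minimal sketch of one plausible reading, using a generic unsharp-mask sharpening pass followed by light denoising before a monocular normal estimator with an uncertainty head. The filters, parameters, and `normal_net` are illustrative assumptions, not the paper's actual components.

```python
import cv2
import numpy as np

def enhance_for_normal_prior(image_bgr: np.ndarray) -> np.ndarray:
    """Sharpen, then lightly denoise, an image before normal prediction.

    Generic unsharp-mask + non-local-means pipeline, assumed here for
    illustration; the paper's exact filters may differ.
    """
    img = image_bgr.astype(np.float32)
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)  # unsharp mask
    sharpened = np.clip(sharpened, 0, 255).astype(np.uint8)
    # Light denoising so sharpening does not amplify sensor noise.
    return cv2.fastNlMeansDenoisingColored(sharpened, None, h=3, hColor=3)

# Hypothetical usage with a normal estimator that also returns the per-pixel
# uncertainty map described in the abstract:
#   normals, uncertainty = normal_net(enhance_for_normal_prior(frame))
#   loss_weight = 1.0 / (uncertainty + 1e-6)
```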

O²-Recon: Completing 3D Reconstruction of Occluded Objects in the Scene with a Pre-trained 2D Diffusion Model

1 code implementation • 18 Aug 2023 • Yubin Hu, Sheng Ye, Wang Zhao, Matthieu Lin, Yuze He, Yu-Hui Wen, Ying He, Yong-Jin Liu

In this paper, we propose a novel framework, empowered by a 2D diffusion-based in-painting model, to reconstruct complete surfaces for the hidden parts of objects.

3D Reconstruction • Blocking
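
The O²-Recon entry above describes completing hidden object regions with a 2D diffusion-based in-painting model; below is a minimal sketch using an off-the-shelf Stable Diffusion inpainting pipeline from the diffusers library. The checkpoint name, prompt, and file paths are illustrative assumptions; the paper's actual model and conditioning may differ.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Off-the-shelf inpainting pipeline; the checkpoint is an assumption, not
# necessarily the one used by O²-Recon.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

view = Image.open("rendered_view.png").convert("RGB")          # rendered object view
occlusion_mask = Image.open("occlusion_mask.png").convert("L")  # white = hidden region

# Hallucinate a plausible appearance for the occluded part of the object;
# the completed views could then supervise surface reconstruction.
completed = pipe(
    prompt="a complete chair, photorealistic",  # illustrative prompt
    image=view,
    mask_image=occlusion_mask,
).images[0]
completed.save("completed_view.png")
```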

Continuously Controllable Facial Expression Editing in Talking Face Videos

no code implementations • 17 Sep 2022 • Zhiyao Sun, Yu-Hui Wen, Tian Lv, Yanan Sun, Ziyang Zhang, Yaoyuan Wang, Yong-Jin Liu

In this paper, we propose a high-quality facial expression editing method for talking face videos, allowing the user to control the target emotion in the edited video continuously.

Image-to-Image Translation • Video Generation

Dynamic Neural Textures: Generating Talking-Face Videos with Continuously Controllable Expressions

no code implementations • 13 Apr 2022 • Zipeng Ye, Zhiyao Sun, Yu-Hui Wen, Yanan Sun, Tian Lv, Ran Yi, Yong-Jin Liu

In this paper, we propose a method to generate talking-face videos with continuously controllable expressions in real-time.

Video Generation

PD-Flow: A Point Cloud Denoising Framework with Normalizing Flows

1 code implementation • 11 Mar 2022 • Aihua Mao, Zihui Du, Yu-Hui Wen, Jun Xuan, Yong-Jin Liu

By treating a noisy point cloud as a joint distribution of clean points and noise, denoised results can be derived by disentangling the noise component from the latent point representation, with the mapping between Euclidean and latent spaces modeled by normalizing flows.

Denoising • Disentanglement
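
PD-Flow is summarized above in a single sentence; the following is a minimal, untrained PyTorch sketch of the general idea (lift points into a latent space with an invertible flow, zero out the channels regarded as the noise factor, invert back). The coupling layers, channel split, and augmented dimensions are illustrative stand-ins, not PD-Flow's actual architecture.

```python
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """One RealNVP-style affine coupling layer (invertible)."""

    def __init__(self, dim: int = 6):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),  # predicts log-scale and shift
        )

    def forward(self, x):
        x1, x2 = x[..., :self.half], x[..., self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, z):
        z1, z2 = z[..., :self.half], z[..., self.half:]
        s, t = self.net(z1).chunk(2, dim=-1)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=-1)


class ToyFlow(nn.Module):
    """Two coupling layers with a channel flip in between; a toy stand-in
    for PD-Flow's point-cloud flow."""

    def __init__(self, dim: int = 6):
        super().__init__()
        self.c1, self.c2 = AffineCoupling(dim), AffineCoupling(dim)

    def forward(self, x):
        return self.c2(self.c1(x).flip(-1))

    def inverse(self, z):
        return self.c1.inverse(self.c2.inverse(z).flip(-1))


# Illustrative (untrained) denoising pass: lift noisy points to latent space,
# zero the channels treated as the "noise" factor, and invert the flow.
flow = ToyFlow(dim=6)
noisy_xyz = torch.randn(1024, 3)                          # toy noisy point cloud
x = torch.cat([noisy_xyz, torch.zeros(1024, 3)], dim=-1)  # augmented channels
with torch.no_grad():
    z = flow(x)
    z[..., 3:] = 0.0                                      # drop the noise component
    denoised_xyz = flow.inverse(z)[..., :3]
```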

Autoregressive Stylized Motion Synthesis With Generative Flow

no code implementations • CVPR 2021 • Yu-Hui Wen, Zhipeng Yang, Hongbo Fu, Lin Gao, Yanan Sun, Yong-Jin Liu

Motion style transfer is an important problem in many computer graphics and computer vision applications, including human animation, games, and robotics.

Motion Style Transfer • Style Transfer
