Search Results for author: HanYang Wang

Found 8 papers, 4 papers with code

ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model

no code implementations · 29 Aug 2024 · Fangfu Liu, Wenqiang Sun, HanYang Wang, Yikai Wang, Haowen Sun, Junliang Ye, Jun Zhang, Yueqi Duan

Advancements in 3D scene reconstruction have transformed 2D images from the real world into 3D models, producing realistic 3D results from hundreds of input photos.

3D Scene Reconstruction

Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion

no code implementations · 6 Jun 2024 · Fangfu Liu, HanYang Wang, Shunyu Yao, Shengjun Zhang, Jie Zhou, Yueqi Duan

In recent years, there has been rapid development in 3D generation models, opening up new possibilities for applications such as simulating the dynamic movements of 3D objects and customizing their behaviors.

3D Generation

Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image

1 code implementation · 30 May 2024 · Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, HanYang Wang, Yating Hu, Yueqi Duan, Kaisheng Ma

In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability.

Image to 3D · Single-View 3D Reconstruction · +1

Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation

no code implementations · 14 Mar 2024 · Fangfu Liu, HanYang Wang, Weiliang Chen, Haowen Sun, Yueqi Duan

Recent years have witnessed the strong power of 3D generation models, which offer a new level of creative flexibility by allowing users to guide the 3D content generation process through a single image or natural language.

3D Generation

A cross-modal fusion network based on self-attention and residual structure for multimodal emotion recognition

1 code implementation · 3 Nov 2021 · Ziwang Fu, Feng Liu, HanYang Wang, Jiayin Qi, Xiangling Fu, Aimin Zhou, Zhibin Li

First, we perform representation learning for the audio and video modalities, obtaining semantic features via an efficient ResNeXt and a 1D CNN, respectively.
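The cross-modal fusion this paper describes (self-attention plus a residual structure over per-modality features) can be sketched in a minimal form. The shapes, projection sizes, and single-head attention below are illustrative assumptions, not the paper's exact architecture; the per-modality feature extractors (ResNeXt, 1D CNN) are replaced here by pre-computed feature matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_fusion(audio_feats, video_feats, d_k=16, seed=0):
    """Fuse per-modality semantic features with single-head self-attention
    and a residual connection. Illustrative sketch only: weights are random
    stand-ins for learned parameters."""
    rng = np.random.default_rng(seed)
    # Concatenate modality sequences along the time axis: (T_a + T_v, d)
    x = np.concatenate([audio_feats, video_feats], axis=0)
    d = x.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Scaled dot-product attention lets audio steps attend to video steps
    # (and vice versa), which is the cross-modal interaction.
    attn = softmax(q @ k.T / np.sqrt(d_k))
    Wo = rng.standard_normal((d_k, d)) / np.sqrt(d_k)
    # Residual structure: add the attended features back onto the input.
    return x + (attn @ v) @ Wo

audio = np.random.default_rng(1).standard_normal((4, 32))  # 4 audio frames, 32-d
video = np.random.default_rng(2).standard_normal((6, 32))  # 6 video frames, 32-d
fused = self_attention_fusion(audio, video)
print(fused.shape)  # (10, 32): one fused feature per time step
```

In a trained model the projection matrices would be learned and the attention typically multi-head; the sketch only shows how concatenation, attention, and the residual add compose.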

Multimodal Emotion Recognition · Representation Learning

EvoGAN: An Evolutionary Computation Assisted GAN

1 code implementation · 22 Oct 2021 · Feng Liu, HanYang Wang, Jiahao Zhang, Ziwang Fu, Aimin Zhou, Jiayin Qi, Zhibin Li

Quantitative and qualitative results are presented on several compound expressions, and the experimental results demonstrate the feasibility and potential of EvoGAN.

Image Generation
