Search Results for author: Xiangjun Gao

Found 8 papers, 2 papers with code

StereoCrafter: Diffusion-based Generation of Long and High-fidelity Stereoscopic 3D from Monocular Videos

no code implementations · 11 Sep 2024 · Sijie Zhao, WenBo Hu, Xiaodong Cun, Yong Zhang, Xiaoyu Li, Zhe Kong, Xiangjun Gao, Muyao Niu, Ying Shan

This paper presents a novel framework for converting 2D videos to immersive stereoscopic 3D, addressing the growing demand for 3D content in immersive experiences.

Video Inpainting

ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis

1 code implementation · 3 Sep 2024 · Wangbo Yu, Jinbo Xing, Li Yuan, WenBo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, Yonghong Tian

Our method takes advantage of the powerful generation capabilities of video diffusion models and the coarse 3D clues offered by point-based representations to generate high-quality video frames with precise camera pose control.

3D Generation · 3D Reconstruction · +3

DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos

1 code implementation · 3 Sep 2024 · WenBo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xiaodong Cun, Yong Zhang, Long Quan, Ying Shan

Our training approach enables the model to generate depth sequences of variable length in a single pass, up to 110 frames, and to harvest both precise depth details and rich content diversity from realistic and synthetic datasets.

Diversity · Monocular Depth Estimation · +2

MagicMan: Generative Novel View Synthesis of Humans with 3D-Aware Diffusion and Iterative Refinement

no code implementations · 26 Aug 2024 · Xu He, Xiaoyu Li, Di Kang, Jiangnan Ye, Chaopeng Zhang, Liyang Chen, Xiangjun Gao, Han Zhang, Zhiyong Wu, Haolin Zhuang

Existing works in single-image human reconstruction suffer from weak generalizability due to insufficient training data, or from 3D inconsistencies owing to a lack of comprehensive multi-view knowledge.

3D Human Reconstruction · Novel View Synthesis

Mani-GS: Gaussian Splatting Manipulation with Triangular Mesh

no code implementations · 28 May 2024 · Xiangjun Gao, Xiaoyu Li, Yiyu Zhuang, Qi Zhang, WenBo Hu, Chaopeng Zhang, Yao Yao, Ying Shan, Long Quan

This approach reduces the need to design various algorithms for different types of Gaussian manipulation.

3DGS · NeRF · +1

ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis

no code implementations · CVPR 2024 · Xiangjun Gao, Xiaoyu Li, Chaopeng Zhang, Qi Zhang, YanPei Cao, Ying Shan, Long Quan

In this work, we propose a method to address the challenge of rendering a 3D human from a single image in a free-view manner.

HiFi-123: Towards High-fidelity One Image to 3D Content Generation

no code implementations · 10 Oct 2023 · Wangbo Yu, Li Yuan, Yan-Pei Cao, Xiangjun Gao, Xiaoyu Li, WenBo Hu, Long Quan, Ying Shan, Yonghong Tian

Our contributions are twofold: First, we propose a Reference-Guided Novel View Enhancement (RGNV) technique that significantly improves the fidelity of diffusion-based zero-shot novel view synthesis methods.

3D Generation · Image to 3D · +1

MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images

no code implementations · 31 Mar 2022 · Xiangjun Gao, Jiaolong Yang, Jongyoo Kim, Sida Peng, Zicheng Liu, Xin Tong

For this task, we propose a simple yet effective method to train a generalizable NeRF with multiview images as conditional input.

NeRF · Novel View Synthesis
