Search Results for author: Junliang Ye

Found 3 papers, 0 papers with code

ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model

No code implementations · 29 Aug 2024 · Fangfu Liu, Wenqiang Sun, HanYang Wang, Yikai Wang, Haowen Sun, Junliang Ye, Jun Zhang, Yueqi Duan

Advancements in 3D scene reconstruction have transformed 2D images from the real world into 3D models, producing realistic 3D results from hundreds of input photos.

Tasks: 3D Scene Reconstruction

DreamReward: Text-to-3D Generation with Human Preference

No code implementations · 21 Mar 2024 · Junliang Ye, Fangfu Liu, Qixiu Li, Zhengyi Wang, Yikai Wang, Xinzhou Wang, Yueqi Duan, Jun Zhu

Building upon the 3D reward model, we perform a theoretical analysis and present Reward3D Feedback Learning (DreamFL), a direct tuning algorithm that optimizes multi-view diffusion models with a redefined scorer.

Tasks: 3D Generation, Text to 3D (+1)

AnimatableDreamer: Text-Guided Non-rigid 3D Model Generation and Reconstruction with Canonical Score Distillation

No code implementations · 6 Dec 2023 · Xinzhou Wang, Yikai Wang, Junliang Ye, Zhengyi Wang, Fuchun Sun, Pengkun Liu, Ling Wang, Kai Sun, Xintong Wang, Bin He

Extensive experiments demonstrate our method's capability to generate highly flexible text-guided 3D models from monocular video, while also showing improved reconstruction performance over existing non-rigid reconstruction methods.

Tasks: 3D Generation, Denoising (+1)
