no code implementations • 30 Mar 2024 • Xiaoyang Lyu, Yang-tian Sun, Yi-Hua Huang, Xiuzhe Wu, ZiYi Yang, Yilun Chen, Jiangmiao Pang, Xiaojuan Qi
In this paper, we present an implicit surface reconstruction method with 3D Gaussian Splatting (3DGS), namely 3DGSR, that allows for accurate 3D reconstruction with intricate details while inheriting the high efficiency and rendering quality of 3DGS.
1 code implementation • 28 Mar 2024 • Xiaoyang Lyu, Chirui Chang, Peng Dai, Yang-tian Sun, Xiaojuan Qi
Scene reconstruction from multi-view images is a fundamental problem in computer vision and graphics.
no code implementations • 9 Mar 2024 • Xiuzhe Wu, Xiaoyang Lyu, Qihao Huang, Yong Liu, Yang Wu, Ying Shan, Xiaojuan Qi
Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
no code implementations • 24 Feb 2024 • ZiYi Yang, Xinyu Gao, Yangtian Sun, Yihua Huang, Xiaoyang Lyu, Wen Zhou, Shaohui Jiao, Xiaojuan Qi, Xiaogang Jin
The recent advancements in 3D Gaussian splatting (3D-GS) have not only facilitated real-time rendering through modern GPU rasterization pipelines but have also attained state-of-the-art rendering quality.
1 code implementation • 6 Feb 2024 • Xin Kong, Shikun Liu, Xiaoyang Lyu, Marwan Taher, Xiaojuan Qi, Andrew J. Davison
We introduce EscherNet, a multi-view conditioned diffusion model for view synthesis.
1 code implementation • 4 Dec 2023 • Yi-Hua Huang, Yang-tian Sun, ZiYi Yang, Xiaoyang Lyu, Yan-Pei Cao, Xiaojuan Qi
During learning, the location and number of control points are adaptively adjusted to accommodate varying motion complexities in different regions, and an as-rigid-as-possible (ARAP) loss is developed to enforce spatial continuity and local rigidity of the learned motions.
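The ARAP regularizer described above can be illustrated with a minimal NumPy sketch: for each control point, find the best-fit rotation (via SVD, Kabsch-style) between its rest-pose and deformed edge vectors, then penalize whatever residual the rotation cannot explain. This is a generic as-rigid-as-possible penalty, not the paper's exact formulation; the point arrays and neighbor lists are hypothetical.

```python
import numpy as np

def arap_loss(rest, deformed, neighbors):
    """ARAP-style loss over control points.

    rest, deformed : (N, 3) point positions before/after deformation.
    neighbors      : list of neighbor-index lists, one per point.

    For each point, the best-fit local rotation is recovered with the
    Kabsch algorithm; the loss is the residual after applying it, so a
    globally rigid motion scores ~0 and non-rigid distortion scores > 0.
    """
    total = 0.0
    for i, nbrs in enumerate(neighbors):
        E = rest[nbrs] - rest[i]          # rest edge vectors, (k, 3)
        Ed = deformed[nbrs] - deformed[i] # deformed edge vectors, (k, 3)
        H = E.T @ Ed                      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation, det = +1
        total += np.sum((Ed - E @ R.T) ** 2)     # rigidity residual
    return total / len(neighbors)
```

A purely rigid motion (rotation plus translation) drives this loss to zero, while stretching or shearing a local neighborhood leaves a positive residual, which is what makes it usable as a spatial-continuity regularizer.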
1 code implementation • ICCV 2023 • Xiuzhe Wu, Pengfei Hu, Yang Wu, Xiaoyang Lyu, Yan-Pei Cao, Ying Shan, Wenming Yang, Zhongqian Sun, Xiaojuan Qi
Therefore, directly learning a mapping function from speech to the entire head image is prone to ambiguity, particularly when using a short video for training.
1 code implementation • CVPR 2023 • Peng Dai, Yinda Zhang, Xin Yu, Xiaoyang Lyu, Xiaojuan Qi
Rendering novel view images is highly desirable for many applications.
1 code implementation • ICCV 2023 • Xiaoyang Lyu, Peng Dai, Zizhang Li, Dongyu Yan, Yi Lin, Yifan Peng, Xiaojuan Qi
We found that the color rendering loss results in optimization bias against low-intensity areas, causing gradient vanishing and leaving these areas unoptimized.
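The optimization bias described above can be seen with a toy calculation: the gradient of a squared color error is proportional to the absolute residual, so a dark pixel with the same *relative* error as a bright pixel produces a much weaker training signal. This is a minimal illustration of the general effect, not the paper's analysis; the pixel intensities are hypothetical.

```python
def l2_grad(pred, gt):
    """Gradient of the squared color error (pred - gt)^2 w.r.t. pred."""
    return 2.0 * (pred - gt)

# Two pixels with the *same* 10% relative error but different intensity.
dark_gt, bright_gt = 0.02, 0.8
g_dark = l2_grad(dark_gt * 1.1, dark_gt)        # gradient at the dark pixel
g_bright = l2_grad(bright_gt * 1.1, bright_gt)  # gradient at the bright pixel
ratio = abs(g_bright) / abs(g_dark)             # bright pixel gets ~40x the signal
```

Because the gradient scales with absolute rather than relative error, low-intensity regions receive vanishingly small updates and can remain unoptimized, which is the bias the paper identifies.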
1 code implementation • ICCV 2023 • Zizhang Li, Xiaoyang Lyu, Yuanyuan Ding, Mengmeng Wang, Yiyi Liao, Yong Liu
Recently, neural implicit surfaces have become popular for multi-view reconstruction.
no code implementations • 28 Feb 2023 • Dongyu Yan, Xiaoyang Lyu, Jieqi Shi, Yi Lin
Modeling scene geometry using implicit neural representation has revealed its advantages in accuracy, flexibility, and low memory usage.
no code implementations • 15 Dec 2020 • Lina Liu, Xibin Song, Xiaoyang Lyu, Junwei Diao, Mengmeng Wang, Yong Liu, Liangjun Zhang
Then, a refined depth map is obtained in the coarse-to-fine stage via a residual learning strategy that takes the coarse depth map and color image as input.
1 code implementation • 14 Dec 2020 • Xiaoyang Lyu, Liang Liu, Mengmeng Wang, Xin Kong, Lina Liu, Yong Liu, Xinxin Chen, Yi Yuan
To obtain more accurate depth estimation in large gradient regions, it is necessary to obtain high-resolution features with spatial and semantic information.
Ranked #7 on Unsupervised Monocular Depth Estimation on KITTI-C