Search Results for author: Xiaoyang Lyu

Found 13 papers, 8 papers with code

3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting

no code implementations 30 Mar 2024 Xiaoyang Lyu, Yang-tian Sun, Yi-Hua Huang, Xiuzhe Wu, ZiYi Yang, Yilun Chen, Jiangmiao Pang, Xiaojuan Qi

In this paper, we present an implicit surface reconstruction method with 3D Gaussian Splatting (3DGS), namely 3DGSR, that allows for accurate 3D reconstruction with intricate details while inheriting the high efficiency and rendering quality of 3DGS.

3D Reconstruction · Surface Reconstruction

DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos

no code implementations 9 Mar 2024 Xiuzhe Wu, Xiaoyang Lyu, Qihao Huang, Yong Liu, Yang Wu, Ying Shan, Xiaojuan Qi

Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
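The decomposition idea can be sketched in a few lines of plain Python (a hypothetical toy helper, not the DO3D module itself): a 3D point's position in the next frame is its ego-motion warp plus the object's own 3D motion, with static background points having zero object motion.

```python
def apply_ego(point, R, t):
    # Rigid camera ego-motion: p' = R @ p + t (3x3 rotation, 3-vector translation).
    return [sum(R[i][k] * point[k] for k in range(3)) + t[i] for i in range(3)]

def total_motion(point, R, t, object_motion):
    """Decomposition sketch (hypothetical, not DO3D's network): next-frame
    position = ego-motion warp + the object's own 3D motion."""
    warped = apply_ego(point, R, t)
    return [w + m for w, m in zip(warped, object_motion)]

R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.0, 0.0, -0.5]  # camera moves 0.5 m forward along z
static = total_motion([1.0, 0.0, 4.0], R, t, [0.0, 0.0, 0.0])   # background point
moving = total_motion([1.0, 0.0, 4.0], R, t, [0.25, 0.0, 0.0])  # object moves in x
print(static)  # [1.0, 0.0, 3.5]
print(moving)  # [1.25, 0.0, 3.5]
```

Separating the two terms is what lets the ego-motion and object-motion estimates be supervised and inspected independently.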

Depth Estimation · Disentanglement +5

Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting

no code implementations 24 Feb 2024 ZiYi Yang, Xinyu Gao, Yangtian Sun, Yihua Huang, Xiaoyang Lyu, Wen Zhou, Shaohui Jiao, Xiaojuan Qi, Xiaogang Jin

The recent advancements in 3D Gaussian splatting (3D-GS) have not only facilitated real-time rendering through modern GPU rasterization pipelines but have also attained state-of-the-art rendering quality.

SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes

1 code implementation 4 Dec 2023 Yi-Hua Huang, Yang-tian Sun, ZiYi Yang, Xiaoyang Lyu, Yan-Pei Cao, Xiaojuan Qi

During learning, the location and number of control points are adaptively adjusted to accommodate varying motion complexities in different regions, and an ARAP loss following the principle of as rigid as possible is developed to enforce spatial continuity and local rigidity of learned motions.
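The rigidity idea behind such an ARAP loss can be sketched with a simplified pairwise-distance (isometry) penalty; this is an illustrative assumption, not the paper's exact formulation:

```python
import math

def arap_style_loss(rest, deformed, edges):
    """Simplified rigidity penalty (isometry sketch, not SC-GS's exact ARAP
    loss): neighbouring control points should preserve their pairwise
    distances after deformation."""
    loss = 0.0
    for i, j in edges:
        d_rest = math.dist(rest[i], rest[j])
        d_def = math.dist(deformed[i], deformed[j])
        loss += (d_def - d_rest) ** 2
    return loss / len(edges)

rest = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
rigid = [(2.0, 0.0), (3.0, 0.0), (2.0, 1.0)]    # pure translation of the triangle
stretch = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]  # edge 0-1 doubled in length
edges = [(0, 1), (0, 2), (1, 2)]
print(arap_style_loss(rest, rigid, edges))    # 0.0: rigid motion is unpenalized
print(arap_style_loss(rest, stretch, edges))  # > 0: stretching is penalized
```

A rigid motion (rotation plus translation) leaves all pairwise distances unchanged, so the penalty is zero exactly when the local deformation is rigid.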

Novel View Synthesis

Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video

1 code implementation ICCV 2023 Xiuzhe Wu, Pengfei Hu, Yang Wu, Xiaoyang Lyu, Yan-Pei Cao, Ying Shan, Wenming Yang, Zhongqian Sun, Xiaojuan Qi

Therefore, directly learning a mapping function from speech to the entire head image is prone to ambiguity, particularly when using a short video for training.

Image Generation

Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation

1 code implementation ICCV 2023 Xiaoyang Lyu, Peng Dai, Zizhang Li, Dongyu Yan, Yi Lin, Yifan Peng, Xiaojuan Qi

We found that the color rendering loss results in optimization bias against low-intensity areas, causing gradient vanishing and leaving these areas unoptimized.
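The optimization bias can be illustrated with a toy calculation (a sketch of the mechanism, not the paper's analysis): under an L2 colour loss, the gradient scales with the absolute error, so the same relative error produces a much smaller gradient in low-intensity regions.

```python
def l2_grad(pred, target):
    # Gradient of (pred - target)^2 with respect to pred.
    return 2.0 * (pred - target)

bright_target, dark_target = 0.8, 0.05  # pixel intensities in [0, 1]
rel_err = 0.2  # the prediction overshoots both targets by 20%
g_bright = l2_grad(bright_target * (1 + rel_err), bright_target)
g_dark = l2_grad(dark_target * (1 + rel_err), dark_target)
print(g_bright, g_dark)  # the dark region's gradient is 16x smaller
```

With the gradient this much weaker in dark areas, those regions optimize far more slowly and can remain effectively unoptimized, which is the gradient-vanishing behaviour the excerpt describes.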

Neural Rendering · Surface Reconstruction

Efficient Implicit Neural Reconstruction Using LiDAR

no code implementations 28 Feb 2023 Dongyu Yan, Xiaoyang Lyu, Jieqi Shi, Yi Lin

Modeling scene geometry using implicit neural representation has revealed its advantages in accuracy, flexibility, and low memory usage.

3D Reconstruction

FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Depth Completion

no code implementations 15 Dec 2020 Lina Liu, Xibin Song, Xiaoyang Lyu, Junwei Diao, Mengmeng Wang, Yong Liu, Liangjun Zhang

Then, a refined depth map is further obtained using a residual learning strategy in the coarse-to-fine stage with a coarse depth map and color image as input.
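The coarse-to-fine residual strategy can be sketched as follows (a hypothetical toy version, not FCFR-Net's network): the fine stage predicts a per-pixel correction that is added to the coarse depth map, rather than regressing depth from scratch.

```python
def refine_depth(coarse, residual):
    """Residual-learning sketch (hypothetical helper, not FCFR-Net's model):
    refined depth = coarse depth + predicted per-pixel residual."""
    return [[c + r for c, r in zip(crow, rrow)]
            for crow, rrow in zip(coarse, residual)]

coarse = [[2.0, 2.5], [1.5, 3.0]]        # metres, from the coarse stage
residual = [[0.25, -0.5], [0.0, 0.125]]  # correction predicted by the fine stage
print(refine_depth(coarse, residual))    # [[2.25, 2.0], [1.5, 3.125]]
```

Predicting a residual keeps the fine stage's output small and zero-centred, which is generally easier to learn than absolute depth values.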

Depth Completion

HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation

1 code implementation 14 Dec 2020 Xiaoyang Lyu, Liang Liu, Mengmeng Wang, Xin Kong, Lina Liu, Yong Liu, Xinxin Chen, Yi Yuan

To obtain more accurate depth estimation in large gradient regions, it is necessary to obtain high-resolution features with spatial and semantic information.

Monocular Depth Estimation · Self-Supervised Learning +2
