1 code implementation • 19 Oct 2024 • Kun Wang, Zhiqiang Yan, Junkai Fan, Wanlu Zhu, Xiang Li, Jun Li, Jian Yang
In this paper, we introduce DCDepth, a novel framework for the long-standing monocular depth estimation task.
1 code implementation • 15 Oct 2024 • Zhengxue Wang, Zhiqiang Yan, Jinshan Pan, Guangwei Gao, Kai Zhang, Jian Yang
Recent RGB-guided depth super-resolution methods have achieved impressive performance under the assumption of fixed and known degradation (e.g., bicubic downsampling).
1 code implementation • 12 Sep 2024 • Yuan Wu, Zhiqiang Yan, Zhengxue Wang, Xiang Li, Le Hui, Jian Yang
MGHS projects the 2D image features into multiple subspaces, where each grid contains features within reasonable height ranges.
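The idea of grouping features into height-range subspaces can be illustrated with a minimal sketch. The bin edges, array shapes, and function name below are illustrative assumptions, not MGHS's actual configuration:

```python
import numpy as np

def split_by_height(points, feats, bins=(-1.0, 0.5, 2.0, 4.0)):
    """Toy sketch: partition per-point features into subspaces by the
    height (z) coordinate, one group per height range."""
    groups = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (points[:, 2] >= lo) & (points[:, 2] < hi)
        groups.append(feats[mask])  # features whose points fall in [lo, hi)
    return groups
```

Each returned group can then be processed independently, so features at very different heights (e.g., road surface vs. building facades) do not interfere.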
no code implementations • 25 May 2024 • Jiangwei Weng, Zhiqiang Yan, Ying Tai, Jianjun Qian, Jian Yang, Jun Li
In this paper, we introduce MambaLLIE, an implicit Retinex-aware low light enhancer featuring a global-then-local state space design.
no code implementations • CVPR 2024 • Zhiqiang Yan, Yuankai Lin, Kun Wang, Yupeng Zheng, YuFei Wang, Zhenyu Zhang, Jun Li, Jian Yang
Depth completion is a vital task for autonomous driving, as it involves reconstructing the precise 3D geometry of a scene from sparse and noisy depth measurements.
no code implementations • 21 Feb 2024 • Zhengxue Wang, Zhiqiang Yan, Ming-Hsuan Yang, Jinshan Pan, Guangwei Gao, Ying Tai, Jian Yang
Specifically, we design an All-in-one Prior Propagation that computes the similarity between multi-modal scene priors, i.e., RGB, normal, semantic, and depth, to reduce the texture interference.
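One way to read "computes the similarity between multi-modal scene priors to reduce texture interference" is as similarity-weighted fusion: priors that agree with the depth feature contribute more, texture-only priors less. The following is a minimal sketch under that assumption; the function name, shapes, and weighting scheme are illustrative, not the paper's actual design:

```python
import numpy as np

def prior_similarity(priors, depth_feat):
    """Fuse scene priors (e.g., RGB, normal, semantic features), each
    weighted by its cosine similarity to the depth feature."""
    fused = np.zeros_like(depth_feat)
    total = 0.0
    for p in priors:
        sim = np.dot(p.ravel(), depth_feat.ravel()) / (
            np.linalg.norm(p) * np.linalg.norm(depth_feat) + 1e-8)
        w = max(sim, 0.0)  # suppress priors that disagree with depth
        fused += w * p
        total += w
    return fused / (total + 1e-8)
```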
1 code implementation • 10 Dec 2023 • Zhengxue Wang, Zhiqiang Yan, Jian Yang
Recent image guided DSR approaches mainly focus on the spatial domain to rebuild depth structures.
no code implementations • 1 Sep 2023 • Zhiqiang Yan, Xiang Li, Le Hui, Zhenyu Zhang, Jun Li, Jian Yang
To tackle these challenges, we explore a repetitive design in our image guided network to gradually and sufficiently recover depth values.
no code implementations • 19 Aug 2023 • Kun Wang, Zhiqiang Yan, Huang Tian, Zhenyu Zhang, Xiang Li, Jun Li, Jian Yang
Neural Radiance Fields (NeRF) have shown promise in generating realistic novel views from sparse scene images.
1 code implementation • 26 Jun 2023 • Zhiqiang Yan, Yupeng Zheng, Chongyi Li, Jun Li, Jian Yang
Depth completion is the task of recovering dense depth maps from sparse ones, usually with the help of color images.
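The task setup described above can be stated in a few lines of code. This is only the task interface with a deliberately naive fill-in (mean of observed depths); real methods learn image-guided propagation, and the function name here is a hypothetical placeholder:

```python
import numpy as np

def naive_complete(sparse_depth):
    """Depth completion interface: given a sparse depth map where zero
    marks a missing measurement, return a dense map. Here, missing
    entries are filled with the mean of the observed depths."""
    valid = sparse_depth > 0
    if not valid.any():
        return sparse_depth.copy()
    dense = sparse_depth.copy()
    dense[~valid] = sparse_depth[valid].mean()
    return dense
```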
no code implementations • 8 Jun 2023 • Kun Wang, Zhiqiang Yan, Zhenyu Zhang, Xiang Li, Jun Li, Jian Yang
Our key contributions are: (1) We parameterize the geometry and appearance of the object using a multi-scale global feature extractor, which avoids frequent point-wise feature retrieval and camera dependency.
no code implementations • 20 Nov 2022 • Zhiqiang Yan, Kun Wang, Xiang Li, Zhenyu Zhang, Jun Li, Jian Yang
Unsupervised depth completion aims to recover dense depth from sparse measurements without using ground-truth annotations.
no code implementations • 30 Aug 2022 • Xiang Liu, Hongyuan Wang, Zhiqiang Yan, Yu Chen, Xinlong Chen, WeiChun Chen
In this way, background interference with foreground spacecraft depth completion is effectively avoided.
1 code implementation • 18 Mar 2022 • Zhiqiang Yan, Xiang Li, Kun Wang, Zhenyu Zhang, Jun Li, Jian Yang
To deal with the PDC task, we train a deep network that takes both depth and image as inputs for the dense panoramic depth recovery.
2 code implementations • ICCV 2021 • Kun Wang, Zhenyu Zhang, Zhiqiang Yan, Xiang Li, Baobei Xu, Jun Li, Jian Yang
Monocular depth estimation aims at predicting depth from a single image or video.
no code implementations • 29 Jul 2021 • Zhiqiang Yan, Kun Wang, Xiang Li, Zhenyu Zhang, Jun Li, Jian Yang
However, blurry guidance in the image and unclear structure in the depth still impede the performance of the image guided frameworks.
Ranked #2 on the KITTI Depth Completion benchmark
1 code implementation • 24 Nov 2020 • Hongyuan Wang, Xiang Liu, Wen Kang, Zhiqiang Yan, Bingwen Wang, Qianhao Ning
In the correspondence credibility computation module, we score the reliability of each correspondence based on the conflict between the feature matching matrix and the coordinate matching matrix, reducing the impact of mismatched or unmatched points.
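The scoring idea — a correspondence is reliable only when the feature-based and coordinate-based matching matrices agree — can be sketched as an element-wise agreement score. The matrices here are assumed row-normalized soft assignments of shape N×M; this formulation is an assumption for illustration, not the paper's exact computation:

```python
import numpy as np

def score_correspondences(feat_match, coord_match):
    """Score each source point's best correspondence by the agreement
    between a feature-similarity matching matrix and a coordinate-
    consistency matching matrix. Where the two conflict (one high,
    the other low), the product is small and the score drops."""
    agreement = feat_match * coord_match  # element-wise agreement
    return agreement.max(axis=1)          # per-source-point reliability
```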