no code implementations • 25 Nov 2024 • Yuan Zhou, Qingshan Xu, Jiequan Cui, Junbao Zhou, Jing Zhang, Richang Hong, Hanwang Zhang
In this paper, we propose a new deCoupled duAl-interactive lineaR attEntion (CARE) mechanism, revealing that feature decoupling and interaction can fully unleash the power of linear attention.
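The efficiency that CARE builds on comes from linear attention itself: replacing softmax with a kernel feature map lets attention be computed in O(N) by reassociating the matrix products. A minimal sketch of generic linear attention (not the specific CARE mechanism; the elu-based feature map is a common choice assumed here for illustration):

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: a common positive feature map in linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(N) attention: phi(Q) (phi(K)^T V) instead of softmax(Q K^T) V."""
    Qf, Kf = feature_map(Q), feature_map(K)        # (N, d)
    KV = Kf.T @ V                                  # (d, d_v), computed once
    Z = Qf @ Kf.sum(axis=0, keepdims=True).T       # (N, 1) normalizer
    return (Qf @ KV) / (Z + 1e-6)

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(8, 4)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)
```

The key design point is associativity: computing `Kf.T @ V` first costs O(N·d·d_v) instead of the O(N²) pairwise attention matrix.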
1 code implementation • 4 Nov 2024 • Kezheng Xiong, Haoen Xiang, Qingshan Xu, Chenglu Wen, Siqi Shen, Jonathan Li, Cheng Wang
Point cloud registration, a fundamental task in 3D vision, has achieved remarkable success with learning-based methods in outdoor environments.
no code implementations • 23 Oct 2024 • Qingshan Xu, Xuanyu Yi, Jianyao Xu, Wenbing Tao, Yew-Soon Ong, Hanwang Zhang
In this work, we reveal that there exists an inconsistency between the frequency regularization of PE and rendering loss.
1 code implementation • 10 Jun 2024 • Xuanyu Yi, Zike Wu, Qiuhong Shen, Qingshan Xu, Pan Zhou, Joo-Hwee Lim, Shuicheng Yan, Xinchao Wang, Hanwang Zhang
Recent 3D large reconstruction models (LRMs) can generate high-quality 3D content in under a second by integrating multi-view diffusion models with scalable multi-view reconstructors.
no code implementations • 6 Jun 2024 • Salvatore Esposito, Qingshan Xu, Kacper Kania, Charlie Hewitt, Octave Mariotti, Lohit Petikam, Julien Valentin, Arno Onken, Oisin Mac Aodha
We introduce a new generative approach for synthesizing 3D geometry and images from single-view collections.
no code implementations • 22 Apr 2024 • Hao Wang, Qingshan Xu, Hongyuan Chen, Rui Ma
In this work, we introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction.
1 code implementation • CVPR 2024 • Xuanyu Yi, Zike Wu, Qingshan Xu, Pan Zhou, Joo-Hwee Lim, Hanwang Zhang
Score distillation sampling (SDS) has been widely adopted to overcome the absence of unseen views in reconstructing 3D objects from a single image.
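For context, the standard SDS gradient (as commonly written in the score distillation literature; the notation below is the usual convention, not quoted from this paper) distills a pretrained diffusion prior into the 3D parameters:

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,
    \big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,
    \frac{\partial x}{\partial \theta} \right]
```

Here x is the rendered image, x_t its noised version at timestep t, y the conditioning input, ε the injected Gaussian noise, and ε̂_φ the pretrained diffusion model's noise prediction; w(t) is a timestep weighting.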
no code implementations • 19 Mar 2024 • Qingshan Xu, Jiao Liu, Melvin Wong, Caishun Chen, Yew-Soon Ong
However, existing generative methods mostly focus on geometric or visual plausibility while ignoring precise physics perception for the generated 3D shapes.
no code implementations • 22 Feb 2024 • Renyi Mao, Qingshan Xu, Peng Zheng, Ye Wang, Tieru Wu, Rui Ma
In this paper, we aim for both fast and high-quality implicit field learning, and propose TaylorGrid, a novel implicit field representation which can be efficiently computed via direct Taylor expansion optimization on 2D or 3D grids.
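The core idea of a Taylor-expansion grid can be illustrated with a first-order variant: each cell stores a value and a gradient at its center, and a query point evaluates the cell's Taylor expansion. This is a simplified sketch under assumptions (nearest-cell lookup on a 2D grid; the actual method blends expansions and learns the coefficients):

```python
import numpy as np

class TaylorGrid2D:
    """First-order Taylor field: f(x) ~ f_c + g_c . (x - c) per grid cell."""
    def __init__(self, res, extent=1.0):
        self.res, self.extent = res, extent
        self.values = np.zeros((res, res))     # f at each cell center
        self.grads = np.zeros((res, res, 2))   # gradient at each cell center

    def centers(self, i, j):
        h = self.extent / self.res
        return (np.array([i, j]) + 0.5) * h

    def query(self, x):
        h = self.extent / self.res
        i, j = np.clip((np.asarray(x) // h).astype(int), 0, self.res - 1)
        c = self.centers(i, j)
        return self.values[i, j] + self.grads[i, j] @ (np.asarray(x) - c)

# Fit the exactly-representable linear field f = 2x + 3y, then query it.
g = TaylorGrid2D(res=4)
for i in range(4):
    for j in range(4):
        c = g.centers(i, j)
        g.values[i, j] = 2 * c[0] + 3 * c[1]
        g.grads[i, j] = [2.0, 3.0]
print(g.query([0.3, 0.7]))  # 2.7, reproducing f exactly
```

Because each cell carries local derivative information, such a grid can represent smooth fields at lower resolution than a plain value grid, which is the source of the speed/quality trade-off the paper targets.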
no code implementations • 23 Jan 2024 • Wanjuan Su, Chen Zhang, Qingshan Xu, Wenbing Tao
While NISR has shown impressive results on simple scenes, recovering delicate geometry from uncontrolled real-world scenes remains challenging due to its underconstrained optimization.
1 code implementation • CVPR 2024 • Xiaotian Sun, Qingshan Xu, Xinjie Yang, Yu Zang, Cheng Wang
In this work, we present P²NeRF to capture global and hierarchical geometry consistency priors from pretrained models, thus facilitating few-shot NeRFs in 360° outward-facing indoor scenes.
no code implementations • 18 Dec 2023 • Jianyao Xu, Qingshan Xu, Xinyao Liao, Wanjuan Su, Chen Zhang, Yew-Soon Ong, Wenbing Tao
In this work, we propose a prior-based residual learning paradigm for fast multi-view neural surface reconstruction.
no code implementations • 14 Dec 2023 • Kezheng Xiong, Maoji Zheng, Qingshan Xu, Chenglu Wen, Siqi Shen, Cheng Wang
To the best of our knowledge, our approach is the first to facilitate point cloud registration with skeletal geometric priors.
no code implementations • 12 Oct 2023 • Chen Zhang, Wanjuan Su, Qingshan Xu, Wenbing Tao
Recently, learning multi-view neural surface reconstruction with the supervision of point clouds or depth maps has emerged as a promising approach.
no code implementations • ICCV 2023 • Chunlin Ren, Qingshan Xu, Shikun Zhang, Jiaqi Yang
3) A Hierarchical Prior Mining (HPM) framework, which mines extensive non-local prior information at different scales to assist 3D model recovery; this strategy achieves a considerable balance between reconstructing details and low-textured areas.
1 code implementation • 31 May 2022 • Qiancheng Fu, Qingshan Xu, Yew-Soon Ong, Wenbing Tao
Recently, learning neural implicit surfaces by volume rendering has become popular for multi-view reconstruction.
no code implementations • 13 Oct 2021 • Qingshan Xu, Martin R. Oswald, Wenbing Tao, Marc Pollefeys, Zhaopeng Cui
However, existing recurrent methods only model the local dependencies in the depth domain, which greatly limits the capability of capturing the global scene context along the depth dimension.
no code implementations • 15 Jul 2020 • Qingshan Xu, Wenbing Tao
We present a pixelwise visibility network to learn the visibility information for different neighboring images before computing the multi-view similarity, and then construct an adaptive weighted cost volume with the visibility information.
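The aggregation step described above can be sketched as a visibility-weighted average of per-neighbor similarity volumes. The shapes and weighting below are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

def visibility_weighted_cost(similarities, visibility):
    """Aggregate per-neighbor similarity volumes into one cost volume.

    similarities: (V, D, H, W) matching similarity per source view and depth.
    visibility:   (V, H, W) per-pixel weights in [0, 1], e.g. predicted by
                  a visibility network.
    Returns a (D, H, W) volume where occluded views contribute less.
    """
    w = visibility[:, None]                               # (V, 1, H, W)
    return (w * similarities).sum(0) / (w.sum(0) + 1e-6)  # (D, H, W)

rng = np.random.default_rng(1)
sims = rng.normal(size=(3, 5, 4, 4))   # 3 source views, 5 depth hypotheses
vis = rng.uniform(size=(3, 4, 4))
cost = visibility_weighted_cost(sims, vis)
print(cost.shape)
```

With uniform visibility this reduces to a plain mean over views; down-weighting a view suppresses its (possibly occluded) similarity evidence at those pixels.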
2 code implementations • 26 Dec 2019 • Qingshan Xu, Wenbing Tao
This can be attributed to the memory-consuming cost volume representation and inappropriate depth inference.
1 code implementation • 26 Dec 2019 • Qingshan Xu, Wenbing Tao
In detail, we utilize a probabilistic graphical model to embed planar models into PatchMatch multi-view stereo and contribute a novel multi-view aggregated matching cost.
Ranked #1 on Multi-View 3D Reconstruction on ETH3D
no code implementations • CVPR 2019 • Qingshan Xu, Wenbing Tao
For the depth estimation of low-textured areas, we further propose to combine ACMH with multi-scale geometric consistency guidance (ACMM) to obtain the reliable depth estimates for low-textured areas at coarser scales and guarantee that they can be propagated to finer scales.
Ranked #2 on Multi-View 3D Reconstruction on ETH3D
no code implementations • 21 May 2018 • Qingshan Xu, Wenbing Tao
In the computer vision domain, performing multi-view stereo (MVS) fast and accurately remains a challenging problem.