1 code implementation • CVPR 2024 • Ziyue Feng, Huangying Zhan, Zheng Chen, Qingan Yan, Xiangyu Xu, Changjiang Cai, Bing Li, Qilun Zhu, Yi Xu
We present NARUTO, a neural active reconstruction system that combines a hybrid neural representation with uncertainty learning, enabling high-fidelity surface reconstruction.
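The core idea of uncertainty-guided active reconstruction can be sketched as a greedy next-best-view loop: score each candidate camera pose by how much accumulated uncertainty its view would observe, and move to the best one. The sketch below is a hypothetical toy version (NARUTO's actual system uses a learned uncertainty field and ray casting against a hybrid neural representation; the grid, masks, and function names here are illustrative assumptions):

```python
import numpy as np

def score_view(uncertainty, visible_mask):
    # Information-gain proxy: total uncertainty inside the view's frustum.
    return float(uncertainty[visible_mask].sum())

def select_next_view(uncertainty, visible_masks):
    # Greedy next-best-view: pick the candidate pose whose visible
    # region covers the largest total uncertainty.
    scores = [score_view(uncertainty, m) for m in visible_masks]
    return int(np.argmax(scores))

# Toy example: 3 candidate views over a 4x4x4 uncertainty grid.
u = np.zeros((4, 4, 4))
u[0, 0, 0] = 5.0                       # one highly uncertain voxel
masks = [np.zeros_like(u, dtype=bool) for _ in range(3)]
masks[0][0, :, :] = True               # only view 0 sees the uncertain voxel
masks[1][1, :, :] = True
masks[2][2, :, :] = True
print(select_next_view(u, masks))      # -> 0
```

In a real system the visibility masks would come from rendering or ray casting each candidate pose, and the uncertainty would be updated after every new observation.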
no code implementations • 30 Dec 2023 • Zheng Chen, Qingan Yan, Huangying Zhan, Changjiang Cai, Xiangyu Xu, Yuzhong Huang, Weihan Wang, Ziyue Feng, Lantao Liu, Yi Xu
Through extensive experiments, we demonstrate the effectiveness of PlanarNeRF in various scenarios and remarkable improvement over existing works.
no code implementations • 12 Apr 2023 • Xiangyu Xu, Lichang Chen, Changjiang Cai, Huangying Zhan, Qingan Yan, Pan Ji, Junsong Yuan, Heng Huang, Yi Xu
Direct optimization of interpolated features on multi-resolution voxel grids has emerged as a more efficient alternative to MLP-like modules.
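The multi-resolution voxel-grid idea can be illustrated with a minimal NumPy sketch: each level stores a dense grid of learnable feature vectors, a query point is trilinearly interpolated at every level, and the per-level features are concatenated. This is a simplified stand-in (real systems optimize these features directly on the GPU and often replace dense grids with hash grids); all names here are illustrative:

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a feature grid of shape (R, R, R, C)
    at a point p in the unit cube [0, 1]^3."""
    R = grid.shape[0]
    x = np.clip(np.asarray(p, dtype=float) * (R - 1), 0, R - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, R - 1)
    t = x - i0                         # fractional offsets in each axis
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Weight of this corner = product of per-axis lerp factors.
                w = ((1 - t[0]) if dx == 0 else t[0]) * \
                    ((1 - t[1]) if dy == 0 else t[1]) * \
                    ((1 - t[2]) if dz == 0 else t[2])
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                out = out + w * grid[idx]
    return out

def multires_features(grids, p):
    # Concatenate interpolated features from every resolution level.
    return np.concatenate([trilinear(g, p) for g in grids])

# Two levels with constant features: interpolation must reproduce them.
grids = [np.ones((4, 4, 4, 2)), np.full((8, 8, 8, 3), 2.0)]
print(multires_features(grids, (0.3, 0.7, 0.5)))  # -> [1. 1. 2. 2. 2.]
```

Because the interpolation weights sum to one, constant grids are reproduced exactly; in training, gradients flow through these weights directly into the eight corner features, which is what makes grid optimization so much faster than querying a deep MLP.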
no code implementations • 23 Nov 2022 • Huangying Zhan, Jiyang Zheng, Yi Xu, Ian Reid, Hamid Rezatofighi
We present, for the first time, an RGB-only active vision framework that uses a radiance field representation for online active 3D reconstruction and planning.

no code implementations • 23 Nov 2022 • Huangying Zhan, Hamid Rezatofighi, Ian Reid
We propose a robotic learning system for autonomous exploration and navigation in unexplored environments.
1 code implementation • 14 Nov 2022 • Junlin Han, Huangying Zhan, Jie Hong, Pengfei Fang, Hongdong Li, Lars Petersson, Ian Reid
This paper studies the problem of measuring and predicting how memorable an image is to pattern recognition machines, as a path to explore machine intelligence.
2 code implementations • 7 Nov 2022 • Libo Sun, Jia-Wang Bian, Huangying Zhan, Wei Yin, Ian Reid, Chunhua Shen
Self-supervised monocular depth estimation has shown impressive results in static scenes.
Indoor Monocular Depth Estimation · Monocular Depth Estimation +1
2 code implementations • 25 May 2021 • Jia-Wang Bian, Huangying Zhan, Naiyan Wang, Zhichao Li, Le Zhang, Chunhua Shen, Ming-Ming Cheng, Ian Reid
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training and enables the scale-consistent prediction at inference time.
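The key to scale-consistent prediction is a geometry consistency term that penalizes the normalized difference between one frame's depth projected into the other view and the other frame's own predicted depth. A simplified sketch of that term (the full loss also involves pose estimation and image warping, which are omitted here):

```python
import numpy as np

def geometry_consistency_loss(d_proj, d_interp):
    """Normalized depth difference between frame A's depths projected
    into frame B (d_proj) and frame B's predicted depths sampled at the
    projected locations (d_interp). The (d_proj + d_interp) denominator
    makes the penalty symmetric and bounded in [0, 1); minimizing it
    over many frame pairs drives all predictions to one global scale."""
    diff = np.abs(d_proj - d_interp) / (d_proj + d_interp)
    return float(diff.mean())

d = np.array([1.0, 2.0, 3.0])
print(geometry_consistency_loss(d, d))        # -> 0.0 (perfectly consistent)
print(geometry_consistency_loss(2 * d, d))    # -> ~0.333 (2x scale drift)
```

Note how a uniform 2x scale mismatch yields the same penalty at every pixel, so the loss targets scale drift specifically rather than absolute depth error.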
2 code implementations • 1 Mar 2021 • Huangying Zhan, Chamara Saroj Weerasekera, Jia-Wang Bian, Ravi Garg, Ian Reid
More surprisingly, they show that well-trained networks produce scale-consistent predictions over long videos, although their accuracy remains inferior to traditional methods because they ignore geometric information.
1 code implementation • 4 Jun 2020 • Jia-Wang Bian, Huangying Zhan, Naiyan Wang, Tat-Jun Chin, Chunhua Shen, Ian Reid
However, excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly in indoor videos taken with handheld devices.
Ranked #50 on Monocular Depth Estimation on NYU-Depth V2
2 code implementations • 21 Sep 2019 • Huangying Zhan, Chamara Saroj Weerasekera, Jia-Wang Bian, Ian Reid
In this work we present a monocular visual odometry (VO) algorithm which leverages geometry-based methods and deep learning.
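One way geometry-based and learned components complement each other in such a hybrid VO pipeline: two-view geometry (e.g. decomposing the essential matrix) recovers camera rotation and a translation direction, but only up to an unknown scale; a depth network then fixes that scale by comparing its predictions with the triangulated depths. The sketch below shows only the scale-alignment step, with a median-ratio estimator as one robust choice; the function name and toy data are illustrative, not the paper's exact formulation:

```python
import numpy as np

def align_translation_scale(depth_pred, depth_tri):
    """Recover the metric-consistent translation scale for a two-view
    pose whose triangulated depths (depth_tri) are only defined up to
    scale, using a depth network's predictions (depth_pred) at the same
    pixels. The median of per-point ratios is robust to outliers."""
    return float(np.median(depth_pred / depth_tri))

# Toy check: triangulation with a unit-norm translation produced depths
# half the size of the CNN's, so the translation must be scaled by 2.
cnn_depth = np.array([2.0, 4.0, 6.0, 8.0])
tri_depth = cnn_depth / 2.0
print(align_translation_scale(cnn_depth, tri_depth))  # -> 2.0
```

The scaled translation, combined with the geometrically estimated rotation, then gives a drift-resistant odometry update at every frame.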
2 code implementations • NeurIPS 2019 • Jia-Wang Bian, Zhichao Li, Naiyan Wang, Huangying Zhan, Chunhua Shen, Ming-Ming Cheng, Ian Reid
To the best of our knowledge, this is the first work to show that deep networks trained using unlabelled monocular videos can predict globally scale-consistent camera trajectories over a long video sequence.
Ranked #3 on Camera Pose Estimation on KITTI Odometry Benchmark
no code implementations • 1 Mar 2019 • Huangying Zhan, Chamara Saroj Weerasekera, Ravi Garg, Ian Reid
In this work we present a self-supervised learning framework to simultaneously train two Convolutional Neural Networks (CNNs) to predict depth and surface normals from a single image.
Ranked #64 on Monocular Depth Estimation on KITTI Eigen split
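Jointly predicting depth and normals invites a consistency check between the two: normals can be derived from the depth map itself and compared against the normal network's output. Below is a simplified finite-difference version of that derivation under an orthographic assumption (the paper back-projects with full camera intrinsics; parameter names here are illustrative):

```python
import numpy as np

def normals_from_depth(depth, fx=1.0, fy=1.0):
    """Estimate per-pixel surface normals from a depth map via finite
    differences (orthographic simplification). A depth-normal
    consistency loss would compare these against a normal network's
    predictions, coupling the two self-supervised tasks."""
    dz_dx = np.gradient(depth, axis=1) * fx
    dz_dy = np.gradient(depth, axis=0) * fy
    # Surface normal of z = f(x, y) is proportional to (-dz/dx, -dz/dy, 1).
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

flat = np.full((5, 5), 3.0)
print(normals_from_depth(flat)[2, 2])  # fronto-parallel plane -> [0. 0. 1.]
```

A fronto-parallel plane yields the camera-facing normal everywhere, while a tilted plane yields a constant tilted normal, so the check is well defined without any ground-truth labels.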
no code implementations • ECCV 2018 • Kejie Li, Trung Pham, Huangying Zhan, Ian Reid
Given a single image at an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object.
1 code implementation • CVPR 2018 • Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, Ian Reid
Despite learning-based methods showing promising results in single-view depth estimation and visual odometry, most existing approaches treat the tasks in a supervised manner.