Toward Practical Monocular Indoor Depth Estimation

Most prior monocular depth estimation methods trained without ground-truth depth focus on driving scenarios. We show that such methods generalize poorly to unseen complex indoor scenes, where objects are cluttered and arbitrarily arranged in the near field. To obtain more robustness, we propose a structure distillation approach that learns from an off-the-shelf relative depth estimator producing structured but metric-agnostic depth. By combining structure distillation with a branch that learns metric scale from left-right consistency, we attain structured, metric depth for generic indoor scenes with real-time inference. To facilitate learning and evaluation, we collect SimSIN, a simulated dataset spanning thousands of environments, and UniSIN, a dataset of about 500 real scan sequences of generic indoor environments. We experiment in both sim-to-real and real-to-real settings, and show improvements both quantitatively and in downstream applications that use our depth maps. This work provides a full study covering methods, data, and applications.
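The abstract's training recipe combines two signals: scale-free structure distilled from a frozen relative-depth expert, and metric scale from stereo left-right consistency. Below is a minimal sketch of one way to combine them in PyTorch; the normalization scheme, function names, and loss weight are our assumptions, not the authors' released code.

    # A minimal sketch (not the authors' released code) of the two ideas above:
    # (1) a structure-distillation loss that matches the student's depth to a
    # frozen relative-depth expert after per-image normalization, so only
    # scale-free structure is transferred; (2) metric scale comes separately
    # from a stereo photometric (left-right consistency) loss.
    import torch

    def _norm(d: torch.Tensor) -> torch.Tensor:
        """Median/MAD normalization per image -> scale- and shift-invariant map."""
        v = d.reshape(d.shape[0], -1)
        med = v.median(dim=1, keepdim=True).values
        mad = (v - med).abs().mean(dim=1, keepdim=True) + 1e-8
        return (v - med) / mad

    def structure_distillation_loss(student_depth: torch.Tensor,
                                    expert_relative_depth: torch.Tensor) -> torch.Tensor:
        """Penalize structural disagreement with the metric-agnostic expert output."""
        target = _norm(expert_relative_depth).detach()  # expert is frozen; no gradients
        return (_norm(student_depth) - target).abs().mean()

    # Schematic training objective: a standard stereo photometric warping loss
    # (placeholder name `photometric_lr_loss`) supplies metric scale, while the
    # distillation term transfers indoor structure. The 0.5 weight is arbitrary.
    # loss = photometric_lr_loss(left_img, right_img, student_depth) \
    #        + 0.5 * structure_distillation_loss(student_depth, expert_depth)

Because both maps are normalized per image, the distillation term cannot pull the student's absolute scale away from what the left-right consistency branch learns; it only shapes relative structure.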

CVPR 2022

Datasets


Introduced in the Paper:

VA (Virtual Apartment)

Used in the Paper:

NYUv2, Replica, Hypersim, HM3D
Benchmark results (Task: Monocular Depth Estimation; Model: DistDepth)

NYU-Depth V2, self-supervised:
    Root mean square error (RMSE, m)          0.517   rank # 2
    Absolute relative error (AbsRel)          0.130   rank # 2
    delta_1 (%)                               83.2    rank # 2
    delta_2 (%)                               96.3    rank # 2
    delta_3 (%)                               99.0    rank # 2

VA (Virtual Apartment):
    Root mean square error (RMSE, m)          0.374   rank # 1
    Log root mean square error (RMSE_log)     0.213   rank # 1
    Mean absolute error (MAE, m)              0.253   rank # 1
    Absolute relative error (AbsRel)          0.175   rank # 1
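For reference, the metrics above follow the standard monocular-depth evaluation protocol: AbsRel is the mean of |pred - gt| / gt, RMSE is the root mean squared error, and delta_k is the percentage of pixels whose ratio max(pred/gt, gt/pred) falls below 1.25^k. A minimal NumPy sketch (the helper name and masking convention are ours):

    # Standard monocular-depth evaluation metrics as reported in the table above.
    # Assumes predicted and ground-truth depths in meters; pixels with gt == 0
    # are treated as invalid and excluded.
    import numpy as np

    def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
        mask = gt > 0                      # evaluate only on valid ground-truth pixels
        pred, gt = pred[mask], gt[mask]
        abs_rel = np.mean(np.abs(pred - gt) / gt)
        rmse = np.sqrt(np.mean((pred - gt) ** 2))
        rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
        mae = np.mean(np.abs(pred - gt))
        ratio = np.maximum(pred / gt, gt / pred)
        deltas = {f"delta_{k}": 100.0 * np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)}
        return {"AbsRel": abs_rel, "RMSE": rmse, "RMSE_log": rmse_log,
                "MAE": mae, **deltas}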

Methods


No methods listed for this paper.