LED2-Net: Monocular 360° Layout Estimation via Differentiable Depth Rendering

1 Apr 2021 · Fu-En Wang, Yu-Hsuan Yeh, Min Sun, Wei-Chen Chiu, Yi-Hsuan Tsai

Although significant progress has been made in room layout estimation, most methods aim to reduce the loss in 2D pixel coordinates rather than exploiting the room structure in 3D space. Towards reconstructing the room layout in 3D, we formulate the task of 360° layout estimation as a problem of predicting depth on the horizon line of a panorama. Specifically, we propose a Differentiable Depth Rendering procedure that makes the conversion from a layout to its depth prediction differentiable, so our model is end-to-end trainable while leveraging 3D geometric information, without requiring ground-truth depth. Our method achieves state-of-the-art performance on numerous 360° layout benchmark datasets. Moreover, our formulation enables a pre-training step on depth datasets, which further improves the generalizability of our layout estimation model.
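To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how a layout prediction can be converted into horizon-line depth differentiably, assuming an equirectangular panorama and a known camera height. The names `layout_to_horizon_depth`, `layout_v`, and `camera_height` are illustrative, and the boundary-to-depth geometry uses the standard floor-plane assumption rather than the paper's exact formulation.

```python
# Sketch: differentiable conversion from a predicted floor boundary
# to per-column depth on the horizon line of an equirectangular panorama.
# Assumes a floor plane at a known camera height; names are hypothetical.

import torch


def layout_to_horizon_depth(layout_v: torch.Tensor,
                            img_height: int,
                            camera_height: float = 1.6) -> torch.Tensor:
    """Render per-column horizon-line depth from a layout boundary.

    layout_v: (W,) predicted floor-boundary row per image column,
              in pixel coordinates (0 = top of the panorama).
    Returns:  (W,) horizontal distance from the camera to the wall.
    """
    # Row -> latitude: v = 0 maps to +pi/2 (zenith), v = H to -pi/2 (nadir).
    lat = (0.5 - layout_v / img_height) * torch.pi  # (W,)
    # The floor boundary lies below the horizon, so lat < 0 there.
    # A ray from a camera at height `camera_height` meets the floor at
    # horizontal distance d = camera_height / tan(-lat); every operation
    # is differentiable, so a depth loss backpropagates to layout_v.
    depth = camera_height / torch.tan(-lat).clamp(min=1e-3)
    return depth


# Usage: compare rendered depth to a target and backprop to the layout.
pred_v = torch.full((1024,), 380.0, requires_grad=True)  # hypothetical prediction
depth = layout_to_horizon_depth(pred_v, img_height=512)
loss = torch.nn.functional.l1_loss(depth, torch.full_like(depth, 2.0))
loss.backward()  # gradients flow through the rendering back to pred_v
```

Because the rendering step is a closed-form, differentiable function of the boundary position, supervision can be applied in depth space while the network still outputs a layout, which is what allows depth pre-training without ground-truth depth at layout-training time.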

Results
Task: 3D Room Layouts From A Single RGB Panorama
Dataset: Stanford2D3D Panoramic
Model: LED2-Net
Metric: 3DIoU = 83.77 (Global Rank: #3)
