DuLa-Net: A Dual-Projection Network for Estimating Room Layouts from a Single RGB Panorama

We present a deep learning framework, called DuLa-Net, to predict Manhattan-world 3D room layouts from a single RGB panorama. To achieve better prediction accuracy, our method leverages two projections of the panorama at once, namely the equirectangular panorama-view and the perspective ceiling-view, each of which contains different cues about the room layout. Our network architecture consists of two encoder-decoder branches, one for analyzing each of the two views. In addition, a novel feature fusion structure is proposed to connect the two branches, which are then jointly trained to predict the 2D floor plan and the layout height. To learn more complex room layouts, we introduce the Realtor360 dataset, which contains panoramas of Manhattan-world room layouts with varying numbers of corners. Experimental results show that our method outperforms the state of the art in prediction accuracy, especially in rooms with non-cuboid layouts.
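The ceiling-view branch operates on a perspective image sampled from the equirectangular panorama by a camera looking straight up. The sketch below illustrates one common way to perform this equirectangular-to-perspective resampling; the function name, field-of-view parameter, and nearest-neighbor sampling are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def e2p_ceiling(pano, out_size=512, fov_deg=160.0):
    """Sample a perspective ceiling-view from an equirectangular panorama.

    Assumes a virtual pinhole camera at the panorama center looking
    straight up (+z); fov_deg and the sampling scheme are illustrative.
    """
    H, W = pano.shape[:2]
    half = np.tan(np.radians(fov_deg) / 2)
    # Grid of points on the image plane z = 1.
    xs = np.linspace(-half, half, out_size)
    ys = np.linspace(-half, half, out_size)
    x, y = np.meshgrid(xs, ys)
    z = np.ones_like(x)
    # Normalize to unit viewing directions.
    norm = np.sqrt(x**2 + y**2 + z**2)
    dx, dy, dz = x / norm, y / norm, z / norm
    # Spherical coordinates: longitude in [-pi, pi], latitude in [-pi/2, pi/2].
    # The longitude convention here is an arbitrary assumption.
    lon = np.arctan2(dx, dy)
    lat = np.arcsin(dz)
    # Map spherical coordinates to equirectangular pixel coordinates.
    u = (lon / (2 * np.pi) + 0.5) * (W - 1)
    v = (0.5 - lat / np.pi) * (H - 1)
    # Nearest-neighbor sampling (bilinear would be used in practice).
    return pano[v.round().astype(int), u.round().astype(int)]
```

The floor-view can be sampled the same way with the camera pointing down; running the network on both projections lets the ceiling branch see the floor plan boundary as near-straight lines instead of the curved walls of the equirectangular view.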

PDF Abstract (CVPR 2019)
Task: 3D Room Layouts From A Single RGB Panorama

Dataset          Model     Metric  Value   Global Rank
PanoContext      DuLa-Net  3DIoU   77.42%  #3
Realtor360       DuLa-Net  3DIoU   77.20%  #1
Stanford 2D-3D   DuLa-Net  3DIoU   79.36%  #2
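The 3DIoU metric reported above measures the volumetric overlap between the predicted and ground-truth 3D layouts. For general Manhattan layouts this is computed from the floor plan polygons and layout heights; the minimal sketch below shows the metric for the simplest case of two axis-aligned cuboids, which is an illustrative reduction rather than the benchmark's exact evaluation code.

```python
def cuboid_3d_iou(a, b):
    """3D IoU of two axis-aligned cuboids.

    Each cuboid is (xmin, ymin, zmin, xmax, ymax, zmax); this cuboid
    special case is an illustrative assumption, not the full polygon-based
    evaluation used for non-cuboid layouts.
    """
    # Intersection volume: product of per-axis overlap lengths.
    inter = 1.0
    for i in range(3):
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        inter *= max(0.0, hi - lo)

    def vol(c):
        return (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])

    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit cubes offset by half their width along one axis overlap in one third of their union volume.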

