3D Room Layout Estimation from a Cubemap of Panorama Image via Deep Manhattan Hough Transform

19 Jul 2022  ·  Yining Zhao, Chao Wen, Zhou Xue, Yue Gao

Significant geometric structures can be compactly described by global wireframes in the estimation of 3D room layout from a single panoramic image. Based on this observation, we present an alternative approach that estimates the walls in 3D space by modeling long-range geometric patterns in a learnable Hough Transform block. We transform the image feature of a cubemap tile into the Hough space of a Manhattan world and directly map the feature to the geometric output. The convolutional layers not only learn local gradient-like line features but also exploit global information, allowing a simple network structure to predict occluded walls. Unlike most previous work, the predictions are performed individually on each cubemap tile and then assembled into the layout estimation. Experimental results show that we achieve results comparable to the recent state of the art in prediction accuracy and performance. Code is available at https://github.com/Starrah/DMH-Net.
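To make the core idea concrete: in a Manhattan world, wall boundaries on a cubemap tile project to (approximately) axis-aligned lines, so Hough voting over a tile's feature map degenerates to accumulating responses along rows and columns. The sketch below is a toy, non-learnable illustration of this voting step under that assumption; it is not the paper's actual DMH-Net block, and the function name `manhattan_hough` is hypothetical.

```python
import numpy as np

def manhattan_hough(feat):
    """Toy Manhattan Hough voting on a single-channel feature map.

    Assumption (not the paper's exact operator): lines of interest are
    axis-aligned, so each row sum is the vote for one horizontal line and
    each column sum is the vote for one vertical line.
    """
    h_votes = feat.sum(axis=1)  # one vote bin per candidate horizontal line
    v_votes = feat.sum(axis=0)  # one vote bin per candidate vertical line
    return h_votes, v_votes

# A synthetic tile with a strong vertical edge at column 3.
feat = np.zeros((8, 8))
feat[:, 3] = 1.0

h_votes, v_votes = manhattan_hough(feat)
print(int(v_votes.argmax()))  # strongest vertical-line hypothesis -> 3
```

In the learnable version described by the abstract, convolutions operate on these accumulated (global) line features rather than only on local gradients, which is what lets the network reason about occluded walls.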

Task: 3D Room Layouts From A Single RGB Panorama

Dataset                  Model     Metric         Value   Global Rank
PanoContext              DMH-Net   3DIoU          85.48   # 1
Stanford2D3D Panoramic   DMH-Net   3DIoU          84.93   # 1
Stanford2D3D Panoramic   DMH-Net   Corner Error    0.67   # 2
Stanford2D3D Panoramic   DMH-Net   Pixel Error     1.93   # 1
