HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features

CVPR 2021 · Cheng Sun, Min Sun, Hwann-Tzong Chen

We present HoHoNet, a versatile and efficient framework for holistic understanding of an indoor 360-degree panorama using a Latent Horizontal Feature (LHFeat). The compact LHFeat flattens the features along the vertical direction and has shown success in modeling per-column modalities for room layout reconstruction. HoHoNet advances this line of work in two important aspects. First, the deep architecture is redesigned to run faster with improved accuracy. Second, we propose a novel horizon-to-dense module that relaxes the per-column output shape constraint, allowing per-pixel dense prediction from LHFeat. HoHoNet is fast: it runs at 52 FPS and 110 FPS with ResNet-50 and ResNet-34 backbones respectively when modeling dense modalities from a high-resolution $512 \times 1024$ panorama. HoHoNet is also accurate: on layout estimation and semantic segmentation it achieves results on par with the current state of the art, and on dense depth estimation it outperforms all prior art by a large margin.
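To make the two shape transformations in the abstract concrete, here is a minimal PyTorch sketch: a vertical compression that flattens a 2D feature map into one latent vector per image column (the LHFeat), and a horizon-to-dense expansion that predicts H×c values per column and reshapes them into a per-pixel map. The module names (`HeightCompression`, `HorizonToDense`), the 1×1 convolutions, and the reshape-based expansion are illustrative assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HeightCompression(nn.Module):
    """Flatten a 2D feature map along the vertical axis into a 1D
    horizontal feature (one latent vector per image column).
    A simplified stand-in for the paper's compression step."""
    def __init__(self, in_channels, feat_height, out_channels):
        super().__init__()
        self.proj = nn.Conv1d(in_channels * feat_height, out_channels, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        x = x.reshape(b, c * h, w)              # stack heights into channels
        return self.proj(x)                     # (B, C', W) -- the LHFeat

class HorizonToDense(nn.Module):
    """Expand LHFeat back to a per-pixel map by predicting H*c values
    per column and reshaping them along the height axis."""
    def __init__(self, lh_channels, out_height, out_channels):
        super().__init__()
        self.out_height = out_height
        self.out_channels = out_channels
        self.expand = nn.Conv1d(lh_channels, out_height * out_channels, kernel_size=1)

    def forward(self, lh):                      # lh: (B, C', W)
        b, _, w = lh.shape
        x = self.expand(lh)                     # (B, H*c, W)
        return x.reshape(b, self.out_channels, self.out_height, w)  # (B, c, H, W)

# Toy usage: compress a backbone feature map, then densify it.
feat = torch.randn(1, 256, 16, 128)             # e.g. a ResNet stage output
lhfeat = HeightCompression(256, 16, 256)(feat)  # (1, 256, 128)
dense = HorizonToDense(256, 512, 1)(lhfeat)     # (1, 1, 512, 128), e.g. a depth map
```

The reshape-based expansion is what lets a compact per-column feature drive dense heads such as depth or semantic segmentation without reintroducing a heavy 2D decoder, which is consistent with the speed figures reported above.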

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Semantic Segmentation | Stanford2D3D Panoramic | HoHoNet (ResNet-101) | mIoU | 52.0% | #13 |
| Semantic Segmentation | Stanford2D3D Panoramic | HoHoNet (ResNet-101) | mAcc | 65.0% | #7 |
| 3D Room Layouts From A Single RGB Panorama | Stanford2D3D Panoramic | HoHoNet (ResNet-101) | 3DIoU | 79.88% | #6 |
| Depth Estimation | Stanford2D3D Panoramic | HoHoNet (ResNet-101) | RMSE | 0.3834 | #14 |
| Depth Estimation | Stanford2D3D Panoramic | HoHoNet (ResNet-101) | Absolute relative error | 0.1014 | #10 |
| Semantic Segmentation | Stanford2D3D Panoramic - RGBD | HoHoNet (ResNet-101) | mIoU | 56.3% | #2 |
| Semantic Segmentation | Stanford2D3D Panoramic - RGBD | HoHoNet (ResNet-101) | mAcc | 68.9% | #3 |
