Efficient Multi-Task RGB-D Scene Analysis for Indoor Environments

10 Jul 2022 · Daniel Seichter, Söhnke Benedikt Fischedick, Mona Köhler, Horst-Michael Groß

Semantic scene understanding is essential for mobile agents acting in various environments. Although semantic segmentation already provides a great deal of information, details about individual objects as well as the general scene are missing but are required for many real-world applications. However, solving multiple tasks separately is expensive and cannot be accomplished in real time given the limited computing and battery capabilities of a mobile platform. In this paper, we propose an efficient multi-task approach for RGB-D scene analysis (EMSANet) that simultaneously performs semantic and instance segmentation (panoptic segmentation), instance orientation estimation, and scene classification. We show that all tasks can be accomplished using a single neural network in real time on a mobile platform without diminishing performance; on the contrary, the individual tasks benefit from each other. In order to evaluate our multi-task approach, we extend the annotations of the common RGB-D indoor datasets NYUv2 and SUNRGB-D with instance segmentation and orientation estimation labels. To the best of our knowledge, we are the first to provide results in such a comprehensive multi-task setting for indoor scene analysis on NYUv2 and SUNRGB-D.
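
The sketch below illustrates the shared-encoder, multi-head design the abstract describes: two modality branches are fused and feed four task-specific heads. It is a minimal toy example, not the authors' implementation; all class, layer, and head names here are illustrative assumptions (the official code is at https://github.com/TUI-NICR/EMSANet, which uses full ResNet-based encoders, attention-based fusion, and dedicated decoders).

```python
import torch
import torch.nn as nn


class MultiTaskRGBDNet(nn.Module):
    """Toy shared-encoder, multi-head network for RGB-D scene analysis.

    EMSANet uses two full ResNet-based encoders with attention-based
    fusion and task-specific decoders; this sketch collapses all of
    that into single conv layers to show only the data flow.
    """

    def __init__(self, n_sem_classes=40, n_scene_classes=10, n_orient_bins=8):
        super().__init__()
        # one tiny encoder per modality (stand-ins for the real backbones)
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # task-specific heads on the fused features
        self.semantic_head = nn.Conv2d(64, n_sem_classes, 1)     # per-pixel classes
        self.instance_head = nn.Conv2d(64, 1 + 2, 1)             # center heatmap + 2D offsets
        self.orientation_head = nn.Conv2d(64, n_orient_bins, 1)  # discretized orientations
        self.scene_head = nn.Linear(64, n_scene_classes)         # global scene label

    def forward(self, rgb, depth):
        # element-wise fusion of the two modality branches
        feat = self.rgb_encoder(rgb) + self.depth_encoder(depth)
        semantic = self.semantic_head(feat)
        instance = self.instance_head(feat)
        orientation = self.orientation_head(feat)
        scene = self.scene_head(feat.mean(dim=(2, 3)))  # global average pooling
        return semantic, instance, orientation, scene


# single forward pass on a dummy 640x480 RGB-D frame
rgb = torch.randn(1, 3, 480, 640)
depth = torch.randn(1, 1, 480, 640)
semantic, instance, orientation, scene = MultiTaskRGBDNet()(rgb, depth)
```

A panoptic output can then be obtained by merging the semantic logits with the instance center/offset predictions in a bottom-up fashion, as in Panoptic-DeepLab, whose instance representation EMSANet adopts.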

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Scene Classification (unified classes) | NYU Depth v2 | EMSANet | Balanced Accuracy | 75.25 | #1 |
| Panoptic Segmentation | NYU Depth v2 | EMSANet | PQ | 47.38 | #1 |
| Semantic Segmentation | NYU Depth v2 | EMSANet (2x ResNet-34 NBt1D, finetuned) | Mean IoU | 53.34% | #25 |
| Semantic Segmentation | SUN-RGBD | EMSANet | Mean IoU | 48.47% | #22 |
| Scene Classification (unified classes) | SUN-RGBD | EMSANet | Balanced Accuracy | 57.22 | #1 |
| Panoptic Segmentation | SUN-RGBD | EMSANet | PQ | 52.84 | #1 |
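
For reference, the panoptic quality (PQ) values above follow the standard definition of Kirillov et al. (CVPR 2019). The helper below is a small illustrative sketch of that metric, not code from the paper; the argument names are hypothetical.

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Panoptic quality as defined by Kirillov et al. (CVPR 2019).

    matched_ious: IoU of each matched (prediction, ground-truth)
    segment pair; a pair counts as a true positive iff IoU > 0.5.
    num_fp / num_fn: counts of unmatched predicted / ground-truth segments.
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    if denom == 0:
        return 0.0
    # PQ = segmentation quality (mean matched IoU) x recognition quality (F1)
    return sum(matched_ious) / denom
```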
