MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes

This work addresses the semantic segmentation of street-scene images for autonomous vehicles, based on a new RGB-Thermal dataset that is also introduced in this paper. Increasing interest in self-driving vehicles has brought semantic segmentation into self-driving systems. However, most recent semantic segmentation research relies on RGB images alone, which provide too little information during times of poor visibility, such as at night and under adverse weather conditions. Furthermore, most of these methods focus only on improving accuracy while ignoring time consumption. These problems prompted us to propose a new convolutional neural network architecture for multi-spectral image segmentation that retains segmentation accuracy during real-time operation. We benchmark our method on a newly created RGB-Thermal dataset in which thermal and RGB images are combined, and show that segmentation accuracy increases significantly when thermal infrared information is added.
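To make the multi-spectral idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a two-stream network that encodes the RGB and thermal inputs separately and fuses them by channel concatenation before per-pixel prediction. The layer widths, the fusion point, and the class count of 9 are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions followed by 2x spatial downsampling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TwoStreamSegNet(nn.Module):
    """Separate encoders for RGB (3 channels) and thermal (1 channel),
    fused by channel concatenation before a small decoder head."""

    def __init__(self, n_classes=9):  # class count is an assumption
        super().__init__()
        self.rgb_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.thermal_enc = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, n_classes, 1),
        )

    def forward(self, rgb, thermal):
        fused = torch.cat([self.rgb_enc(rgb), self.thermal_enc(thermal)], dim=1)
        logits = self.decoder(fused)
        # Upsample back to the input resolution for per-pixel prediction.
        return F.interpolate(logits, size=rgb.shape[-2:], mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 480, 640)      # RGB frame
    thermal = torch.randn(1, 1, 480, 640)  # spatially aligned thermal frame
    out = TwoStreamSegNet()(rgb, thermal)
    print(out.shape)  # torch.Size([1, 9, 480, 640])
```

Keeping the two encoders small and fusing only once is what keeps such a design fast enough for real-time use; the actual MFNet architecture differs in its exact blocks and shortcut connections.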


Datasets


Introduced in the Paper:

MFNet

Used in the Paper:

PST900
Task                         Dataset                  Model   Metric   Value   Global Rank
Semantic Segmentation        GAMUS                    MFNet   mIoU     52.73   #6
Thermal Image Segmentation   KP day-night             MFNet   mIoU     24.0    #5
Thermal Image Segmentation   MFN Dataset              MFNet   mIoU     39.7    #47
Thermal Image Segmentation   Noisy RS RGB-T Dataset   MFNet   mIoU     33.1    #6
Thermal Image Segmentation   PST900                   MFNet   mIoU     57.0    #16
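All of the results above are reported as mean intersection-over-union (mIoU). As a reference, here is a minimal NumPy sketch of how mIoU is typically computed from predicted and ground-truth label maps; the class count and array names are illustrative, and benchmark implementations may differ in how they handle absent or ignored classes.

```python
import numpy as np


def mean_iou(pred, target, n_classes):
    """pred, target: integer label maps of the same shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))


if __name__ == "__main__":
    pred = np.random.randint(0, 9, size=(480, 640))
    target = np.random.randint(0, 9, size=(480, 640))
    print(f"mIoU: {mean_iou(pred, target, 9):.3f}")
```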

Methods


No methods listed for this paper.