MLPD: Multi-Label Pedestrian Detector in Multispectral Domain

Multispectral pedestrian detection has been actively studied as a promising multi-modality solution to handle illumination and weather changes. Most multi-modality approaches assume that all inputs are fully overlapped. However, such data pairs are uncommon in practical applications due to the complexity of real sensor configurations. In this letter, we tackle multispectral pedestrian detection in the case where not all input data are paired. To this end, we propose a novel single-stage detection framework that leverages multi-label learning to learn input-state-aware features by assigning a separate label according to the given state of the input image pair. We also present a novel augmentation strategy that applies geometric transformations to synthesize unpaired multispectral images. In extensive experiments, we demonstrate the efficacy of the proposed method under various real-world conditions, such as fully-overlapped and partially-overlapped images in stereo vision. Code and a demonstration video are available at https://github.com/sejong-rcv/MLPD-Multi-Label-Pedestrian-Detection.
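The two ideas named in the abstract, synthesizing unpaired multispectral inputs from aligned ones and attaching a state label per sample, can be sketched as below. This is a minimal illustration under assumed conventions, not the authors' implementation: it uses NumPy images, a simple horizontal translation in place of the paper's full set of geometric transformations, and hypothetical names (`semi_unpaired_augment`, the `STATE_*` constants).

```python
import random
import numpy as np

# Illustrative (hypothetical) state labels describing which modality
# of a sample is still trustworthy after augmentation.
STATE_BOTH, STATE_RGB_ONLY, STATE_THERMAL_ONLY = 0, 1, 2

def semi_unpaired_augment(rgb, thermal, p_unpair=0.5):
    """Synthesize an unpaired multispectral sample from an aligned pair.

    rgb, thermal: pixel-aligned HxWxC / HxW NumPy arrays.
    With probability p_unpair, one modality is geometrically shifted so the
    pair is no longer aligned, and the sample is tagged with a state label
    indicating which modality remains paired with the annotations.
    """
    if random.random() >= p_unpair:
        return rgb, thermal, STATE_BOTH

    # Random horizontal translation to break the pixel-level alignment.
    shift = random.randint(10, 40)
    if random.random() < 0.5:
        rgb = np.roll(rgb, shift, axis=1)      # misalign the RGB stream
        return rgb, thermal, STATE_THERMAL_ONLY
    thermal = np.roll(thermal, shift, axis=1)  # misalign the thermal stream
    return rgb, thermal, STATE_RGB_ONLY
```

In a detector along the lines described in the abstract, such a per-sample state label could be used to decide which modality-specific labels or losses apply, which is the role the multi-label scheme plays for input-state-aware features.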

Task: Multispectral Object Detection
Dataset: KAIST Multispectral Pedestrian Detection Benchmark
Model: MLPD
Metric: Reasonable Miss Rate = 7.58
Global Rank: #4

Methods


No methods listed for this paper.