Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection

This study addresses the problem of fusing infrared and visible images, which look very different, for object detection. Aiming to generate an image of high visual quality, previous approaches discover the commonalities underlying the two modalities and fuse within that common space, either by iterative optimization or by deep networks. These approaches neglect that the differences between modalities, which carry the complementary information, are extremely important for both fusion and the subsequent detection task. This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it into a Target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network. The fusion network, with one generator and dual discriminators, seeks commonalities while learning from differences, preserving the structural information of targets from the infrared image and the textural details from the visible image. Furthermore, we build a synchronized imaging system with calibrated infrared and optical sensors, and collect what is currently the most comprehensive benchmark, covering a wide range of scenarios. Extensive experiments on several public datasets and our benchmark demonstrate that our method produces not only visually appealing fused images but also higher detection mAP than state-of-the-art approaches.
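The fusion network pairs a single generator with two discriminators, one anchored to the infrared modality (target structure) and one to the visible modality (texture detail). The sketch below is a minimal PyTorch illustration of such a dual-adversarial wiring; the class names (`FusionGenerator`, `Discriminator`) and layer sizes are assumptions for illustration and are not the paper's exact architecture.

```python
# Minimal sketch of a fusion generator trained against two discriminators.
# Architecture details here are illustrative, not the TarDAL implementation.
import torch
import torch.nn as nn

class FusionGenerator(nn.Module):
    """Fuses a 1-channel infrared and a 1-channel visible image into one image."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        # Concatenate modalities along the channel axis and predict the fused image.
        return self.net(torch.cat([ir, vis], dim=1))

class Discriminator(nn.Module):
    """Patch-style critic; one instance judges target structure against the
    infrared input, the other judges texture detail against the visible input."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels * 2, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Toy forward pass: the generator receives adversarial feedback from both critics.
gen = FusionGenerator()
d_target, d_detail = Discriminator(), Discriminator()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
fused = gen(ir, vis)
adv_loss = -(d_target(fused).mean() + d_detail(fused).mean())  # generator's adversarial term
```

In the joint (bilevel) setting described above, an adversarial term like this would be combined with the downstream detection loss so that the fused image serves both visual quality and detection.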

CVPR 2022

Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Semantic Segmentation | FMB Dataset | TarDAL (RGB-Infrared) | mIoU | 48.10 | #8 |
| Object Detection | India Driving Dataset | | mAP@0.5 | 81.1 | #4 |
| Object Detection | Multispectral Dataset | TarDAL | mAP@0.5 | 78 | #1 |

Methods


No methods listed for this paper.