Infrared And Visible Image Fusion
30 papers with code • 0 benchmarks • 4 datasets
Image fusion with paired infrared and visible images
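As a point of reference for the learned methods listed below, the task can be illustrated with the simplest possible baseline: a pixel-wise weighted average of a registered infrared/visible pair. This is a minimal NumPy sketch, not any of the listed methods; the function name and the fixed weight `alpha` are illustrative assumptions.

```python
import numpy as np

def fuse_weighted(ir, vis, alpha=0.5):
    """Naive pixel-wise weighted-average fusion of a registered pair.

    ir  : (H, W) single-channel infrared image, floats in [0, 1]
    vis : (H, W, 3) visible RGB image, floats in [0, 1]
    alpha weights the infrared contribution (illustrative default).
    """
    ir3 = np.repeat(ir[..., None], 3, axis=-1)   # broadcast IR to 3 channels
    fused = alpha * ir3 + (1.0 - alpha) * vis
    return np.clip(fused, 0.0, 1.0)

# Toy example: a bright thermal target over a dark visible scene.
ir = np.full((4, 4), 0.8)
vis = np.zeros((4, 4, 3))
fused = fuse_weighted(ir, vis, alpha=0.5)
print(fused[0, 0])  # each channel is 0.4
```

The learned methods below effectively replace the fixed `alpha` with spatially varying, content-aware weighting driven by losses such as detection or illumination-aware objectives.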
Benchmarks
These leaderboards track progress in Infrared And Visible Image Fusion.
Latest papers
DetFusion: A Detection-driven Infrared and Visible Image Fusion Network
We cascade the image fusion network with the detection networks of both modalities and use the detection loss of the fused images to provide guidance on task-related information for the optimization of the image fusion network.
PIAFusion: A progressive infrared and visible image fusion network based on illumination aware
Moreover, we utilize the illumination probability to construct an illumination-aware loss to guide the training of the fusion network.
Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration
Moreover, to better fuse the registered infrared images and visible images, we present a feature Interaction Fusion Module (IFM) to adaptively select more meaningful features for fusion in the Dual-path Interaction Fusion Network (DIFN).
Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Fusing Event-based and RGB camera for Robust Object Detection in Adverse Conditions
The ability to detect objects under image corruptions and varying weather conditions is vital for deep learning models, especially when they are applied to real-world applications such as autonomous driving.
Infrared and Visible Image Fusion via Interactive Compensatory Attention Adversarial Learning
Existing generative adversarial fusion methods generally concatenate the source images and extract local features through convolution operations, without considering their global characteristics. This tends to produce an unbalanced result biased towards either the infrared or the visible image.
Multispectral image fusion based on super pixel segmentation
This paper focuses on the task of fusing color (RGB) and near-infrared (NIR) images, as is typical of RGBT sensors, such as multispectral cameras used for detection, fusion, and dehazing.
Physics Driven Deep Retinex Fusion for Adaptive Infrared and Visible Image Fusion
In this study, we show that the structures of generative networks capture a great deal of image feature priors, and that these priors are sufficient to reconstruct a high-quality fused super-resolution result using only low-resolution inputs.
LLVIP: A Visible-infrared Paired Dataset for Low-light Vision
Low-light conditions make various visual tasks, such as image fusion, pedestrian detection, and image-to-image translation, very challenging due to the loss of effective target areas.
RFN-Nest: An end-to-end residual fusion network for infrared and visible images
The most difficult part of the design is choosing an appropriate strategy to generate the fused image for the specific task at hand.
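The fusion-strategy choice that RFN-Nest highlights can be made concrete with two classic hand-crafted feature-fusion rules. This is a minimal sketch for illustration only; the function names are assumptions and the inputs stand in for co-registered feature maps from the two modalities.

```python
import numpy as np

def fuse_average(ir_feat, vis_feat):
    # Elementwise mean: retains both modalities but can wash out contrast.
    return 0.5 * (ir_feat + vis_feat)

def fuse_max(ir_feat, vis_feat):
    # Elementwise max: keeps the strongest response from either modality,
    # favoring salient targets (e.g. hot objects in the infrared channel).
    return np.maximum(ir_feat, vis_feat)

ir_feat = np.array([[0.9, 0.1], [0.2, 0.8]])
vis_feat = np.array([[0.3, 0.7], [0.6, 0.4]])
print(fuse_average(ir_feat, vis_feat))  # [[0.6 0.4] [0.4 0.6]]
print(fuse_max(ir_feat, vis_feat))      # [[0.9 0.7] [0.6 0.8]]
```

End-to-end approaches such as RFN-Nest replace rules like these with a learned residual fusion network so the strategy itself is optimized for the target task.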