Infrared and Visible Image Fusion
30 papers with code • 0 benchmarks • 4 datasets
Image fusion with paired infrared and visible images
Latest papers
Infrared and Visible Image Fusion with Language-driven Loss in CLIP Embedding Space
A language-driven fusion model is then constructed in the embedding space by establishing relationships among the embedded vectors to represent the fusion objective and the input image modalities.
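As a rough illustration of the idea, here is a minimal sketch of a language-driven fusion loss, assuming OpenAI's `clip` package; the prompt text and the cosine-distance form are illustrative assumptions, not the paper's exact objective.

```python
# Minimal sketch: pull the fused image's CLIP embedding toward a text
# description of the desired fusion result. Assumes OpenAI's `clip` package
# (https://github.com/openai/CLIP); the prompt is a hypothetical example.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def language_driven_loss(fused_batch,
                         prompt="an image with salient thermal targets and rich texture"):
    # fused_batch: (B, 3, 224, 224), already resized/normalized for CLIP input.
    image_emb = model.encode_image(fused_batch)
    text_emb = model.encode_text(clip.tokenize([prompt]).to(device))
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    return (1 - image_emb @ text_emb.T).mean()  # cosine distance to the prompt
```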
A Multi-scale Information Integration Framework for Infrared and Visible Image Fusion
In this study, we propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion, which measures and integrates complementary information, in both the network structure and the loss function, at the image and patch levels.
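A minimal PyTorch sketch of the dual-attention idea follows, with channel and spatial attention applied at two scales; the module names and two-scale design are illustrative assumptions, not the MDA authors' code.

```python
# Minimal sketch of multi-scale dual (channel + spatial) attention for
# weighting complementary features from two modalities. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from global average pooling.
        w_c = self.channel_fc(x.mean(dim=(2, 3)))[:, :, None, None]
        x = x * w_c
        # Spatial attention from pooled channel statistics.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

class MultiScaleFusion(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.enc = nn.Conv2d(1, channels, 3, padding=1)
        self.att = DualAttention(channels)
        self.dec = nn.Conv2d(2 * channels, 1, 3, padding=1)

    def forward(self, ir, vis):
        # Assumes even spatial dimensions so the upsampled scale matches.
        feats = []
        for x in (ir, vis):
            f = self.enc(x)
            f1 = self.att(f)                                  # full resolution
            f2 = self.att(F.avg_pool2d(f, 2))                 # half resolution
            feats.append(f1 + F.interpolate(f2, scale_factor=2, mode="nearest"))
        return torch.sigmoid(self.dec(torch.cat(feats, dim=1)))
```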
PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation
We first conduct systematic analyses of the components of image fusion, investigating their correlation with segmentation robustness under adversarial perturbations.
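For context, a single-step FGSM perturbation is a standard way to probe such robustness; the sketch below assumes a hypothetical segmentation model `seg_net` and is not PAIF's specific attack setup.

```python
# Minimal FGSM sketch for probing how segmentation on a fused image degrades
# under adversarial perturbation. `seg_net` is a hypothetical model.
import torch
import torch.nn.functional as F

def fgsm_perturb(seg_net, fused, labels, eps=4 / 255):
    # fused: (B, 3, H, W) in [0, 1]; labels: (B, H, W) class indices.
    fused = fused.clone().requires_grad_(True)
    loss = F.cross_entropy(seg_net(fused), labels)
    loss.backward()
    # One signed-gradient step, clamped back to the valid image range.
    return (fused + eps * fused.grad.sign()).clamp(0, 1).detach()
```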
Learning a Graph Neural Network with Cross Modality Interaction for Image Fusion
Finally, we merge all graph features to get the fusion result.
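A minimal sketch of this merge step, treating image patches from each modality as graph nodes and using a uniform cross-modal adjacency as a stand-in for learned interaction; the patch-graph construction is an illustrative assumption.

```python
# Minimal sketch: patches become graph nodes, one message-passing step mixes
# features across modalities, and all node features are merged. Pure PyTorch.
import torch
import torch.nn as nn

class CrossModalGraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, nodes, adj):
        # nodes: (N, dim); adj: (N, N) row-normalized adjacency.
        m = adj @ self.msg(nodes)  # aggregate neighbor messages
        return torch.relu(self.upd(torch.cat([nodes, m], dim=-1)))

# Toy usage: 16 patches per modality on a fully connected cross-modal graph.
ir_nodes = torch.randn(16, 64)
vis_nodes = torch.randn(16, 64)
nodes = torch.cat([ir_nodes, vis_nodes], dim=0)   # (32, 64)
adj = torch.full((32, 32), 1.0 / 32)              # uniform stand-in adjacency
fused_nodes = CrossModalGraphLayer(64)(nodes, adj)
fused_feature = fused_nodes.mean(dim=0)           # merge all graph features
```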
Cross-Modal Transformers for Infrared and Visible Image Fusion
In this work, we propose a cross-modal transformer-based fusion (CMTFusion) algorithm for infrared and visible image fusion that captures global interactions by faithfully extracting complementary information from source images.
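In spirit, the cross-modal interaction can be sketched as two cross-attention streams in which each modality queries the other; the token shapes and single-block design below are assumptions, not the CMTFusion implementation.

```python
# Minimal sketch of cross-modal attention between infrared and visible tokens,
# capturing global interactions across the two source images.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.ir_from_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.vis_from_ir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_ir = nn.LayerNorm(dim)
        self.norm_vis = nn.LayerNorm(dim)

    def forward(self, ir_tok, vis_tok):
        # ir_tok, vis_tok: (B, N, dim) flattened feature-map tokens.
        ir2, _ = self.ir_from_vis(ir_tok, vis_tok, vis_tok)   # IR queries VIS
        vis2, _ = self.vis_from_ir(vis_tok, ir_tok, ir_tok)   # VIS queries IR
        return self.norm_ir(ir_tok + ir2), self.norm_vis(vis_tok + vis2)

block = CrossModalBlock()
ir_out, vis_out = block(torch.randn(2, 256, 64), torch.randn(2, 256, 64))
```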
An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection
Their common characteristic of seeking complementary cues from different source images motivates us to explore, for the first time, the collaborative relationship between fusion and salient object detection on infrared and visible images via an interactively reinforced multi-task paradigm, termed IRFS.
Multi-modal Gated Mixture of Local-to-Global Experts for Dynamic Image Fusion
The MoLE performs specialized learning of multi-modal local features, prompting the fused images to retain local information in a sample-adaptive manner, while the MoGE focuses on global information that complements the fused image with overall texture detail and contrast.
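A minimal sketch of gating between a local (convolutional) expert and a global (pooled-context) expert, with a per-sample softmax gate; the architecture details are illustrative assumptions, not the paper's modules.

```python
# Minimal sketch: a sample-adaptive gate weighs a local-detail expert against
# a global-context expert before fusing their outputs.
import torch
import torch.nn as nn

class GatedLocalGlobalFusion(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.local_expert = nn.Conv2d(channels, channels, 3, padding=1)
        self.global_expert = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1))
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, 2, 1))

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=1)           # (B, 2, 1, 1) per-sample weights
        local = self.local_expert(x)                     # fine local detail
        global_ctx = self.global_expert(x).expand_as(x)  # overall contrast/context
        return w[:, :1] * local + w[:, 1:] * global_ctx

out = GatedLocalGlobalFusion()(torch.randn(2, 32, 64, 64))
```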
MetaFusion: Infrared and Visible Image Fusion via Meta-Feature Embedding From Object Detection
Conversely, the detection task furnishes object semantic information to improve infrared and visible image fusion.
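One hedged way to picture detection-guided fusion is a semantic consistency loss through a frozen detection backbone; the max-response target below is an illustrative stand-in, not MetaFusion's meta-feature embedding, and `backbone` is a hypothetical feature extractor.

```python
# Minimal sketch: features of the fused image from a (frozen) detection
# backbone are encouraged to match the stronger of the two source responses,
# injecting object semantics into fusion. Illustrative assumption only.
import torch
import torch.nn.functional as F

def detection_semantic_loss(backbone, fused, ir, vis):
    with torch.no_grad():
        # Stronger per-location response across the two sources as the target.
        target = torch.maximum(backbone(ir), backbone(vis))
    return F.mse_loss(backbone(fused), target)
```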
Breaking Free from Fusion Rule: A Fully Semantic-driven Infrared and Visible Image Fusion
To address these challenges, in this letter we develop a semantic-level fusion network that fully exploits semantic guidance, freeing the fusion process from empirically designed fusion rules.
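A minimal sketch of such rule-free, semantics-driven training: the fusion network is optimized solely through a downstream segmentation loss; `fusion_net`, `seg_net`, and the data arguments are hypothetical placeholders.

```python
# Minimal sketch: no hand-crafted fusion rule; the semantic loss of a
# segmentation head is backpropagated into the fusion network.
import torch
import torch.nn.functional as F

def semantic_driven_step(fusion_net, seg_net, ir, vis, seg_labels, optimizer):
    fused = fusion_net(ir, vis)              # no explicit fusion rule applied
    logits = seg_net(fused)                  # downstream semantic prediction
    loss = F.cross_entropy(logits, seg_labels)
    optimizer.zero_grad()
    loss.backward()                          # gradients flow back into fusion_net
    optimizer.step()
    return loss.item()
```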
CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion
Infrared and visible image fusion aims to provide an informative image by combining complementary information from different sensors.
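A minimal sketch of a coupled contrastive objective in this spirit, using an InfoNCE form as a stand-in for CoCoNet's exact losses: the fused feature is pulled toward both source features (positives) and away from other samples in the batch (negatives).

```python
# Minimal sketch of a coupled contrastive loss for fusion. The InfoNCE form
# and the batch-negative scheme are illustrative assumptions.
import torch
import torch.nn.functional as F

def fusion_contrastive_loss(fused_f, ir_f, vis_f, temperature=0.1):
    # All inputs: (B, D) feature vectors, one row per sample.
    fused_f = F.normalize(fused_f, dim=-1)
    loss = 0.0
    for src in (F.normalize(ir_f, dim=-1), F.normalize(vis_f, dim=-1)):
        logits = fused_f @ src.T / temperature                    # (B, B) similarities
        targets = torch.arange(fused_f.size(0), device=fused_f.device)  # diagonal = positives
        loss = loss + F.cross_entropy(logits, targets)
    return loss / 2

loss = fusion_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
```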