Infrared And Visible Image Fusion
30 papers with code • 0 benchmarks • 4 datasets
Image fusion with paired infrared and visible images
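As a point of reference for the task, here is a minimal sketch of the simplest fusion baseline: a per-pixel weighted average of a co-registered infrared/visible pair. The function name `fuse_weighted` and the toy arrays are illustrative assumptions, not any listed paper's method.

```python
import numpy as np

def fuse_weighted(ir, vis, alpha=0.5):
    """Fuse a co-registered infrared/visible pair by a per-pixel
    weighted average (the simplest fusion baseline)."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    return alpha * ir + (1 - alpha) * vis

# Toy 2x2 grayscale arrays standing in for a registered IR/visible pair.
ir = np.array([[200, 50], [50, 200]])
vis = np.array([[100, 150], [150, 100]])
fused = fuse_weighted(ir, vis, alpha=0.5)
print(fused)  # → [[150. 100.] [100. 150.]]
```

The methods on this page replace the fixed weight `alpha` with learned, content-adaptive fusion rules.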
Benchmarks
These leaderboards are used to track progress in Infrared And Visible Image Fusion
Latest papers with no code
MMA-UNet: A Multi-Modal Asymmetric UNet Architecture for Infrared and Visible Image Fusion
We separately trained specialized feature encoders for the different modalities and implemented a cross-scale fusion strategy to keep features from different modalities within the same representation space, ensuring a balanced information fusion process.
HDDGAN: A Heterogeneous Dual-Discriminator Generative Adversarial Network for Infrared and Visible Image Fusion
Consequently, fusion outcomes frequently entail a compromise between thermal target area information and texture details.
MaeFuse: Transferring Omni Features with Pretrained Masked Autoencoders for Infrared and Visible Image Fusion via Guided Training
Instead of being driven by downstream tasks, our model utilizes a pretrained encoder from Masked Autoencoders (MAE), which facilitates omni-feature extraction for low-level reconstruction and high-level vision tasks, to obtain perception-friendly features at low cost.
FusionMamba: Dynamic Feature Enhancement for Multimodal Image Fusion with Mamba
In this paper, we propose FusionMamba, a novel dynamic feature enhancement method for multimodal image fusion with Mamba.
Dual-modal Prior Semantic Guided Infrared and Visible Image Fusion for Intelligent Transportation System
Therefore, we propose a novel prior semantic guided image fusion method based on the dual-modality strategy, improving the performance of IVF in ITS.
Decomposition-based and Interference Perception for Infrared and Visible Image Fusion in Complex Scenes
Infrared and visible image fusion has emerged as a prominent research topic in computer vision.
Rethinking Cross-Attention for Infrared and Visible Image Fusion
The DIIM is designed by modifying the vanilla cross-attention mechanism, which promotes the extraction of discrepancy information from the source images.
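For context, the "vanilla" cross-attention that DIIM modifies lets tokens from one modality attend to tokens from the other. The sketch below is a generic single-head NumPy illustration of that baseline mechanism, not the DIIM itself; the token counts, dimension `d`, and weight matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, wq, wk, wv):
    """Vanilla single-head cross-attention: queries come from one
    modality, keys/values from the other."""
    q = q_feats @ wq
    k = kv_feats @ wk
    v = kv_feats @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot-product
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d = 8
ir_tokens = rng.standard_normal((4, d))   # infrared token features
vis_tokens = rng.standard_normal((4, d))  # visible token features
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = cross_attention(ir_tokens, vis_tokens, wq, wk, wv)
print(out.shape)  # (4, 8): one attended feature per infrared token
```

Modifications like DIIM's typically change how the attention map is formed so that differences between the modalities, rather than similarities, drive the interaction.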
From Text to Pixels: A Context-Aware Semantic Synergy Solution for Infrared and Visible Image Fusion
With the rapid progression of deep learning technologies, multi-modality image fusion has become increasingly prevalent in object detection tasks.
Graph Representation Learning for Infrared and Visible Image Fusion
Then, GCNs are performed on the concatenated intra-modal NLss features of the infrared and visible images, which can explore cross-domain inter-modal NLss to reconstruct the fused image.
IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network
Infrared and visible image fusion (IVIF) is used to generate fused images that combine the complementary features of both inputs, which benefits downstream vision tasks.