Infrared and visible image fusion using Latent Low-Rank Representation

24 Apr 2018 · Hui Li, Xiao-Jun Wu

Infrared and visible image fusion is an important problem in image fusion and has been widely applied in many fields. To better preserve the useful information from the source images, in this paper we propose a novel image fusion method based on latent low-rank representation (LatLRR), which is simple and effective. Firstly, the source images are decomposed into low-rank parts (global structure) and salient parts (local structure) by LatLRR. Then, the low-rank parts are fused by a weighted-average strategy to preserve more contour information, and the salient parts are fused by a simple sum strategy, which is an efficient operation in this fusion framework. Finally, the fused image is obtained by combining the fused low-rank part and the fused salient part. Experimental comparisons with other fusion methods show that the proposed method achieves better fusion performance than state-of-the-art methods in both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion_Infrared_visible_latlrr
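The sketch below illustrates the fusion pipeline described in the abstract: decompose each source image into a low-rank part and a salient part, fuse the low-rank parts by a weighted average, fuse the salient parts by a sum, and add the two fused parts. It is a minimal illustration, not the authors' MATLAB implementation: the `lowrank_saliency_split` helper uses a truncated SVD as a stand-in for the actual LatLRR solver, and the equal weights `w_ir = w_vis = 0.5` are an assumption.

```python
import numpy as np

def lowrank_saliency_split(img, rank=20):
    """Stand-in for the paper's LatLRR decomposition.

    The real method solves a latent low-rank representation problem to get
    a low-rank (global structure) part and a salient (local structure) part.
    Here a truncated SVD provides the low-rank part and the residual plays
    the role of the salient part, purely for illustration.
    """
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    salient = img - low_rank
    return low_rank, salient

def fuse_latlrr(ir, vis, w_ir=0.5, w_vis=0.5):
    """Fuse an infrared and a visible image following the described pipeline:
    weighted average of low-rank parts, sum of salient parts, then combine."""
    lr_ir, sal_ir = lowrank_saliency_split(ir)
    lr_vis, sal_vis = lowrank_saliency_split(vis)
    fused_lr = w_ir * lr_ir + w_vis * lr_vis   # weighted-average strategy
    fused_sal = sal_ir + sal_vis               # sum strategy
    return np.clip(fused_lr + fused_sal, 0.0, 1.0)

if __name__ == "__main__":
    # Random placeholders for registered grayscale source images in [0, 1].
    ir = np.random.rand(256, 256)
    vis = np.random.rand(256, 256)
    fused = fuse_latlrr(ir, vis)
    print(fused.shape, float(fused.min()), float(fused.max()))
```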
