MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion

21 Sep 2020  ·  Yicheng Wang, Shuang Xu, Junmin Liu, Zixiang Zhao, Chun-Xia Zhang, Jiangshe Zhang ·

Multi-Focus Image Fusion (MFIF) is a promising image enhancement technique for obtaining all-in-focus images that meet visual needs, and it is a precondition for other computer vision tasks. One research trend in MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB). In this paper, we propose a network termed MFIF-GAN to attenuate the DSE by generating focus maps in which the foreground regions are correctly larger than the corresponding objects. The Squeeze-and-Excitation Residual module is employed in the network. By exploiting prior knowledge about the training conditions, the network is trained on a synthetic dataset based on an α-matte model. In addition, reconstruction and gradient regularization terms are combined in the loss function to enhance boundary details and improve the quality of the fused images. Extensive experiments demonstrate that MFIF-GAN outperforms several state-of-the-art (SOTA) methods in visual perception and quantitative analysis as well as efficiency. Moreover, an edge diffusion and contraction module is proposed for the first time to verify that the focus maps generated by our method are accurate at the pixel level.
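The abstract mentions combining reconstruction and gradient regularization terms in the loss to sharpen boundary details. The paper's exact formulation is not given here, but a minimal NumPy sketch of such a combined objective might look as follows; the function name `fusion_loss`, the ℓ1 form of both terms, and the weight `lam` are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def fusion_loss(fused, reference, lam=0.5):
    """Hedged sketch: an l1 reconstruction term plus a gradient
    regularization term penalizing differences between image
    gradients, which encourages sharp boundary details.
    `lam` is a hypothetical weighting; the paper may use a
    different form and balance for these terms."""
    # Pixel-wise reconstruction error (mean absolute difference)
    recon = np.abs(fused - reference).mean()
    # Image gradients along rows (axis 0) and columns (axis 1)
    gy_f, gx_f = np.gradient(fused)
    gy_r, gx_r = np.gradient(reference)
    # Gradient-regularization term: match edge structure
    grad = (np.abs(gy_f - gy_r) + np.abs(gx_f - gx_r)).mean()
    return recon + lam * grad
```

A perfectly reconstructed image gives zero loss, while blurring or misaligned edges inflate the gradient term even when average intensities are close, which is why such a term helps around the focus/defocus boundary.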


Datasets


Introduced in the Paper:

alpha-matte MFIF dataset

Used in the Paper:

PASCAL VOC

Results from the Paper



Methods