Multi-scale convolutional neural network for multi-focus image fusion

5 Mar 2019 · Hafiz Tayyab Mustafa, Jie Yang, Masoumeh Zareapoor

In this study, we present a new deep learning (DL) method for fusing multi-focus images. Current DL-based multi-focus image fusion (MFIF) approaches mainly treat MFIF as a classification task, using a convolutional neural network (CNN) as a classifier to label pixels as focused or defocused. However, because labeled data for training such networks are unavailable, existing supervised DL models for MFIF add Gaussian blur to focused images to produce training data, while existing unsupervised DL models are too simple and are applicable only to fusion tasks other than MFIF. To address these issues, we propose a new MFIF method that learns the feature extraction, fusion, and reconstruction components jointly, yielding a fully unsupervised, end-to-end trainable deep CNN. To enhance the feature extraction capability of the CNN, we introduce a Siamese multi-scale feature extraction module: the proposed network applies multi-scale convolutions along with skip connections to extract more useful common features from a multi-focus image pair. Instead of a basic loss function, the model is trained with the structural similarity (SSIM) measure as its loss. Moreover, the fused images are reconstructed in a multi-scale manner to guarantee more accurate restoration. The proposed model can process images of variable size during testing and validation. Experimental results on various test images confirm that the proposed method yields fused images of higher quality than those produced by the state-of-the-art image fusion methods used for comparison.
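The sketch below illustrates the two ideas the abstract highlights: a Siamese multi-scale feature extraction block with a skip connection, and an SSIM-based training loss. This is a minimal, illustrative PyTorch sketch only; the module name `MultiScaleBlock`, the channel counts, kernel sizes, and the use of the third-party `pytorch_msssim` package are assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): Siamese multi-scale feature
# extraction with a skip connection, plus an SSIM-based training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBlock(nn.Module):
    """Extracts features at several kernel sizes and fuses them with a skip connection.
    Channel counts and kernel sizes here are illustrative assumptions."""
    def __init__(self, in_ch=1, out_ch=64):
        super().__init__()
        # Parallel convolutions with different receptive fields (multi-scale).
        self.conv3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.conv7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        # 1x1 convolution to merge the concatenated multi-scale features.
        self.merge = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        # 1x1 skip path so the residual addition has matching channels.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([F.relu(self.conv3(x)),
                           F.relu(self.conv5(x)),
                           F.relu(self.conv7(x))], dim=1)
        return F.relu(self.merge(feats) + self.skip(x))  # skip connection

# Siamese use: the SAME block (shared weights) processes both source images.
# Being fully convolutional, it accepts variable image sizes at test time.
block = MultiScaleBlock()
img_a = torch.rand(1, 1, 128, 128)   # near-focused source
img_b = torch.rand(1, 1, 128, 128)   # far-focused source
feat_a, feat_b = block(img_a), block(img_b)

# SSIM training loss, averaged over both sources. The paper specifies SSIM
# as the loss but not a particular implementation; pytorch_msssim is one option.
# from pytorch_msssim import ssim
# fused = ...  # output of the fusion + reconstruction sub-networks
# loss = 1 - 0.5 * (ssim(fused, img_a, data_range=1.0) +
#                   ssim(fused, img_b, data_range=1.0))
```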
