Residual Learning for Effective Joint Demosaicking-Denoising

14 Sep 2020  ·  Yu Guo, Qiyu Jin, Gabriele Facciolo, Tieyong Zeng, Jean-Michel Morel

Image demosaicking and denoising are the two key steps in the color image production pipeline. The classical processing sequence applies denoising first and demosaicking second. However, this sequence leads to oversmoothing and an unpleasant checkerboard effect. Moreover, it is very difficult to reverse this order, because once the image is demosaicked, the statistical properties of the noise change dramatically. This is extremely challenging for traditional denoising models that rely strongly on statistical assumptions. In this paper, we tackle this thorny problem by inverting the traditional CFA processing pipeline: we first demosaick and then denoise. In the first stage, we design a demosaicking algorithm that combines traditional methods with a convolutional neural network (CNN) to reconstruct a full-color image while ignoring the noise. To improve demosaicking performance, we modify an Inception architecture to fuse the information of the R, G and B channels. This stage retains all known information, which is the key to obtaining pleasing final results. After demosaicking, we obtain a noisy full-color image and use another CNN to learn its demosaicking residual noise (including artifacts), which allows us to recover a restored full-color image. Our proposed algorithm completely avoids the checkerboard effect and retains more image detail. Furthermore, it can handle very high noise levels, whereas the performance of other CNN-based methods is rather limited for noise levels above 20. Experimental results clearly show that our method outperforms state-of-the-art methods both quantitatively and in terms of visual quality.
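The two-stage pipeline described above can be sketched in plain NumPy. This is not the authors' method: the paper's first stage is a CNN with Inception-style channel fusion, which is stood in for here by classical bilinear demosaicking via normalized convolution, and the second-stage residual CNN is stood in for by a placeholder `residual_net` callable (all function names are illustrative assumptions). The sketch only illustrates the ordering — demosaick the RGGB Bayer mosaic first, then subtract a learned residual (noise plus artifacts) from the full-color result.

```python
import numpy as np

# Bilinear interpolation weights for a 3x3 neighborhood.
BILINEAR = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5],
                     [0.25, 0.5, 0.25]])

def conv2(x, k):
    """Direct 2D correlation with zero padding (k is symmetric here)."""
    pad = k.shape[0] // 2
    xp = np.pad(x, pad)
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def bayer_masks(h, w):
    """Sampling masks for an RGGB Bayer pattern (assumed layout)."""
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = 1
    g[0::2, 1::2] = 1
    g[1::2, 0::2] = 1
    b[1::2, 1::2] = 1
    return r, g, b

def demosaick_bilinear(mosaic):
    """Stage 1 stand-in: bilinear demosaicking by normalized convolution.
    Each channel is interpolated from its sparse CFA samples."""
    h, w = mosaic.shape
    channels = []
    for m in bayer_masks(h, w):
        num = conv2(mosaic * m, BILINEAR)   # weighted sum of known samples
        den = conv2(m, BILINEAR)            # sum of weights actually present
        channels.append(num / den)
    return np.stack(channels, axis=-1)      # (h, w, 3)

def restore(noisy_mosaic, residual_net=lambda img: np.zeros_like(img)):
    """Demosaick first, then subtract the residual predicted by the second
    network (here a zero placeholder for the paper's residual CNN)."""
    demosaicked = demosaick_bilinear(noisy_mosaic)
    return demosaicked - residual_net(demosaicked)
```

On a noise-free constant image the bilinear stage reconstructs the color exactly, so `restore` with the placeholder residual returns the input color; in the paper, `residual_net` would instead be trained to predict the demosaicking residual noise of the noisy full-color image.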

