Multiscale Sparsifying Transform Learning for Image Denoising

Data-driven sparse methods such as synthesis dictionary learning (e.g., K-SVD) and sparsifying transform learning have proven effective in image denoising. However, they are intrinsically single-scale, which can lead to suboptimal results. We propose two methods, based on wavelet subband mixing, that efficiently combine the merits of single- and multiscale approaches. We show that an efficient multiscale method can be devised without denoising the detail subbands, which substantially reduces the runtime. The proposed methods are first derived within the framework of sparsifying transform learning denoising and then generalized to yield multiscale extensions of the well-known K-SVD and SAIST image denoising methods. We analyze and assess the studied methods thoroughly and compare them with well-known and state-of-the-art methods. The experiments show that our methods offer good trade-offs between performance and complexity.
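The abstract's core idea of subband mixing can be sketched as follows. This is a minimal illustration of one plausible reading, not the paper's actual algorithm: the image is denoised once at full scale, the approximation (LL) subband of the noisy image is denoised separately, and the two results are mixed in the wavelet domain so that the detail subbands never need their own denoising pass. The single-level Haar transform and the `denoise` placeholder (standing in for any single-scale denoiser such as K-SVD) are assumptions for the sketch.

```python
import numpy as np

def haar_dwt2(x):
    # Single-level 2D Haar decomposition into (LL, LH, HL, HH) subbands.
    # Averaging normalization, so the inverse below reconstructs exactly.
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row-pair averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x

def subband_mixing_denoise(noisy, denoise):
    # Hypothetical subband-mixing scheme (illustrative only):
    # 1) one single-scale pass over the full image,
    base = denoise(noisy)
    # 2) denoise only the approximation subband of the noisy image
    #    (the detail subbands are never denoised, saving runtime),
    ll_noisy, _, _, _ = haar_dwt2(noisy)
    ll_den = denoise(ll_noisy)
    # 3) mix: approximation from the coarse-scale pass, details from
    #    the full-scale denoised image.
    _, lh, hl, hh = haar_dwt2(base)
    return haar_idwt2(ll_den, lh, hl, hh)
```

Note that the Haar averaging halves the noise standard deviation in the LL band at each level, so in practice the coarse-scale call to `denoise` would use a correspondingly smaller noise parameter.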
