Discriminative Learning of Iteration-Wise Priors for Blind Deconvolution

CVPR 2015  ·  Wangmeng Zuo, Dongwei Ren, Shuhang Gu, Liang Lin, Lei Zhang

The maximum a posteriori (MAP)-based blind deconvolution framework generally involves two stages: blur kernel estimation and non-blind restoration. For blur kernel estimation, sharp edge prediction and carefully designed image priors are vital to the success of MAP. In this paper, we propose a blind deconvolution framework with iteration-specific priors for better blur kernel estimation. The family of hyper-Laplacian distributions $\left( \Pr(\mathbf{d}) \propto e^{-\|\mathbf{d}\|_p^p / \lambda} \right)$ is adopted to model iteration-wise priors on image gradients, where each iteration has its own model parameters $\{\lambda^{(t)}, p^{(t)}\}$. To avoid heavy parameter tuning, all iteration-wise model parameters can be learned from a training set with our principled discriminative learning model, and then directly applied to other datasets and real blurry images. Interestingly, with the generalized shrinkage/thresholding operator, negative values of $p$ $(p < 0)$ are allowable, and we find that they contribute more to estimating the coarse shape of the blur kernel. Experimental results on synthetic and real-world images demonstrate that our method achieves better deblurring results than existing gradient prior-based methods. Compared with the state-of-the-art patch prior-based method, our method is competitive in restoration quality but much more efficient.
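As a rough illustration of the key building block, the sketch below implements a generalized shrinkage/thresholding (GST) step for the element-wise problem $\min_x \tfrac{1}{2}(x - y)^2 + \lambda |x|^p$, following the standard GST scheme for $0 < p < 1$ that this paper extends to negative $p$. The function name, the fixed-point iteration count, and the use of NumPy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gst(y, lam, p, n_iters=10):
    """Generalized shrinkage/thresholding applied element-wise to y.

    Solves (approximately) min_x 0.5*(x - y)^2 + lam*|x|^p for each entry.
    A minimal sketch of the standard GST operator for 0 < p < 1; the paper's
    discriminative framework learns (lam, p) per iteration and allows p < 0.
    """
    y = np.asarray(y, dtype=float)
    # Threshold below which the minimizer is set to zero.
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    x = np.zeros_like(y)
    mask = np.abs(y) > tau
    if np.any(mask):
        ay = np.abs(y[mask])
        t = ay.copy()                       # initialize at |y|
        for _ in range(n_iters):            # fixed-point iteration (count is arbitrary)
            t = ay - lam * p * t ** (p - 1.0)
        x[mask] = np.sign(y[mask]) * t
    return x
```

In an iteration-wise setting, one would call this operator at iteration $t$ with the learned pair $(\lambda^{(t)}, p^{(t)})$ when updating the latent image gradients, rather than reusing a single hand-tuned prior across all iterations.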
