Mixed noise reduction via sparse error constraint representation of high frequency image for wildlife image

Wildlife image denoising is a difficult and challenging problem because images captured in complex field environments are inevitably corrupted by mixed noise. Most existing denoising methods focus on removing noise that has been added artificially to clean images; the noise in wildlife images taken in the field, however, is random mixed noise, so existing noise reduction algorithms are poorly suited to denoising wildlife images. In this paper, we propose a novel mixed noise reduction method based on sparse error constraint representation for removing noise from wildlife images. First, we use the 2D-DCT to decompose a noisy image into a high-frequency image and a low-frequency image, and rank the 2D-DCT coefficients using zig-zag ordering. Because the high-frequency image contains most of the noise, a dictionary learning model of the high-frequency image is established to recover images corrupted by mixed noise. The sparse error term describes the error between the sparse coefficients of the original image and those obtained by the error constraint method. Since the resulting objective function is a nonconvex, non-smooth minimization problem, we solve it with the proximal alternating linearized minimization (PALM) algorithm. To update the dictionary, we apply an lp-l1-norm term for sparse coding to obtain the optimal solution for the sparse coefficients. Experimental results show that the proposed method achieves good noise reduction on both noisy images recorded in the wild and images artificially corrupted by mixed noise, while retaining more details of the wildlife objects in the restored images.
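
The abstract's first step, decomposing a noisy image with the 2D-DCT and separating low- and high-frequency content along the zig-zag order, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a square grayscale image, and the `keep_ratio` cutoff that decides where the low-frequency band ends is a hypothetical parameter, since the paper's exact split criterion is not given here.

```python
# Minimal sketch: 2D-DCT decomposition of a noisy image into a low-frequency
# image and a high-frequency image, with coefficients ranked in zig-zag order.
import numpy as np
from scipy.fft import dctn, idctn


def zigzag_indices(n):
    """Return (row, col) pairs of an n x n coefficient grid in zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))


def dct_low_high_split(image, keep_ratio=0.1):
    """Split a square image into low- and high-frequency images via the 2D-DCT.

    The first `keep_ratio` fraction of coefficients in zig-zag order forms the
    low-frequency image; the remainder forms the high-frequency image, which
    carries most of the mixed noise and is the part passed to the dictionary
    learning model. `keep_ratio` is an assumed, illustrative threshold.
    """
    n = image.shape[0]
    coeffs = dctn(image, norm='ortho')          # 2D-DCT of the noisy image
    order = zigzag_indices(n)
    cutoff = int(keep_ratio * len(order))

    low = np.zeros_like(coeffs)
    for r, c in order[:cutoff]:                 # keep low-frequency coefficients
        low[r, c] = coeffs[r, c]
    high = coeffs - low                         # remaining high-frequency coefficients

    low_img = idctn(low, norm='ortho')          # low-frequency image
    high_img = idctn(high, norm='ortho')        # high-frequency image to be denoised
    return low_img, high_img
```

The low-frequency image is kept largely untouched, while the high-frequency image would then be fed to the sparse error constraint dictionary learning stage described above.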

