Denoising
1907 papers with code • 5 benchmarks • 20 datasets
Denoising is a task in image processing and computer vision that aims to remove or reduce noise from an image. Noise can be introduced into an image for various reasons, such as camera sensor limitations, lighting conditions, and compression artifacts. The goal of denoising is to recover the original, noise-free image from a noisy observation.
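The setup can be illustrated with a minimal sketch: corrupt a synthetic noise-free image with additive Gaussian noise, then apply a classical denoiser (here, Gaussian smoothing from SciPy, used as a stand-in for the learned methods listed below) and compare reconstruction error.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic noise-free image: a smooth gradient (stand-in for a real photo).
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
clean = 0.5 * (x + y)

# Additive Gaussian noise, a common corruption model.
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

# A classical baseline denoiser: Gaussian smoothing trades detail for noise suppression.
denoised = gaussian_filter(noisy, sigma=1.5)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

print(mse(noisy, clean), mse(denoised, clean))  # denoising lowers the error
```

Learned denoisers aim for the same objective but preserve far more detail than simple smoothing.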
(Image credit: Beyond a Gaussian Denoiser)
Most implemented papers
TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning
Learning sentence embeddings often requires a large amount of labeled data.
Assessment of Data Consistency through Cascades of Independently Recurrent Inference Machines for fast and robust accelerated MRI reconstruction
Machine Learning methods can learn how to reconstruct Magnetic Resonance Images and thereby accelerate acquisition, which is of paramount importance to the clinical workflow.
Pseudo Numerical Methods for Diffusion Models on Manifolds
From this perspective, we propose pseudo numerical methods for diffusion models (PNDMs).
LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement
In surveillance, monitoring, and tactical reconnaissance, gathering the right visual information from a dynamic environment and accurately processing such data are essential to making the informed decisions that determine the success of an operation.
Noise2Void - Learning Denoising from Single Noisy Images
The field of image denoising is currently dominated by discriminative deep learning methods that are trained on pairs of noisy input and clean target images.
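Noise2Void instead trains on single noisy images by masking pixels and predicting them from their surroundings, so no clean targets are needed. A minimal NumPy sketch of that masking step, under the assumption of pixel-wise independent noise (function name and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def blindspot_mask(noisy, n_masked=32):
    """Replace random pixels with a random neighbour so the network
    never sees the true value at a masked location; those positions
    then serve as self-supervised regression targets."""
    masked = noisy.copy()
    h, w = noisy.shape
    ys = rng.integers(1, h - 1, n_masked)
    xs = rng.integers(1, w - 1, n_masked)
    for yy, xx in zip(ys, xs):
        dy, dx = 0, 0
        while dy == 0 and dx == 0:  # never copy the pixel onto itself
            dy, dx = rng.integers(-1, 2, 2)
        masked[yy, xx] = noisy[yy + dy, xx + dx]
    return masked, list(zip(ys, xs))

noisy = rng.normal(size=(64, 64))
masked, coords = blindspot_mask(noisy)
```

A network trained to predict `noisy[y, x]` from `masked` at these coordinates learns to denoise, because the independent noise at the blind spot is unpredictable from context.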
Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data
This is the first time that copying words from the source context and fully pre-training a sequence-to-sequence model have been explored for the GEC task.
Index Network
By viewing the indices as a function of the feature map, we introduce the concept of "learning to index", and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the downsampling and upsampling stages, without extra training supervision.
Pre-Trained Image Processing Transformer
To fully exploit the capability of the transformer, we propose to use the well-known ImageNet benchmark to generate a large number of corrupted image pairs.
An End-to-End Compression Framework Based on Convolutional Neural Networks
The second CNN, named reconstruction convolutional neural network (RecCNN), is used to reconstruct the decoded image with high quality at the decoding end.
Multi-level Wavelet-CNN for Image Restoration
In a modified U-Net architecture, the wavelet transform is introduced to reduce the size of feature maps in the contracting subnetwork.
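The appeal of wavelet downsampling over pooling is that it is exactly invertible, so no information is discarded. A self-contained sketch of one level of the 2-D Haar transform (hand-rolled here for illustration; MWCNN's actual layers differ):

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar wavelet transform: splits an HxW map
    into four (H/2)x(W/2) subbands, halving resolution losslessly."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    LL = (a + b + c + d) / 2   # low-frequency approximation
    LH = (a - b + c - d) / 2   # horizontal detail
    HL = (a + b - c - d) / 2   # vertical detail
    HH = (a - b - c + d) / 2   # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2: unlike pooling, the transform
    can be undone perfectly in the expanding subnetwork."""
    h, w = LL.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (LL + LH + HL + HH) / 2
    x[0::2, 1::2] = (LL - LH + HL - HH) / 2
    x[1::2, 0::2] = (LL + LH - HL - HH) / 2
    x[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return x

feat = np.random.default_rng(0).normal(size=(64, 64))
subbands = haar_dwt2(feat)
recon = haar_idwt2(*subbands)
```

Stacking the four subbands as channels gives the contracting subnetwork a half-resolution input with the full information content of the original feature map.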