[Re] Spatial-Adaptive Network for Single Image Denoising

Scope of Reproducibility
The original paper proposes an encoder-decoder network that exploits a residual spatial-adaptive block and a context block to capture multi-scale information, achieving state-of-the-art results on real and synthetic noise removal.
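To make the multi-scale idea concrete, the sketch below shows one common way a context block can aggregate multi-scale information: parallel dilated convolutions fused by a 1x1 convolution with a residual connection. The dilation rates and fusion scheme here are illustrative assumptions, not the exact SADNet design.

```python
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    """Illustrative multi-scale context block (assumed design, not the
    exact SADNet block): parallel dilated 3x3 convolutions whose outputs
    are concatenated and fused by a 1x1 convolution, plus a residual
    connection."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        # Each branch sees a different receptive field via its dilation rate.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(feats)  # residual connection

x = torch.randn(1, 32, 64, 64)
y = ContextBlock(32)(x)
print(y.shape)  # torch.Size([1, 32, 64, 64])
```

Because `padding` equals `dilation` for a 3x3 kernel, every branch preserves the spatial resolution, so the branches can be concatenated directly.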

We have implemented the model, named SADNet, from scratch in PyTorch as described in the paper, and also adopted the training loop and the proposed blocks from the author's code. Since the weight initialization of the proposed blocks was not explicitly defined in the paper, we decided to use PyTorch's default initialization for convolutional layers (i.e. Kaiming). Each experiment completed on a single RTX 2080 Ti in 3 days; training requires ∼3 GB of GPU memory and ∼8 GB of CPU memory for loading the data, due to the file structure of the datasets.
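For readers who want to reproduce the initialization explicitly, the helper below writes out what PyTorch's default `Conv2d` initialization does (Kaiming-uniform on the weights with `a=sqrt(5)`, and a uniform bias bounded by the fan-in); `init_like_pytorch_default` is our illustrative name, not part of the author's code.

```python
import math
import torch.nn as nn

def init_like_pytorch_default(module):
    """Explicitly reproduce PyTorch's default Conv2d initialization
    (Kaiming-uniform with a=sqrt(5)); applying it is a no-op relative
    to freshly constructed layers, but makes the scheme visible."""
    if isinstance(module, nn.Conv2d):
        nn.init.kaiming_uniform_(module.weight, a=math.sqrt(5))
        if module.bias is not None:
            fan_in, _ = nn.init._calculate_fan_in_and_fan_out(module.weight)
            bound = 1 / math.sqrt(fan_in)
            nn.init.uniform_(module.bias, -bound, bound)

# Hypothetical small model, just to demonstrate usage.
model = nn.Sequential(nn.Conv2d(3, 32, 3), nn.ReLU(), nn.Conv2d(32, 3, 3))
model.apply(init_like_pytorch_default)
```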

We were able to reproduce the results qualitatively and quantitatively on both synthetic and real noise removal tasks. SADNet has the capacity to learn to remove synthetic and real noise in images, and it produces visually plausible outputs even after a few epochs. Moreover, we employed the SSIM and PSNR metrics to measure quantitative performance for all settings. The quantitative results on both tasks are on par with those reported in the paper.
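PSNR is simple enough to state directly; the sketch below computes it for images scaled to [0, 1] (SSIM is more involved and is typically taken from an existing implementation such as `skimage.metrics.structural_similarity` rather than re-derived).

```python
import torch

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE),
    for images with pixel values in [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, hence PSNR = 20 dB.
a = torch.zeros(1, 3, 8, 8)
b = torch.full((1, 3, 8, 8), 0.1)
print(psnr(b, a))  # tensor(20.)
```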

What was easy
The code was open-source and implemented in PyTorch, so adopting the training loop and the proposed blocks into our implementation facilitated our reproduction study. The loss function is straightforward and the architecture has a U-Net-like structure, so we were able to implement the architecture in a reasonable time.

What was difficult
Due to incompatibility with current versions of PyTorch and TorchVision, and a dependency on an external CUDA implementation of deformable convolutions, we encountered several issues during our implementation. We considered re-implementing the residual spatial-adaptive block and the context block from scratch to remove these dependencies; however, we could not manage it in the limited time by referring to the paper alone. Therefore, we decided to use the blocks directly as provided in the author's code.

Communication with original authors
We did not contact the authors, since we were able to resolve the issues encountered during the implementation of SADNet by examining the author's code.
