Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

CVPR 2017  ·  Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee

Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, as blurs arise not only from multiple object motions but also from camera shake and scene depth variation. To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simple assumptions, such as the blur kernel being partially uniform or locally linear. Moreover, recent machine-learning-based methods also depend on synthetic blur datasets generated under these assumptions. This causes conventional deblurring methods to fail where the blur kernel is difficult to approximate or parameterize (e.g., at object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner when blur is caused by various sources. We also present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry images and corresponding ground-truth sharp images obtained with a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring, both qualitatively and quantitatively.
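The multi-scale loss mentioned in the abstract can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions: the function and helper names are hypothetical, downsampling is simple average pooling, and the paper's actual loss is computed inside a CNN over RGB image pyramids during training.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2D image by an integer factor (hypothetical helper
    standing in for the pyramid construction used during training)."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

def multi_scale_mse_loss(predictions, sharp, scales=(1, 2, 4)):
    """Average of per-scale MSE terms: each scale's predicted image is
    compared against the ground-truth sharp image downsampled to match,
    mimicking a coarse-to-fine objective."""
    total = 0.0
    for pred, factor in zip(predictions, scales):
        gt = downsample(sharp, factor)
        total += np.mean((pred - gt) ** 2)
    return total / len(scales)
```

A prediction pyramid that exactly matches the downsampled ground truth yields zero loss; errors at any scale contribute to the objective, which encourages the network to restore structure at coarse resolutions before refining fine detail.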


Datasets


Introduced in the Paper:

GoPro

Used in the Paper:

RealBlur, HIDE

Results from the Paper


Task              Dataset                        Model       Metric       Value    Global Rank
Image Deblurring  GoPro                          Nah et al.  PSNR         29.08    # 47
Image Deblurring  GoPro                          Nah et al.  SSIM         0.9135   # 43
Deblurring        GoPro                          Nah et al.  PSNR         29.08    # 50
Deblurring        GoPro                          Nah et al.  SSIM         0.9135   # 51
Deblurring        HIDE (trained on GoPro)        Nah et al.  PSNR (sRGB)  25.73    # 25
Deblurring        RealBlur-R (trained on GoPro)  Nah et al.  SSIM (sRGB)  0.841    # 18

Methods


No methods listed for this paper.