Motion Aware Double Attention Network for Dynamic Scene Deblurring

Motion deblurring in dynamic scenes is challenging because blur can arise from one or a combination of causes, such as moving objects and camera motion. Because event cameras detect intensity changes with very low latency, the motion information needed for deblurring is inherently captured in event data, making it a valuable complement to standard camera frames. Moreover, blur intensity is not homogeneous across an image; it varies with factors such as object depth and speed. We propose a two-branch network, the Motion Aware Double Attention Network (MADANet), that pays special attention to heavily blurred regions. Within the network, event data is first processed by a high-blur-region segmentation module that produces a probability-like score for areas exhibiting high motion relative to the camera. The event data is then also injected into the feature maps of the main body, where each branch applies a second attention mechanism. The effective use of event data and the two-level attention mechanism keeps the network very compact. Experiments show that the proposed network achieves state-of-the-art performance not only on the GoPro benchmark dataset but also on two newly collected datasets, one of which contains real event data.
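
The paper has no public code, so the following PyTorch sketch only illustrates the two-level attention idea described above: a segmentation module that turns event data into a probability-like blur map (first attention level), and a two-branch body where event features are injected into image features and each branch applies its own attention gate (second attention level). All module names, channel counts, and layer choices here (BlurRegionSegmenter, BranchBlock, the 6-channel event tensor, etc.) are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a two-level attention, two-branch deblurring network.
# Shapes and layers are illustrative assumptions, not the paper's MADANet.
import torch
import torch.nn as nn


class BlurRegionSegmenter(nn.Module):
    """Hypothetical module: maps event data to a per-pixel score in [0, 1]
    marking regions with high motion relative to the camera (first level)."""

    def __init__(self, event_channels=6, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(event_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
            nn.Sigmoid(),  # probability-like blur score
        )

    def forward(self, events):
        return self.net(events)


class BranchBlock(nn.Module):
    """One branch of the assumed two-branch body: fuses event features into
    image features and applies a channel-attention gate (second level)."""

    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat, evt_feat):
        x = self.fuse(torch.cat([img_feat, evt_feat], dim=1))
        x = self.body(x)
        return x * self.attn(x)


class TwoBranchDeblurSketch(nn.Module):
    """Toy wiring: one branch weighted by the blur-region map, the other
    processing the full frame; the network predicts a residual image."""

    def __init__(self, img_channels=3, event_channels=6, channels=64):
        super().__init__()
        self.segmenter = BlurRegionSegmenter(event_channels)
        self.img_head = nn.Conv2d(img_channels, channels, 3, padding=1)
        self.evt_head = nn.Conv2d(event_channels, channels, 3, padding=1)
        self.high_blur_branch = BranchBlock(channels)
        self.global_branch = BranchBlock(channels)
        self.tail = nn.Conv2d(channels, img_channels, 3, padding=1)

    def forward(self, blurry, events):
        blur_map = self.segmenter(events)          # first attention level
        img_feat = self.img_head(blurry)
        evt_feat = self.evt_head(events)           # event data injected below
        high = self.high_blur_branch(img_feat * blur_map, evt_feat)
        glob = self.global_branch(img_feat, evt_feat)
        return blurry + self.tail(high + glob)     # residual prediction


if __name__ == "__main__":
    model = TwoBranchDeblurSketch()
    blurry = torch.randn(1, 3, 128, 128)   # blurry RGB frame
    events = torch.randn(1, 6, 128, 128)   # event data as a voxel-grid tensor
    print(model(blurry, events).shape)     # torch.Size([1, 3, 128, 128])
```

Weighting one branch's input by the blur map is one plausible way to make the network "pay special attention" to heavily blurred regions while keeping the parameter count low; the actual fusion strategy in MADANet may differ.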

No code implementations yet.

Datasets

GoPro

Results from the Paper


Ranked #3 on Image Deblurring on GoPro (using extra training data)

Task              Dataset  Model    Metric      Value  Global Rank  Uses Extra Training Data
Image Deblurring  GoPro    MADANet  PSNR        33.84  #3           Yes
Image Deblurring  GoPro    MADANet  SSIM        0.964  #9           Yes
Image Deblurring  GoPro    MADANet  Params (M)  16.9   #7           Yes
Deblurring        GoPro    MADANet  PSNR        33.84  #6
Deblurring        GoPro    MADANet  SSIM        0.964  #10

Methods


No methods listed for this paper.