Stripformer: Strip Transformer for Fast Image Deblurring

10 Apr 2022  ·  Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, Chia-Wen Lin ·

Images taken in dynamic scenes may contain unwanted motion blur, which significantly degrades visual quality. Such blur causes short- and long-range region-specific smoothing artifacts that are often directional and non-uniform, making them difficult to remove. Inspired by the recent success of transformers on computer vision and image processing tasks, we develop Stripformer, a transformer-based architecture that constructs intra- and inter-strip tokens to reweight image features in the horizontal and vertical directions, catching blurred patterns with different orientations. It stacks interlaced intra-strip and inter-strip attention layers to reveal blur magnitudes. In addition to detecting region-specific blurred patterns of various orientations and magnitudes, Stripformer is a token-efficient and parameter-efficient transformer model, demanding much less memory and computation than the vanilla transformer while performing better without relying on massive training data. Experimental results show that Stripformer performs favorably against state-of-the-art models in dynamic scene deblurring.
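The key idea in the abstract is that attention is computed within horizontal or vertical strips of the feature map rather than over all spatial positions, which cuts the quadratic token cost of a vanilla transformer. A minimal single-head numpy sketch of horizontal intra-strip attention is below; the function name and the unprojected query/key/value are illustrative assumptions, since the actual Stripformer uses learned projections, multiple heads, and interlaced inter-strip layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_strip_attention_h(feat):
    """Self-attention within each horizontal strip of a feature map.

    feat: (H, W, C) array. Each of the H rows is treated as a strip of
    W tokens, so attention costs O(H * W^2) instead of O((H*W)^2) for
    full spatial attention. Hypothetical sketch: no learned projections
    or multi-head splitting, unlike the paper's model.
    """
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(H):
        strip = feat[i]                          # (W, C) tokens in one strip
        scores = strip @ strip.T / np.sqrt(C)    # (W, W) similarities
        out[i] = softmax(scores, axis=-1) @ strip  # reweighted tokens
    return out
```

Vertical intra-strip attention is the same operation on the transposed map, and an inter-strip layer would instead attend across strips (e.g. over per-strip pooled tokens), which is how interlacing the two directions covers blur of arbitrary orientation.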


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Deblurring | GoPro | Stripformer | PSNR | 33.08 | #11 |
| Deblurring | GoPro | Stripformer | SSIM | 0.962 | #13 |
| Deblurring | HIDE (trained on GoPro) | Stripformer | PSNR (sRGB) | 31.03 | #6 |
| Deblurring | HIDE (trained on GoPro) | Stripformer | SSIM (sRGB) | 0.94 | #7 |
| Deblurring | RealBlur-J | Stripformer | SSIM (sRGB) | 0.929 | #3 |
| Deblurring | RealBlur-J | Stripformer | PSNR (sRGB) | 32.48 | #2 |
| Deblurring | RealBlur-J | Stripformer | Params (M) | 20 | #5 |
| Deblurring | RealBlur-R | Stripformer | PSNR (sRGB) | 39.84 | #1 |
| Deblurring | RealBlur-R | Stripformer | SSIM (sRGB) | 0.974 | #1 |
| Deblurring | RealBlur-R | Stripformer | Params (M) | 20 | #3 |

