Real-time Deep Video Deinterlacing

1 Aug 2017  ·  Haichao Zhu, Xueting Liu, Xiangyu Mao, Tien-Tsin Wong

Interlacing is a widely used technique in television broadcast and video recording that doubles the perceived frame rate without increasing the bandwidth. However, it introduces annoying visual artifacts during playback, such as flickering and silhouette "serration." Existing state-of-the-art deinterlacing methods either ignore temporal information to achieve real-time performance at the cost of lower visual quality, or estimate motion for better deinterlacing at the cost of higher computation. In this paper, we present the first deep convolutional neural network (DCNN) based method to deinterlace with both high visual quality and real-time performance. Unlike existing models for super-resolution problems, which rely on the translation-invariant assumption, our proposed DCNN model utilizes the temporal information from both the odd and even half frames to reconstruct only the missing scanlines, and retains the given odd and even scanlines when producing the full deinterlaced frames. By further introducing a layer-sharable architecture, our system achieves real-time performance on a single GPU. Experiments show that our method outperforms all existing methods in terms of both reconstruction accuracy and computational performance.
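The core idea above — keep the scanlines the field already provides and synthesize only the missing ones — can be illustrated with a minimal sketch. Here a simple neighbor average stands in for the DCNN (an assumption for illustration; the paper's network is learned and also uses the temporally adjacent field), but the field-splitting and reinsert-given-scanlines logic mirrors the described pipeline:

```python
import numpy as np

def split_fields(frame):
    # An interlaced frame stores two temporal moments in one image:
    # even rows form one half frame (field), odd rows the other.
    return frame[0::2], frame[1::2]

def deinterlace_field(field, parity, height):
    # Reconstruct a full frame from one field. The paper's DCNN
    # predicts only the missing rows; here a neighbor average is a
    # non-learned stand-in for the network, NOT the authors' model.
    full = np.zeros((height,) + field.shape[1:], dtype=np.float64)
    full[parity::2] = field  # retain the given scanlines untouched
    for r in range(1 - parity, height, 2):  # only the missing rows
        above = full[r - 1] if r > 0 else full[r + 1]
        below = full[r + 1] if r + 1 < height else full[r - 1]
        full[r] = (above + below) / 2.0
    return full

frame = np.arange(8 * 4, dtype=np.float64).reshape(8, 4)
even_field, odd_field = split_fields(frame)
out = deinterlace_field(even_field, parity=0, height=8)
```

Because the given scanlines are copied through unchanged, any reconstruction error is confined to the synthesized rows — which is exactly why the paper's formulation differs from super-resolution models that regenerate every pixel.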

Task: Video Deinterlacing
Dataset: MSU Deinterlacer Benchmark
Model: Real-time Deep Video Deinterlacing

Metric        Value    Global Rank
PSNR          38.374   #18
SSIM          0.957    #13
FPS on CPU    0.3      #27
Subjective    0.543    #11
VMAF          93.28    #13
