Learning-Based Video Coding with Joint Deep Compression and Enhancement

29 Nov 2021  ·  Tiesong Zhao, Weize Feng, Hongji Zeng, Yuzhen Niu, Jiaying Liu ·

End-to-end learning-based video compression has attracted substantial attention by offering an alternative way to compress video signals as stacked visual features. This paper proposes an efficient end-to-end deep video codec with jointly optimized compression and enhancement modules (JCEVC). First, we propose a dual-path generative adversarial network (DPEG) to reconstruct video details after compression. An $\alpha$-path facilitates structure-information reconstruction with a large receptive field and multi-frame references, while a $\beta$-path facilitates the reconstruction of local textures. The two paths are fused and co-trained within a generative-adversarial process. Second, we reuse the DPEG network in both the motion-compensation and quality-enhancement modules, which are combined with other necessary modules to form the JCEVC framework. Third, we jointly train the deep video compression and enhancement, which further improves the rate-distortion (RD) performance of compression. Compared with the x265 LDP very fast mode, JCEVC reduces the average bits per pixel (bpp) by 39.39\%/54.92\% at the same PSNR/MS-SSIM, outperforming state-of-the-art deep video codecs by a considerable margin.
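The dual-path idea can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the fixed fusion weight, and the use of simple box filters as stand-ins for the learned $\alpha$-path (large receptive field, structure) and $\beta$-path (small receptive field, texture) are all hypothetical; in DPEG the two paths are convolutional networks fused and co-trained adversarially.

```python
def box_filter(signal, radius):
    """Average over a window of size 2*radius+1; a crude stand-in for a
    convolutional path whose receptive field grows with `radius`."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def alpha_path(decoded):
    # Structure reconstruction: large receptive field (hypothetical radius).
    return box_filter(decoded, radius=4)

def beta_path(decoded):
    # Local-texture reconstruction: small receptive field.
    return box_filter(decoded, radius=1)

def fuse(structure, texture, weight=0.5):
    # In DPEG the fusion is learned within a generative-adversarial
    # process; a fixed convex combination only illustrates the principle.
    return [weight * s + (1 - weight) * t
            for s, t in zip(structure, texture)]

decoded = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # toy decoded samples
enhanced = fuse(alpha_path(decoded), beta_path(decoded))
```

The sketch shows only the data flow: one path smooths aggressively over a wide context while the other preserves local variation, and the enhanced output blends the two.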
