Search Results for author: Jinjia Zhou

Found 9 papers, 7 papers with code

Transcoded Video Restoration by Temporal Spatial Auxiliary Network

1 code implementation • 15 Dec 2021 • Li Xu, Gang He, Jinjia Zhou, Jie Lei, Weiying Xie, Yunsong Li, Yu-Wing Tai

On most video platforms, such as YouTube and TikTok, the videos played have usually undergone multiple encodings: hardware encoding by recording devices, software encoding by video editing apps, and single or multiple transcoding passes by video application servers.

Frame • Video Restoration

Deep Photo Scan: Semi-Supervised Learning for dealing with the real-world degradation in Smartphone Photo Scanning

1 code implementation • 11 Feb 2021 • Man M. Ho, Jinjia Zhou

Second, we simulate many different variants of real-world degradation using low-level image transformations to generalize over smartphone-scanned image properties, then train a degradation network to cover all styles of degradation and produce pseudo-scanned photos for unscanned images, as if they had been scanned by a smartphone.
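The degradation simulation described above can be sketched with a few low-level image transforms. This is a hypothetical illustration, not the paper's pipeline: the `degrade` function and its specific transforms (color cast, box blur, Gaussian noise) are assumptions standing in for the paper's learned degradation network.

```python
import numpy as np

def degrade(image, rng):
    """Apply one random low-level degradation variant to a float RGB image in [0, 1].

    Hypothetical sketch: composes a color cast, a box blur, and sensor-style
    noise to mimic one 'style' of smartphone-scan degradation.
    """
    out = image.astype(np.float64)
    # Random per-channel color cast, mimicking scanner/camera color shift.
    cast = rng.uniform(0.9, 1.1, size=(1, 1, 3))
    out = out * cast
    # Cheap 3x3 box blur via local averaging (stand-in for optical blur).
    k = 3
    padded = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(out)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    out = blurred / (k * k)
    # Additive Gaussian noise, as from a camera sensor.
    out += rng.normal(0.0, 0.02, size=out.shape)
    return np.clip(out, 0.0, 1.0)
```

Drawing many such variants with different random parameters yields the diverse pseudo-scanned training pairs the semi-supervised setup relies on.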

Image Enhancement

Image Compression with Encoder-Decoder Matched Semantic Segmentation

1 code implementation • 24 Jan 2021 • Trinh Man Hoang, Jinjia Zhou, Yibo Fan

In recent years, layered image compression has been demonstrated to be a promising direction: it encodes a compact representation of the input image and applies an up-sampling network to reconstruct the image.
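The layered scheme above can be sketched in two steps: a compact (low-resolution) representation on the encoder side, and an up-sampling stage on the decoder side. Both functions here are hypothetical stand-ins: average pooling replaces the learned encoder, and nearest-neighbor upscaling replaces the learned up-sampling network.

```python
import numpy as np

def encode_compact(image, factor=4):
    """Compact representation: average-pooling downsample (stand-in for the encoder)."""
    h, w = image.shape[:2]
    h2, w2 = h // factor * factor, w // factor * factor  # crop to a multiple of factor
    x = image[:h2, :w2]
    return x.reshape(h2 // factor, factor, w2 // factor, factor, -1).mean(axis=(1, 3))

def decode_upsample(compact, factor=4):
    """Stand-in for the learned up-sampling network: nearest-neighbor upscale."""
    return np.repeat(np.repeat(compact, factor, axis=0), factor, axis=1)
```

The compact representation costs far fewer bits than the full image; the paper's contribution is making the decoder-side reconstruction aware of semantic segmentation, which this sketch omits.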

Image Compression • Semantic Segmentation

B-DRRN: A Block Information Constrained Deep Recursive Residual Network for Video Compression Artifacts Reduction

1 code implementation • 22 Jan 2021 • Trinh Man Hoang, Jinjia Zhou

In this paper, we design a neural network, called B-DRRN (Deep Recursive Residual Network with Block information), that enhances the quality of compressed frames by leveraging block information.
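One simple form that "block information" can take is a boundary map marking where coding-block edges (and hence blocking artifacts) lie, fed to the network as an auxiliary channel. The function below is a hypothetical illustration of such a map, not necessarily the exact block information B-DRRN uses.

```python
import numpy as np

def block_boundary_mask(height, width, block=8):
    """Binary map marking coding-block boundaries (e.g. 8x8 grid).

    Hypothetical auxiliary input: pixels on block edges are 1.0, others 0.0,
    telling the restoration network where blocking artifacts concentrate.
    """
    mask = np.zeros((height, width), dtype=np.float32)
    mask[::block, :] = 1.0  # horizontal block boundaries
    mask[:, ::block] = 1.0  # vertical block boundaries
    return mask
```

Such a mask would typically be stacked with the compressed frame as an extra input channel.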

Frame • Video Compression

Deep Preset: Blending and Retouching Photos with Color Style Transfer

1 code implementation • 21 Jul 2020 • Man M. Ho, Jinjia Zhou

It is designed to 1) generalize the features representing the color transformation from content with natural colors to a retouched reference, then blend them into the contextual features of the content, 2) predict the hyper-parameters (settings, or preset) of the applied low-level color transformation methods, and 3) stylize the content to have a color style similar to the reference.
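A "preset" in this sense is a set of hyper-parameters for low-level color transforms. The sketch below applies such a preset; the specific parameters (brightness, contrast, saturation) and the `apply_preset` function are illustrative assumptions, not the paper's actual transform set or predicted parameterization.

```python
import numpy as np

def apply_preset(image, brightness=0.0, contrast=1.0, saturation=1.0):
    """Apply a hypothetical 'preset' of low-level color transforms
    to a float RGB image in [0, 1]."""
    out = image.astype(np.float64)
    # Contrast stretch around mid-gray, then a brightness offset.
    out = (out - 0.5) * contrast + 0.5 + brightness
    # Saturation: blend each pixel with its Rec. 601 luminance.
    luma = out @ np.array([0.299, 0.587, 0.114])
    out = luma[..., None] + (out - luma[..., None]) * saturation
    return np.clip(out, 0.0, 1.0)
```

Under this framing, Deep Preset's second objective amounts to regressing such hyper-parameters from a (content, reference) pair, so the same look can be reapplied to other photos.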

Style Transfer

Semantic-driven Colorization

1 code implementation • 13 Jun 2020 • Man M. Ho, Lu Zhang, Alexander Raake, Jinjia Zhou

In the human experience of colorization, our brains first detect and recognize the objects in a photo, then imagine their plausible colors based on the many similar objects we have seen in real life, and finally colorize them, as described in the teaser.

