Search Results for author: Longcun Jin

Found 5 papers, 4 papers with code

Local-Global Fusion Network for Video Super-Resolution

1 code implementation IEEE Access 2020 Dewei Su, Hua Wang, Longcun Jin, Xianfang Sun, Xinyi Peng

The results of our LGFN on benchmark datasets are presented at https://github.com/BIOINSu/LGFN, and the source code will be released as soon as the paper is accepted.

Optical Flow Estimation · Video Super-Resolution

Unsupervised Real-world Image Super Resolution via Domain-distance Aware Training

1 code implementation CVPR 2021 Yunxuan Wei, Shuhang Gu, Yawei Li, Longcun Jin

The philosophy of off-the-shelf approaches lies in the augmentation of unpaired data, i.e., first generating synthetic low-resolution (LR) images $\mathcal{Y}^g$ corresponding to real-world high-resolution (HR) images $\mathcal{X}^r$ in the real-world LR domain $\mathcal{Y}^r$, and then utilizing the pseudo pairs $\{\mathcal{Y}^g, \mathcal{X}^r\}$ for training in a supervised manner.
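The pseudo-pair construction described above can be sketched as follows. This is a minimal illustration, not the paper's method: `toy_degrade` is a hypothetical stand-in for the learned HR-to-LR degradation model that would map $\mathcal{X}^r$ into the real-world LR domain.

```python
import numpy as np

def generate_pseudo_pairs(hr_images, degrade):
    """Build pseudo pairs {Y^g, X^r} from unpaired real-world HR images.

    `degrade` is a (hypothetical) degradation model mapping a real-world
    HR image X^r to a synthetic LR image Y^g; the resulting pairs can
    then be used for supervised training.
    """
    return [(degrade(x), x) for x in hr_images]

def toy_degrade(x, scale=4):
    """Toy stand-in for a learned degradation network: 4x downsampling
    by block averaging (illustration only)."""
    h = x.shape[0] // scale * scale
    w = x.shape[1] // scale * scale
    x = x[:h, :w]
    return x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

hr = [np.random.rand(64, 64) for _ in range(2)]
pairs = generate_pseudo_pairs(hr, toy_degrade)
print(pairs[0][0].shape, pairs[0][1].shape)  # (16, 16) (64, 64)
```

Each tuple holds a synthetic LR input and its real HR target, which is what makes supervised training possible despite the data being originally unpaired.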

Image Super-Resolution · Philosophy

Fine-grained Attention and Feature-sharing Generative Adversarial Networks for Single Image Super-Resolution

1 code implementation 25 Nov 2019 Yitong Yan, Chuangchuang Liu, Changyou Chen, Xianfang Sun, Longcun Jin, Xiang Zhou

Firstly, instead of producing a single score to discriminate between real and fake images, we propose a variant, called the Fine-grained Attention Generative Adversarial Network for image super-resolution (FASRGAN), which discriminates each pixel as real or fake.
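The shape of this change can be sketched in a few lines. A conventional GAN discriminator maps an image to one real/fake score; the fine-grained variant instead outputs a score map with one score per pixel. The functions below are toy stand-ins (the actual FASRGAN discriminator is a deep CNN); they only illustrate the output shapes.

```python
import numpy as np

def scalar_discriminator(img):
    # Conventional discriminator: one real/fake score for the whole image.
    return np.tanh(img.mean())

def per_pixel_discriminator(img):
    # Fine-grained variant: one real/fake score per pixel,
    # same spatial size as the input.
    return np.tanh(img - img.mean())

img = np.random.rand(32, 32)
print(np.shape(scalar_discriminator(img)))  # ()
print(per_pixel_discriminator(img).shape)   # (32, 32)
```

A per-pixel score map lets the adversarial loss penalize local artifacts individually rather than judging the image as a whole.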

Generative Adversarial Network · Image Super-Resolution +1

Deformable Non-local Network for Video Super-Resolution

1 code implementation 24 Sep 2019 Hua Wang, Dewei Su, Chuangchuang Liu, Longcun Jin, Xianfang Sun, Xinyi Peng

The video super-resolution (VSR) task aims to restore a high-resolution (HR) video frame by using its corresponding low-resolution (LR) frame and multiple neighboring frames.
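The input structure this describes, a center LR frame plus its temporal neighbours, is commonly fed to VSR models via a sliding window. A minimal sketch of that batching, assuming border frames are padded by repetition (the network itself is not shown):

```python
import numpy as np

def temporal_windows(frames, radius=2):
    """Yield (center_lr, neighbours) pairs over a frame sequence.

    Each window holds 2*radius + 1 LR frames centred on frame t;
    at sequence edges the border frame is repeated as padding.
    """
    n = len(frames)
    for t in range(n):
        idx = [min(max(t + d, 0), n - 1) for d in range(-radius, radius + 1)]
        yield frames[t], [frames[i] for i in idx]

# Toy 5-frame LR sequence; frame t is filled with the value t.
frames = [np.full((8, 8), t, dtype=float) for t in range(5)]
center, neigh = next(temporal_windows(frames))
print(len(neigh))  # 5 frames per window (radius 2)
```

A VSR network then restores the HR version of the center frame from this window, using the neighbours to supply complementary sub-pixel detail.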

Optical Flow Estimation · Video Super-Resolution
