Search Results for author: Ri Cheng

Found 6 papers, 1 paper with code

Low-latency Space-time Supersampling for Real-time Rendering

1 code implementation • 18 Dec 2023 • Ruian He, Shili Zhou, Yuqi Sun, Ri Cheng, Weimin Tan, Bo Yan

With the rise of real-time rendering and the evolution of display devices, there is a growing demand for post-processing methods that offer high-resolution content at high frame rates.

Context-Aware Iteration Policy Network for Efficient Optical Flow Estimation

no code implementations • 12 Dec 2023 • Ri Cheng, Ruian He, Xuhao Jiang, Shili Zhou, Weimin Tan, Bo Yan

In this paper, we develop a Context-Aware Iteration Policy Network for efficient optical flow estimation, which determines the optimal number of iterations per sample.

Optical Flow Estimation
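
The abstract describes deciding, per sample, how many refinement iterations an iterative flow estimator should run. Below is a minimal sketch of that general idea for a RAFT-style loop; the modules and the halting rule are hypothetical stand-ins, not the paper's architecture.

```python
# Sketch of per-sample adaptive iteration for a RAFT-style flow estimator.
# UpdateBlock and IterationPolicy are hypothetical stand-ins.
import torch
import torch.nn as nn

class UpdateBlock(nn.Module):
    """Stand-in for one iterative flow refinement step."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch + 2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1),
        )

    def forward(self, feat, flow):
        return flow + self.net(torch.cat([feat, flow], dim=1))

class IterationPolicy(nn.Module):
    """Predicts, per sample, the probability that more iterations help."""
    def __init__(self, ch=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch + 2, 1),
        )

    def forward(self, feat, flow):
        return torch.sigmoid(self.head(torch.cat([feat, flow], dim=1)))

def estimate_flow(feat, update, policy, max_iters=12, halt_below=0.5):
    b, _, h, w = feat.shape
    flow = torch.zeros(b, 2, h, w, device=feat.device)
    for it in range(max_iters):
        flow = update(feat, flow)
        # Stop early once every sample in the batch votes to halt.
        if policy(feat, flow).max() < halt_below:
            break
    return flow, it + 1
```

In this sketch, feat would come from a feature/context encoder, and the policy head would be trained with some accuracy-versus-compute objective; how the paper supervises its policy network is not stated in the snippet.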

Uncertainty-Guided Spatial Pruning Architecture for Efficient Frame Interpolation

no code implementations • 31 Jul 2023 • Ri Cheng, Xuhao Jiang, Ruian He, Shili Zhou, Weimin Tan, Bo Yan

A dynamic spatial pruning method can skip redundant computation, but without supervision such methods cannot properly identify easy regions in VFI tasks.

Video Frame Interpolation
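
The snippet describes the shared mechanism behind spatial pruning: run an expensive refinement only on pixels that an uncertainty map flags as hard, and use a cheap path elsewhere. A minimal sketch of that mechanism follows; the modules, the thresholding rule, and the plain-blend cheap path are assumptions, not the paper's method.

```python
# Sketch of uncertainty-guided spatial pruning for frame interpolation:
# cheap blending for easy pixels, expensive refinement only on hard pixels.
# Refiner and the threshold tau are hypothetical.
import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Expensive per-pixel refinement, applied only to hard pixels."""
    def __init__(self, ch=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, ch), nn.ReLU(), nn.Linear(ch, 3))

    def forward(self, x):  # x: (N, 6) features of the selected pixels
        return self.mlp(x)

def interpolate(frame0, frame1, uncertainty, refiner, tau=0.1):
    # frame0/frame1: (B, 3, H, W); uncertainty: (B, 1, H, W) in [0, 1]
    mid = 0.5 * (frame0 + frame1)          # cheap path: plain blend
    hard = (uncertainty > tau).squeeze(1)  # (B, H, W) boolean mask
    if hard.any():
        # Gather only hard pixels, so compute scales with their count.
        px = torch.cat([frame0, frame1], dim=1).permute(0, 2, 3, 1)[hard]
        mid.permute(0, 2, 3, 1)[hard] = refiner(px)
    return mid
```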

Geometry-Aware Reference Synthesis for Multi-View Image Super-Resolution

no code implementations • 18 Jul 2022 • Ri Cheng, Yuqi Sun, Bo Yan, Weimin Tan, Chenxi Ma

To address these problems, we propose MVSRnet, which uses geometry information to extract sharp details from all LR multi-view images to support the SR of the LR input view.

Image Super-Resolution • Video Super-Resolution
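
One plausible reading of "uses geometry information to extract sharp details" is warping the other views into the input view via geometry-derived correspondences before super-resolving. A minimal sketch under that assumption, where the correspondence field corr (e.g. from depth plus camera poses) is simply taken as given:

```python
# Sketch of geometry-guided reference warping prior to SR. The name
# warp_reference and the precomputed corr field are assumptions.
import torch
import torch.nn.functional as F

def warp_reference(ref, corr):
    """ref: (B, 3, H, W) reference view; corr: (B, 2, H, W) absolute pixel
    coordinates in ref for each target pixel, derived from geometry."""
    b, _, h, w = ref.shape
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * corr[:, 0] / (w - 1) - 1.0
    gy = 2.0 * corr[:, 1] / (h - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)  # (B, H, W, 2)
    return F.grid_sample(ref, grid, align_corners=True)
```

The warped references could then be stacked with the upsampled LR input and fed to any SR backbone; the snippet does not specify how MVSRnet fuses them.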

Learning Parallax Transformer Network for Stereo Image JPEG Artifacts Removal

no code implementations • 15 Jul 2022 • Xuhao Jiang, Weimin Tan, Ri Cheng, Shili Zhou, Bo Yan

Under stereo settings, the performance of image JPEG artifacts removal can be further improved by exploiting the additional information provided by a second view.
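
For rectified stereo, the extra information in the second view lies along horizontal epipolar lines, so cross-view matching can be restricted to scanlines. A generic row-wise cross-attention sketch of that idea follows; it is not the paper's parallax transformer, and all names are hypothetical.

```python
# Sketch of row-wise cross-view attention for rectified stereo pairs:
# each left-view pixel attends over the same scanline of the right view.
import torch
import torch.nn as nn

class RowAttention(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.q = nn.Conv2d(ch, ch, 1)
        self.k = nn.Conv2d(ch, ch, 1)
        self.v = nn.Conv2d(ch, ch, 1)

    def forward(self, left, right):  # (B, C, H, W) rectified feature maps
        q = self.q(left).permute(0, 2, 3, 1)   # (B, H, W, C)
        k = self.k(right).permute(0, 2, 1, 3)  # (B, H, C, W)
        v = self.v(right).permute(0, 2, 3, 1)  # (B, H, W, C)
        # Attention over the matching scanline of the other view.
        attn = torch.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)  # (B,H,W,W)
        fused = attn @ v                        # (B, H, W, C)
        return left + fused.permute(0, 3, 1, 2)  # residual fusion
```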

Learning Robust Image-Based Rendering on Sparse Scene Geometry via Depth Completion

no code implementations • CVPR 2022 • Yuqi Sun, Shili Zhou, Ri Cheng, Weimin Tan, Bo Yan, Lang Fu

Specifically, the GR stage takes a sparse depth map and RGB as input and predicts a dense depth map by exploiting the correlation between the two modalities.

Depth Completion
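
A minimal sketch of the depth-completion step the snippet describes: encode the RGB and sparse-depth modalities separately, then fuse them into a dense depth map. This is a hypothetical stand-in for the GR stage, not its actual architecture.

```python
# Sketch of depth completion from RGB + sparse depth. The module names
# and the validity-mask trick are assumptions, not the paper's design.
import torch
import torch.nn as nn

class DepthCompletion(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        # Sparse depth plus a validity mask marking observed pixels.
        self.depth_enc = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        mask = (sparse_depth > 0).float()  # 0 where depth is missing
        f = torch.cat(
            [self.rgb_enc(rgb),
             self.depth_enc(torch.cat([sparse_depth, mask], dim=1))],
            dim=1)
        return self.fuse(f)  # dense depth, (B, 1, H, W)
```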
