Search Results for author: Chenyang Ge

Found 6 papers, 1 paper with code

RGB Guided ToF Imaging System: A Survey of Deep Learning-based Methods

no code implementations • 16 May 2024 • Xin Qiao, Matteo Poggi, Pengchao Deng, Hao Wei, Chenyang Ge, Stefano Mattoccia

Integrating an RGB camera into a ToF imaging system has become a significant technique for perceiving the real world.

Depth Completion · Face Anti-Spoofing · +3

Towards Extreme Image Compression with Latent Feature Guidance and Diffusion Prior

no code implementations • 29 Apr 2024 • Zhiyuan Li, Yanhui Zhou, Hao Wei, Chenyang Ge, Jingwen Jiang

In this work, we propose a novel two-stage extreme image compression framework that exploits the powerful generative capability of pre-trained diffusion models to achieve realistic image reconstruction at extremely low bitrates.

Image Compression · Image Reconstruction

Traditional Transformation Theory Guided Model for Learned Image Compression

no code implementations • 24 Feb 2024 • Zhiyuan Li, Chenyang Ge, Shun Li

Recently, many deep image compression methods have been proposed and achieved remarkable performance.

Image Compression

Depth Super-Resolution from Explicit and Implicit High-Frequency Features

no code implementations • 16 Mar 2023 • Xin Qiao, Chenyang Ge, Youmin Zhang, Yanhui Zhou, Fabio Tosi, Matteo Poggi, Stefano Mattoccia

We propose a novel multi-stage depth super-resolution network, which progressively reconstructs high-resolution depth maps from explicit and implicit high-frequency features.

Super-Resolution · Vocal Bursts Intensity Prediction

Rethinking Blur Synthesis for Deep Real-World Image Deblurring

no code implementations • 28 Sep 2022 • Hao Wei, Chenyang Ge, Xin Qiao, Pengchao Deng

In this paper, we examine the problem of real-world image deblurring and consider two key factors for improving the performance of deep image deblurring models: training data synthesis and network architecture design.

Deblurring · Image Deblurring
