Search Results for author: Yaojun Wu

Found 7 papers, 1 paper with code

QVRF: A Quantization-error-aware Variable Rate Framework for Learned Image Compression

6 code implementations • 10 Mar 2023 • Kedeng Tong, Yaojun Wu, Yue Li, Kai Zhang, Li Zhang, Xin Jin

In this paper, we present a Quantization-error-aware Variable Rate Framework (QVRF) that utilizes a univariate quantization regulator a to achieve wide-range variable rates within a single model.

Image Compression • Quantization
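The abstract compresses the idea into one sentence; as a hedged illustration of how a single scalar regulator can sweep rates within one model, here is a minimal sketch. The function name, tensor shapes, and the straight-through estimator are assumptions for illustration, not the authors' code.

```python
import torch

def quantize_variable_rate(y: torch.Tensor, a: float) -> torch.Tensor:
    """Hypothetical sketch of a scalar quantization regulator.

    Scaling the latent by a before rounding and rescaling after makes the
    effective quantization step 1/a: larger a -> finer quantization ->
    higher rate, smaller a -> coarser quantization -> lower rate.
    A straight-through estimator lets gradients pass through round().
    """
    y_scaled = y * a
    y_hat = y_scaled + (torch.round(y_scaled) - y_scaled).detach()  # STE
    return y_hat / a

# Sweeping a trades rate against distortion with a single trained model.
latent = torch.randn(1, 192, 16, 16)
low_rate = quantize_variable_rate(latent, a=0.5)
high_rate = quantize_variable_rate(latent, a=4.0)
```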

A Dataset and Method for Hallux Valgus Angle Estimation Based on Deep Learning

no code implementations • 8 Jul 2021 • Ningyuan Xu, Jiayan Zhuang, Yaojun Wu, Jiangjian Xiao

Angular measurement is essential for planning a reasonable treatment for Hallux valgus (HV), a common forefoot deformity.

Pose Estimation • Regression
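For context on why the task reduces to landmark regression: the HV angle is the angle between the first-metatarsal axis and the proximal-phalanx axis. A minimal numpy sketch, assuming each axis is given by two predicted landmarks; this is a hypothetical post-processing step, and the paper's own pipeline may differ.

```python
import numpy as np

def hallux_valgus_angle(metatarsal: np.ndarray, phalanx: np.ndarray) -> float:
    """Angle in degrees between two bone axes, each a (2, 2) array of
    endpoint coordinates (x, y). Hypothetical post-processing for
    predicted landmarks, not the authors' code.
    """
    v1 = metatarsal[1] - metatarsal[0]
    v2 = phalanx[1] - phalanx[0]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example with four made-up landmark predictions in image coordinates.
angle = hallux_valgus_angle(np.array([[10.0, 80.0], [12.0, 40.0]]),
                            np.array([[12.0, 40.0], [20.0, 10.0]]))
print(f"HV angle: {angle:.1f} degrees")
```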

Learned Block-based Hybrid Image Compression

no code implementations • 17 Dec 2020 • Yaojun Wu, Xin Li, Zhizheng Zhang, Xin Jin, Zhibo Chen

Recent works on learned image compression perform encoding and decoding processes in a full-resolution manner, resulting in two problems when deployed for practical applications.

Blocking • Image Compression • +2
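The abstract contrasts full-resolution coding with a block-based alternative. As a rough sketch of the partitioning step such a codec needs, here is a hypothetical block splitter; the block size, padding mode, and function name are assumptions, not the paper's design.

```python
import numpy as np

def split_into_blocks(img: np.ndarray, block: int = 64):
    """Pad an (H, W, C) image to a multiple of `block` and yield blocks.

    Coding block by block keeps peak memory independent of image
    resolution, at the cost of possible artifacts at block boundaries
    (the "Blocking" issue tagged above).
    """
    h, w = img.shape[:2]
    pad_h, pad_w = (-h) % block, (-w) % block
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")
    for i in range(0, padded.shape[0], block):
        for j in range(0, padded.shape[1], block):
            yield (i, j), padded[i:i + block, j:j + block]

# Each 64x64 block can then be fed to a learned codec independently.
blocks = list(split_into_blocks(np.zeros((270, 480, 3), dtype=np.float32)))
```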

FAN: Frequency Aggregation Network for Real Image Super-resolution

no code implementations • 30 Sep 2020 • Yingxue Pang, Xin Li, Xin Jin, Yaojun Wu, Jianzhao Liu, Sen Liu, Zhibo Chen

Specifically, we extract different frequencies of the LR image and pass them to a channel attention-grouped residual dense network (CA-GRDB) individually to output corresponding feature maps.

Image Super-Resolution • SSIM
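To illustrate the frequency-extraction step the abstract describes, here is a minimal sketch that splits an image into bands with Gaussian blurs; the difference-of-Gaussians decomposition and the sigma values are assumptions, since the paper's exact filters are not given here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_bands(img: np.ndarray):
    """Split a grayscale (H, W) image into low/mid/high frequency bands.

    The bands sum back to the input exactly, so each can be processed by
    its own branch (e.g. a CA-GRDB-style block) and the branch outputs
    aggregated afterwards.
    """
    low = gaussian_filter(img, sigma=4.0)
    mid = gaussian_filter(img, sigma=1.0) - low
    high = img - low - mid
    return low, mid, high

img = np.random.rand(64, 64)
low, mid, high = frequency_bands(img)
assert np.allclose(low + mid + high, img)  # decomposition is exact
```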

Generative Memorize-Then-Recall framework for low bit-rate Surveillance Video Compression

no code implementations • 30 Dec 2019 • Yaojun Wu, Tianyu He, Zhibo Chen

In this paper, we address this issue by disentangling surveillance video into a global spatio-temporal feature (memory) for each Group of Pictures (GoP) and a skeleton (clue) for each frame.

Generative Adversarial Network • Motion Compensation • +1
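As a hedged sketch of why this decomposition yields low bit rates: one feature is shared across a whole GoP while each frame carries only a skeleton. The container, field shapes, and size estimate below are hypothetical, not the paper's bitstream format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GopCode:
    """Hypothetical container mirroring the abstract's decomposition:
    one global spatio-temporal feature (memory) shared by a Group of
    Pictures, plus a lightweight per-frame skeleton (clue) that steers
    a generative decoder at recall time.
    """
    memory: np.ndarray                             # e.g. a (C,) latent per GoP
    skeletons: list = field(default_factory=list)  # per-frame (K, 2) keypoints

def payload_floats(code: GopCode) -> int:
    """Rough payload size; the skeletons are tiny next to the shared
    memory, which is where the low bit-rate comes from."""
    return code.memory.size + sum(s.size for s in code.skeletons)

gop = GopCode(memory=np.zeros(512, dtype=np.float32),
              skeletons=[np.zeros((17, 2), dtype=np.float32) for _ in range(12)])
print(payload_floats(gop))  # 512 + 12 * 34 = 920 floats per 12-frame GoP
```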
