Search Results for author: Ning Kang

Found 9 papers, 3 papers with code

PILC: Practical Image Lossless Compression with an End-to-end GPU Oriented Neural Framework

no code implementations CVPR 2022 Ning Kang, Shanzhao Qiu, Shifeng Zhang, Zhenguo Li, Shutao Xia

Generative model based image lossless compression algorithms have seen great success in improving compression ratios.

Split Hierarchical Variational Compression

no code implementations CVPR 2022 Tom Ryder, Chen Zhang, Ning Kang, Shifeng Zhang

Secondly, we define our coding framework, the autoregressive initial bits, that flexibly supports parallel coding and avoids -- for the first time -- many of the practicalities commonly associated with bits-back coding.

Image Compression

Parallel Neural Local Lossless Compression

2 code implementations 13 Jan 2022 Mingtian Zhang, James Townsend, Ning Kang, David Barber

The recently proposed Neural Local Lossless Compression (NeLLoC), which is based on a local autoregressive model, has achieved state-of-the-art (SOTA) out-of-distribution (OOD) generalization performance in the image compression task.
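The key property of a local autoregressive model like NeLLoC is that each pixel is predicted only from previously decoded pixels inside a small fixed window, so the receptive field is bounded regardless of image size. A minimal sketch of that bounded causal context (not the authors' code; the function name `local_context` and the window half-width `h` are illustrative assumptions):

```python
import numpy as np

def local_context(img, i, j, h=2):
    """Collect the previously decoded pixels of `img` that a local
    autoregressive model may condition on when predicting pixel (i, j):
    those inside a window of half-width h that precede (i, j) in
    raster-scan order. The context size is O(h^2), independent of
    the image dimensions.
    """
    ctx = []
    for di in range(-h, 1):
        for dj in range(-h, h + 1):
            if di == 0 and dj >= 0:
                continue  # keep only pixels strictly before (i, j)
            r, c = i + di, j + dj
            if 0 <= r < img.shape[0] and 0 <= c < img.shape[1]:
                ctx.append(img[r, c])
    return np.array(ctx)
```

Because two pixels whose windows do not overlap have independent predictions, decoding can proceed over many such positions in parallel, which is the observation the parallel variant above exploits.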

Image Compression

iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder

no code implementations NeurIPS 2021 Shifeng Zhang, Ning Kang, Tom Ryder, Zhenguo Li

In this paper, we discuss lossless compression using normalizing flows which have demonstrated a great capacity for achieving high compression ratios.

NASOA: Towards Faster Task-oriented Online Fine-tuning with a Zoo of Models

no code implementations ICCV 2021 Hang Xu, Ning Kang, Gengwei Zhang, Chuanlong Xie, Xiaodan Liang, Zhenguo Li

Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks.

Neural Architecture Search

NASOA: Towards Faster Task-oriented Online Fine-tuning

no code implementations 1 Jan 2021 Hang Xu, Ning Kang, Gengwei Zhang, Xiaodan Liang, Zhenguo Li

The resulting model zoo is more training-efficient than SOTA NAS models, e.g. 6x faster than RegNetY-16GF and 1.7x faster than EfficientNetB3.

Neural Architecture Search
