Search Results for author: Jie Ran

Found 6 papers, 2 papers with code

Hundred-Kilobyte Lookup Tables for Efficient Single-Image Super-Resolution

no code implementations • 11 Dec 2023 • Binxiao Huang, Jason Chun Lok Li, Jie Ran, Boyu Li, Jiajun Zhou, Dahai Yu, Ngai Wong

Conventional super-resolution (SR) schemes make heavy use of convolutional neural networks (CNNs), which involve intensive multiply-accumulate (MAC) operations, and require specialized hardware such as graphics processing units.

Image Super-Resolution
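The trained tables themselves aren't reproduced here, but a minimal sketch of the LUT mechanism, assuming a 2x upscaler indexed by 4-bit-quantized 2x2 patches (the `build_lut`/`sr_lookup` names and the smoothing rule are illustrative stand-ins, not the authors' model):

```python
import numpy as np

# Toy 2x super-resolution by pure table lookup: each 2x2 input patch,
# quantized to 4 bits per pixel, indexes a precomputed 2x2 output patch.
# The smoothing rule below stands in for a learned network's responses.

BITS = 4
LEVELS = 1 << BITS                      # 16 quantization levels per pixel

def build_lut():
    lut = np.zeros((LEVELS,) * 4 + (2, 2), dtype=np.uint8)
    scale = 255.0 / (LEVELS - 1)
    for a in range(LEVELS):
        for b in range(LEVELS):
            for c in range(LEVELS):
                for d in range(LEVELS):
                    # Dequantize the 2x2 patch and emit a smoothed 2x2 output.
                    p = np.array([[a, b], [c, d]]) * scale
                    lut[a, b, c, d] = np.clip(0.5 * p + 0.5 * p.mean(), 0, 255)
    return lut

def sr_lookup(img, lut):
    """Upscale a HxW uint8 image 2x with lookups only -- no MACs at inference."""
    h, w = img.shape
    q = np.pad(img >> (8 - BITS), ((0, 1), (0, 1)), mode="edge")  # 4-bit indices
    out = np.empty((2 * h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            out[2*i:2*i+2, 2*j:2*j+2] = lut[q[i, j], q[i, j+1],
                                            q[i+1, j], q[i+1, j+1]]
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8) * 4
lut = build_lut()
print(sr_lookup(img, lut).shape)        # (16, 16)
```

At 4 bits per pixel the table holds 16^4 entries of four bytes each, i.e. 256 KiB, which is the size regime the title refers to.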

Lite it fly: An All-Deformable-Butterfly Network

no code implementations • 14 Nov 2023 • Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Binxiao Huang, Jie Ran, Ngai Wong

Most deep neural networks (DNNs) consist fundamentally of convolutional and/or fully connected layers, wherein the linear transform can be cast as the product between a filter matrix and a data matrix obtained by arranging feature tensors into columns.
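The im2col view in that sentence is easy to make concrete; a minimal sketch with illustrative shapes and names:

```python
import numpy as np

def im2col(x, k):
    """Arrange all kxk patches of a (C, H, W) tensor into columns."""
    C, H, W = x.shape
    cols = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            cols.append(x[:, i:i+k, j:j+k].ravel())
    return np.stack(cols, axis=1)          # (C*k*k, n_patches)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))         # input feature tensor
w = rng.standard_normal((16, 3, 3, 3))     # 16 filters of shape 3x3x3

filter_mat = w.reshape(16, -1)             # (16, 27) filter matrix
data_mat = im2col(x, 3)                    # (27, 36) data matrix
y = filter_mat @ data_mat                  # convolution as one matmul
print(y.shape)                             # (16, 36) -> reshape to (16, 6, 6)
```

Once a layer is in this single-matmul form, the filter matrix becomes the natural target for structured replacements such as the deformable butterflies the paper builds on.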

PECAN: A Product-Quantized Content Addressable Memory Network

no code implementations • 13 Aug 2022 • Jie Ran, Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Ngai Wong

A novel deep neural network (DNN) architecture is proposed wherein the filtering and linear transform are realized solely with product quantization (PQ).

Quantization
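A minimal sketch of the inner-product-by-lookup idea behind PQ, with toy sizes (8-dimensional vectors, two subspaces, four codewords each) that are assumptions rather than PECAN's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 8, 2, 4            # dim, subspaces, codewords per subspace
d = D // M

# Toy codebooks: K codewords per subspace (learned by k-means in practice).
codebooks = rng.standard_normal((M, K, d))
w = rng.standard_normal(D)   # a filter / weight vector

# Precompute lookup tables: inner product of w's m-th chunk with every codeword.
tables = np.einsum('mkd,md->mk', codebooks, w.reshape(M, d))

def pq_encode(x):
    """Assign each subvector of x to its nearest codeword (the PQ code)."""
    xs = x.reshape(M, d)
    return np.array([np.argmin(((codebooks[m] - xs[m]) ** 2).sum(1))
                     for m in range(M)])

x = rng.standard_normal(D)
codes = pq_encode(x)
approx = sum(tables[m, codes[m]] for m in range(M))  # dot product via lookups
print(approx, x @ w)         # PQ approximation vs. exact inner product
```

Once an input is encoded, every dot product collapses to M table lookups plus additions, a structure that maps naturally onto content addressable memory.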

Deformable Butterfly: A Highly Structured and Sparse Linear Transform

1 code implementation • NeurIPS 2021 • Rui Lin, Jie Ran, King Hung Chiu, Graziano Chesi, Ngai Wong

We introduce a new kind of linear transform named Deformable Butterfly (DeBut) that generalizes the conventional butterfly matrices and can be adapted to various input-output dimensions.
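For reference, a conventional butterfly matrix of the kind DeBut generalizes can be written as a chain of sparse factors with two nonzeros per row; the XOR-pairing construction below is a standard illustration, not the paper's parametrization:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # transform size (power of two)

def butterfly_factor(s):
    """Sparse factor with 2 nonzeros per row, pairing index i with i XOR s."""
    F = np.zeros((N, N))
    for i in range(N):
        F[i, i] = rng.standard_normal()
        F[i, i ^ s] = rng.standard_normal()
    return F

# A conventional butterfly matrix: product of log2(N) sparse factors.
# Parameters: 2 nonzeros per row per factor -> 2*N*log2(N) = 48 vs N*N = 64.
B = np.linalg.multi_dot([butterfly_factor(s) for s in (1, 2, 4)])
x = rng.standard_normal(N)
print(B @ x)                             # dense transform via a sparse chain
```

The chain stores 2N log2(N) parameters versus N^2 for a dense matrix, and the gap widens rapidly with N; DeBut's generalization frees the factors from these square, power-of-two shape constraints.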

Exploiting Elasticity in Tensor Ranks for Compressing Neural Networks

no code implementations • 10 May 2021 • Jie Ran, Rui Lin, Hayden K. H. So, Graziano Chesi, Ngai Wong

Elasticities in depth, width, kernel size and resolution have been explored in compressing deep neural networks (DNNs).
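Rank is one such elastic dimension. A minimal sketch of the underlying accuracy-versus-parameters trade-off, using plain truncated SVD of a weight matrix (the paper learns tensor ranks elastically during training, which this toy does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))          # a fully connected layer's weights

def low_rank(W, r):
    """Truncate W to rank r: W ~= (U*S) @ Vt, storing r*(m+n) numbers."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * S[:r]) @ Vt[:r]

for r in (8, 32, 128):
    err = np.linalg.norm(W - low_rank(W, r)) / np.linalg.norm(W)
    params = r * (W.shape[0] + W.shape[1])
    print(f"rank {r:3d}: {params:6d} params vs {W.size}, rel. error {err:.3f}")
```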

EZCrop: Energy-Zoned Channels for Robust Output Pruning

1 code implementation • 8 May 2021 • Rui Lin, Jie Ran, Dongpeng Wang, King Hung Chiu, Ngai Wong

Recent results have revealed an interesting observation about trained convolutional neural networks (CNNs): the rank of a feature map channel matrix remains surprisingly constant across different input images.
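A toy demonstration of that observation, with fixed-rank linear maps standing in for trained channels (a deliberate simplification: real feature maps come from a CNN, and EZCrop itself scores channels with an efficient energy-based proxy rather than explicit rank computations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for trained channels: channel c applies a fixed linear map
# of rank r_c to the input, so its output matrix rank is set by the
# weights, not by which image comes in -- the observation EZCrop exploits.
target_ranks = [2, 5, 9, 14]
channel_maps = []
for r in target_ranks:
    U = rng.standard_normal((16, r))
    V = rng.standard_normal((r, 16))
    channel_maps.append(U @ V)                    # a rank-r "channel"

for trial in range(3):
    img = rng.standard_normal((16, 16))           # random full-rank input
    ranks = [np.linalg.matrix_rank(A @ img) for A in channel_maps]
    print(f"input {trial}: channel ranks = {ranks}")   # always [2, 5, 9, 14]
```

Because the rank is fixed by the trained weights rather than the input, it can be measured once and used as a robust signal for deciding which output channels to prune.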
