no code implementations • 11 Dec 2023 • Binxiao Huang, Jason Chun Lok Li, Jie Ran, Boyu Li, Jiajun Zhou, Dahai Yu, Ngai Wong
Conventional super-resolution (SR) schemes make heavy use of convolutional neural networks (CNNs), which involve intensive multiply-accumulate (MAC) operations and thus require specialized hardware such as graphics processing units (GPUs).
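To make the scale concrete, here is a back-of-the-envelope MAC count for a single, hypothetical 3x3 convolution on a 720p feature map (all sizes are illustrative, not taken from the paper):

```python
# Back-of-the-envelope MAC count for one hypothetical 3x3 convolution
# layer on a 1280x720 feature map (illustrative sizes only).
def conv_macs(h, w, c_in, c_out, k):
    """MACs for a stride-1, same-padded k x k convolution."""
    return h * w * c_in * c_out * k * k

macs = conv_macs(720, 1280, 64, 64, 3)
print(f"{macs:.2e}")  # ~3.40e+10 MACs for just one layer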
no code implementations • 14 Nov 2023 • Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Binxiao Huang, Jie Ran, Ngai Wong
Most deep neural networks (DNNs) consist fundamentally of convolutional and/or fully connected layers, wherein the linear transform can be cast as the product of a filter matrix and a data matrix obtained by arranging feature tensors into columns.
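The filter-matrix-times-data-matrix view described here is the standard im2col construction; a minimal NumPy sketch of it, with illustrative shapes, might look like:

```python
import numpy as np

def im2col(x, k):
    """Arrange every k x k patch of x (C, H, W) into a column."""
    c, h, w = x.shape
    cols = [x[:, i:i + k, j:j + k].reshape(-1)
            for i in range(h - k + 1) for j in range(w - k + 1)]
    return np.stack(cols, axis=1)        # shape (C*k*k, num_patches)

x = np.random.randn(3, 8, 8)             # input feature tensor
filt = np.random.randn(16, 3, 3, 3)      # 16 filters of shape (3, 3, 3)
W = filt.reshape(16, -1)                 # filter matrix, (16, 27)
y = W @ im2col(x, 3)                     # convolution as a matrix product, (16, 36)
```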
no code implementations • 13 Aug 2022 • Jie Ran, Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Ngai Wong
A novel deep neural network (DNN) architecture is proposed wherein the filtering and linear transform are realized solely with product quantization (PQ).
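As a rough illustration of the idea, and not the paper's exact scheme, product quantization replaces a MAC-heavy inner product with per-subspace codeword lookups; a minimal NumPy sketch under those assumptions:

```python
import numpy as np

# Illustrative PQ sketch: split a vector into subspaces, snap each
# sub-vector to its nearest codeword, then replace the inner product
# with table lookups plus additions (not the paper's exact scheme).
rng = np.random.default_rng(0)
d, m, ks = 64, 8, 16                  # vector dim, subspaces, codewords
sub = d // m
codebooks = rng.standard_normal((m, ks, sub))  # learned offline in practice
w = rng.standard_normal(d)            # one row of a weight matrix
x = rng.standard_normal(d)            # one input column

# Encode x: nearest codeword index in each subspace.
codes = [np.argmin(((codebooks[i] - x[i*sub:(i+1)*sub]) ** 2).sum(axis=1))
         for i in range(m)]

# Lookup tables of <w_subvector, codeword>, built once per weight row.
tables = [codebooks[i] @ w[i*sub:(i+1)*sub] for i in range(m)]

# The inner product becomes m lookups and m - 1 additions.
approx = sum(tables[i][codes[i]] for i in range(m))
print(approx, w @ x)                  # PQ approximation vs. exact value
```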
1 code implementation • NeurIPS 2021 • Rui Lin, Jie Ran, King Hung Chiu, Graziano Chesi, Ngai Wong
We introduce a new kind of linear transform named Deformable Butterfly (DeBut) that generalizes the conventional butterfly matrices and can be adapted to various input-output dimensions.
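For context, a conventional (square, power-of-two) butterfly matrix, the fixed case that DeBut generalizes, factors an N x N transform into log2(N) sparse factors; a small NumPy sketch of that baseline structure:

```python
import numpy as np

def butterfly_factor(n, stride, rng):
    """Sparse factor with random 2x2 blocks mixing entries i and i+stride."""
    f = np.zeros((n, n))
    for start in range(0, n, 2 * stride):
        for off in range(stride):
            i, j = start + off, start + off + stride
            f[[i, i, j, j], [i, j, i, j]] = rng.standard_normal(4)
    return f

rng = np.random.default_rng(0)
n = 8
factors = [butterfly_factor(n, 2 ** s, rng) for s in range(int(np.log2(n)))]
B = np.linalg.multi_dot(factors)      # dense 8x8 product of sparse factors
print(B.shape, [int((f != 0).sum()) for f in factors])  # (8, 8) [16, 16, 16]
```

Each factor carries only 2n nonzeros instead of n^2, which is where the storage and compute savings of butterfly-style transforms come from.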
no code implementations • 10 May 2021 • Jie Ran, Rui Lin, Hayden K. H. So, Graziano Chesi, Ngai Wong
Elasticities in depth, width, kernel size, and resolution have been explored for compressing deep neural networks (DNNs).
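One common way such elasticity is realized (a generic sketch of elastic width via channel slicing, not necessarily this paper's mechanism) is to carve sub-networks out of a single weight tensor:

```python
import numpy as np

# Generic elastic-width sketch: one full conv weight serves several
# narrower sub-networks by slicing its leading channels.
W = np.random.randn(64, 64, 3, 3)     # full conv weight: (out, in, k, k)

def elastic_slice(W, out_ratio, in_ratio):
    """Leading-channel slice that defines a narrower sub-network."""
    o = int(W.shape[0] * out_ratio)
    i = int(W.shape[1] * in_ratio)
    return W[:o, :i]

print(elastic_slice(W, 0.5, 0.5).shape)   # (32, 32, 3, 3)
```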
1 code implementation • 8 May 2021 • Rui Lin, Jie Ran, Dongpeng Wang, King Hung Chiu, Ngai Wong
Recent results have revealed an interesting observation in a trained convolutional neural network (CNN), namely, that the rank of a feature map channel matrix remains surprisingly constant regardless of the input image.
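A minimal sketch of how one might probe this observation, using random data in place of activations from a trained CNN (where the nearly constant, typically non-full ranks would actually show up):

```python
import numpy as np

# Treat each channel of a feature map as an H x W matrix and compare
# its numerical rank across input images. Random data stands in for
# real activations here, so these ranks are trivially full.
feats = np.random.randn(8, 64, 14, 14)    # (images, channels, H, W)
ranks = [np.linalg.matrix_rank(feats[n, 0], tol=1e-2) for n in range(8)]
print(ranks)   # with trained activations, such ranks stay nearly constant
```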