no code implementations • 2 Jul 2023 • Kevin Bui, Fanghui Xue, Fredrick Park, Yingyong Qi, Jack Xin
This time-consuming, three-step process is a result of using subgradient descent to train CNNs.
no code implementations • 10 Feb 2023 • Zhijian Li, Biao Yang, Penghang Yin, Yingyong Qi, Jack Xin
In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNNs).
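The excerpt does not spell out the FA loss; a minimal sketch of one plausible way to combine a feature-affinity matching term with standard KD during quantization-aware training is shown below (the affinity definition, loss weights, and all names are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def feature_affinity(feats):
    # Pairwise cosine-similarity ("affinity") matrix over the batch,
    # computed from flattened intermediate features.
    f = feats.flatten(start_dim=1)                 # (B, D)
    f = F.normalize(f, dim=1)
    return f @ f.t()                               # (B, B)

def fa_kd_loss(student_logits, teacher_logits,
               student_feats, teacher_feats,
               T=4.0, alpha=0.5, beta=0.5):
    # Standard KD term: match softened teacher/student output distributions.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Hypothetical FA term: match batch-wise feature affinity matrices
    # of the full-precision teacher and the quantized student.
    fa = F.mse_loss(feature_affinity(student_feats),
                    feature_affinity(teacher_feats))
    return alpha * kd + beta * fa
```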
no code implementations • 16 Apr 2022 • Fanghui Xue, Biao Yang, Yingyong Qi, Jack Xin
Many researchers have shown that transformers perform as well as convolutional neural networks on many computer vision tasks.
1 code implementation • 3 Oct 2020 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming, compressing a CNN architecture further in terms of memory storage while preserving its accuracy after channel pruning.
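The transformed $\ell_1$ (T$\ell_1$) penalty referenced here is commonly defined as $\rho_a(t) = \frac{(a+1)|t|}{a+|t|}$, which interpolates between $\ell_0$ (as $a \to 0$) and $\ell_1$ (as $a \to \infty$). A minimal sketch of applying it to the batch-normalization scaling factors that network slimming penalizes (the value of `a`, the weight `lam`, and the layer selection are illustrative assumptions):

```python
import torch

def tl1_penalty(gammas, a=1.0):
    # Transformed L1: rho_a(t) = (a + 1) * |t| / (a + |t|),
    # interpolating between L0 (a -> 0) and L1 (a -> infinity).
    t = gammas.abs()
    return ((a + 1.0) * t / (a + t)).sum()

def slimming_regularizer(model, a=1.0, lam=1e-4):
    # Sum the TL1 penalty over all BatchNorm scaling factors (weight = gamma),
    # as in network slimming, so near-zero channels can be pruned afterwards.
    reg = 0.0
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            reg = reg + tl1_penalty(m.weight, a)
    return lam * reg
```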
no code implementations • 10 Aug 2020 • Fanghui Xue, Yingyong Qi, Jack Xin
Differentiable architecture search (DARTS) is an effective method for data-driven neural network design based on solving a bilevel optimization problem.
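In the standard DARTS formulation, the architecture parameters $\alpha$ sit in the upper level and the network weights $w$ in the lower level:

$$\min_{\alpha}\ \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\alpha),\,\alpha\bigr) \quad \text{subject to} \quad w^{*}(\alpha) = \arg\min_{w}\ \mathcal{L}_{\mathrm{train}}(w,\,\alpha).$$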
1 code implementation • 30 Apr 2020 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
The TRP-trained network inherently has a low-rank structure and can be approximated with negligible performance loss, eliminating the fine-tuning step after low-rank decomposition.
no code implementations • 17 Dec 2019 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.
no code implementations • 24 Oct 2019 • Eyasu Mequanint, Shuai Zhang, Bijan Forutanpour, Yingyong Qi, Ning Bi
To alleviate this issue, we propose a weakly supervised method that uses the accurate annotations of the synthetic dataset to learn the degree of eye openness, and the weakly labeled (open or closed) real-world eye dataset to control the domain shift.
1 code implementation • 9 Oct 2019 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
To accelerate DNN inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.
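To make the rationale concrete: a rank-$k$ truncated SVD replaces one dense layer $y = Wx$ (about $mn$ multiply-adds) with two thinner layers of total cost $k(m+n)$, which is much cheaper when $k \ll \min(m, n)$. A minimal NumPy sketch (sizes and rank are arbitrary):

```python
import numpy as np

m, n, k = 512, 1024, 64                    # output dim, input dim, target rank
W = np.random.randn(m, n)                  # stand-in for a trained weight matrix
x = np.random.randn(n)

# Truncated SVD: W ~= A @ B with A of shape (m, k) and B of shape (k, n),
# i.e. k*(m + n) parameters instead of m*n.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]                       # (m, k)
B = Vt[:k, :]                              # (k, n)

y_full = W @ x
y_lowrank = A @ (B @ x)                    # two cheap matmuls instead of one big one
print(np.linalg.norm(y_full - y_lowrank) / np.linalg.norm(y_full))
```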
no code implementations • ICLR 2019 • Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin
We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (which is unavailable during training), and its negation is a descent direction for minimizing the population loss.
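A minimal PyTorch sketch of a straight-through estimator: the forward pass applies the piecewise-constant quantizer, whose true gradient is zero almost everywhere, while the backward pass substitutes the derivative of a smoother proxy; the sign quantizer and clipped-identity proxy below are illustrative choices, not necessarily the ones analyzed in the paper.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)               # piecewise constant: true gradient is 0 a.e.

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Coarse gradient: pretend the forward map was clip(x, -1, 1),
        # so the substitute "derivative" is 1 on [-1, 1] and 0 outside.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

x = torch.randn(8, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()
print(x.grad)
```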
no code implementations • 24 Jan 2019 • Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin
In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.
1 code implementation • 20 Dec 2018 • Xin Li, Shuai Zhang, Bolan Jiang, Yingyong Qi, Mooi Choo Chuah, Ning Bi
A complex deep learning model with high accuracy runs slowly on resource-limited devices, while a lightweight model that runs much faster loses accuracy.
no code implementations • 6 Dec 2018 • Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong
Network quantization is an effective method for deploying neural networks on memory- and energy-constrained mobile devices.
1 code implementation • 6 Dec 2018 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
We propose Trained Rank Pruning (TRP), which alternates between low-rank approximation and training.
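A minimal sketch of the alternation this sentence describes, assuming truncated SVD as the low-rank operator applied to each weight matrix every few optimization steps (the period, target rank, and restriction to linear layers are illustrative assumptions, not the paper's exact procedure):

```python
import torch

def project_low_rank(W, k):
    # Truncated-SVD projection of a 2-D weight matrix onto rank <= k.
    k = min(k, min(W.shape))
    U, s, Vh = torch.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vh[:k, :]

def train_with_rank_projection(model, loss_fn, loader, rank=32,
                               project_every=20, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(loader):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        # Periodically project weights back onto the low-rank set,
        # then keep training from the projected point.
        if step % project_every == 0:
            with torch.no_grad():
                for m in model.modules():
                    if isinstance(m, torch.nn.Linear):
                        m.weight.copy_(project_low_rank(m.weight, rank))
```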
no code implementations • 15 Aug 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm for training fully quantized neural networks.
2 code implementations • 19 Jan 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We propose BinaryRelax, a simple two-phase algorithm for training deep neural networks with quantized weights.
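A minimal sketch of a two-phase scheme in this spirit, assuming phase one relaxes the hard quantization to a blend of the float weights and their quantization (with the blending weight increased over training) and phase two switches to exact quantization; the binary quantizer and the growth schedule are illustrative assumptions, not the paper's exact relaxation:

```python
import torch

def quantize_binary(w):
    # Simple scaled-sign quantizer (illustrative choice of quantizer Q).
    return w.abs().mean() * torch.sign(w)

def relaxed_weights(w_float, lam):
    # Phase one: blend float weights with Q(w); as lam grows, the blend
    # approaches exact quantization.
    return (w_float + lam * quantize_binary(w_float)) / (1.0 + lam)

def forward_weights(w_float, epoch, phase_two_start=80, lam0=1.0, growth=1.02):
    if epoch >= phase_two_start:
        return quantize_binary(w_float)     # phase two: exact quantization
    lam = lam0 * growth ** epoch            # illustrative growth schedule
    return relaxed_weights(w_float, lam)
```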
no code implementations • 19 Dec 2016 • Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin
We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs).
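For context, a generic $b$-bit symmetric uniform weight quantizer of the kind such low bit-width methods build on (the clipping scale and rounding scheme are generic illustrations, not LBW-Net's specific quantizer):

```python
import torch

def uniform_quantize(w, bits=4):
    # Symmetric uniform quantization to 2**bits - 1 levels within
    # [-max|w|, max|w|]; a generic baseline, not LBW-Net's exact scheme.
    levels = 2 ** bits - 1
    half = levels // 2
    scale = w.abs().max() / half
    if scale == 0:
        return w.clone()
    q = torch.round(w / scale).clamp(-half, half)
    return q * scale
```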