Search Results for author: Yingyong Qi

Found 17 papers, 6 papers with code

A Proximal Algorithm for Network Slimming

no code implementations · 2 Jul 2023 · Kevin Bui, Fanghui Xue, Fredrick Park, Yingyong Qi, Jack Xin

This time-consuming, three-step process (training with a sparsity penalty, pruning, and fine-tuning) is a result of using subgradient descent to train CNNs.
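The proximal alternative to a plain subgradient step can be sketched as follows. For the ℓ1 penalty on batch-norm scaling factors, the proximal operator is soft-thresholding, which sets small factors exactly to zero; function names here are illustrative, not the paper's implementation.

```python
import numpy as np

def prox_l1(gamma, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding).
    Unlike a subgradient step, it zeros out small scaling factors exactly."""
    return np.sign(gamma) * np.maximum(np.abs(gamma) - lam, 0.0)

def prox_gradient_step(gamma, grad, lr, lam):
    """One proximal gradient update on the batch-norm scaling factors:
    a gradient step on the data loss, then the proximal map of the penalty."""
    return prox_l1(gamma - lr * grad, lr * lam)
```

Because channels are zeroed exactly during training, a separate prune-then-retrain pass becomes unnecessary.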

Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data

no code implementations · 10 Feb 2023 · Zhijian Li, Biao Yang, Penghang Yin, Yingyong Qi, Jack Xin

In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNNs).

Knowledge Distillation · Quantization
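A feature-affinity loss can be sketched in a few lines: compare the pairwise similarity structure of teacher and student features over a batch, which requires no labels. This is a minimal illustration of the idea, not the paper's exact loss.

```python
import numpy as np

def affinity(features):
    """Pairwise cosine-similarity (affinity) matrix over a batch of features."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return f @ f.T

def fa_loss(student_feats, teacher_feats):
    """Label-free distillation loss: match the student's affinity
    matrix to the teacher's in mean squared error."""
    d = affinity(student_feats) - affinity(teacher_feats)
    return float(np.mean(d ** 2))
```

Since only relative similarities between samples are matched, the loss can be computed on unlabeled data.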

Searching Intrinsic Dimensions of Vision Transformers

no code implementations · 16 Apr 2022 · Fanghui Xue, Biao Yang, Yingyong Qi, Jack Xin

It has been shown by many researchers that transformers perform as well as convolutional neural networks in many computer vision tasks.

Image Classification · object-detection (+1)

Improving Network Slimming with Nonconvex Regularization

1 code implementation · 3 Oct 2020 · Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming in compressing a CNN architecture in terms of memory storage while preserving its model accuracy after channel pruning.

Image Classification · object-detection (+3)
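The transformed ℓ1 (Tℓ1) penalty referenced above has a simple closed form, ρ_a(x) = (a+1)|x| / (a + |x|), which interpolates between ℓ0 (as a → 0) and ℓ1 (as a → ∞). A sketch, with illustrative defaults for the hyperparameters:

```python
import numpy as np

def tl1(x, a=1.0):
    """Transformed l1 penalty: (a+1)|x| / (a + |x|).
    Nonconvex, bounded by a+1, and zero only at x = 0."""
    ax = np.abs(x)
    return (a + 1.0) * ax / (a + ax)

def tl1_reg(weights, lam=1e-4, a=1.0):
    """Regularization term added to the training loss over, e.g.,
    batch-norm scaling factors (lam and a are illustrative)."""
    return float(lam * tl1(weights, a).sum())
```

Because the penalty saturates for large |x|, it shrinks small weights aggressively while barely biasing large ones, unlike plain ℓ1.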

RARTS: An Efficient First-Order Relaxed Architecture Search Method

no code implementations · 10 Aug 2020 · Fanghui Xue, Yingyong Qi, Jack Xin

Differentiable architecture search (DARTS) is an effective method for data-driven neural network design based on solving a bilevel optimization problem.

Bilevel Optimization · Network Pruning
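The bilevel structure behind DARTS-style search can be sketched with a toy alternating first-order update: inner weights follow the training loss, outer architecture parameters follow the validation loss. This is a generic first-order illustration (second-order terms ignored), not RARTS itself.

```python
def bilevel_step(w, alpha, lr_w, lr_a, grad_w, grad_alpha):
    """One alternating first-order step for a bilevel problem.
    grad_w: gradient of the training loss in the weights w;
    grad_alpha: gradient of the validation loss in the
    architecture parameters alpha (hypothetical callables)."""
    w = w - lr_w * grad_w(w, alpha)              # inner (training) step
    alpha = alpha - lr_a * grad_alpha(w, alpha)  # outer (validation) step
    return w, alpha
```

On a toy quadratic objective, alternating these steps drives both variables to the joint optimum.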

TRP: Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation · 30 Apr 2020 · Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

The TRP trained network inherently has a low-rank structure, and is approximated with negligible performance loss, thus eliminating the fine-tuning process after low rank decomposition.
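The core low-rank step can be sketched with a truncated SVD of a (reshaped) weight matrix; TRP-style training periodically applies such an approximation so the learned weights stay close to low rank. A minimal sketch, not the paper's implementation:

```python
import numpy as np

def low_rank_approx(W, rank):
    """Best rank-`rank` approximation of a weight matrix via
    truncated SVD (optimal in the Frobenius norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

When the trained weights are already nearly low rank, this projection loses almost nothing, which is why no fine-tuning is needed afterward.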

$\ell_0$ Regularized Structured Sparsity Convolutional Neural Networks

no code implementations · 17 Dec 2019 · Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.

Weakly-Supervised Degree of Eye-Closeness Estimation

no code implementations · 24 Oct 2019 · Eyasu Mequanint, Shuai Zhang, Bijan Forutanpour, Yingyong Qi, Ning Bi

To alleviate this issue, we propose a weakly-supervised method which utilizes the accurate annotation from the synthetic data set to learn an accurate degree of eye openness, and the weakly labeled (open or closed) real-world eye data set to control the domain shift.

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation · 9 Oct 2019 · Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

no code implementations · ICLR 2019 · Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin

We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available during training), and its negation is a descent direction for minimizing the population loss.

Negation
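The straight-through idea can be sketched directly: the forward pass uses a hard quantizer (here, sign), whose true derivative is zero almost everywhere, so the backward pass substitutes a surrogate "coarse" gradient. The surrogate choices below (identity, clipped-ReLU derivative) are standard illustrations, not an exhaustive list from the paper.

```python
import numpy as np

def quantize_sign(x):
    """Binary activation used in the forward pass."""
    return np.where(x >= 0, 1.0, -1.0)

def ste_grad(upstream, x, kind="clipped_relu"):
    """Coarse (straight-through) backward pass for the sign activation.
    'identity' passes the gradient through unchanged; 'clipped_relu'
    passes it only where |x| <= 1 (derivative of a clipped ReLU)."""
    if kind == "identity":
        return upstream
    return upstream * (np.abs(x) <= 1.0)
```

The paper's result concerns which such surrogates yield, in expectation, a descent direction for the population loss.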

AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks

no code implementations · 24 Jan 2019 · Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin

In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.

Graph Matching
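One simple Lipschitz continuous penalty that is exact on doubly stochastic matrices, illustrating the idea of penalizing toward permutations (not necessarily the paper's exact form), is the rowwise gap between the ℓ1 and ℓ∞ norms:

```python
import numpy as np

def permutation_penalty(P):
    """Sum over rows of ||row||_1 - ||row||_inf. For a doubly
    stochastic P (nonnegative, rows summing to 1) this is zero
    if and only if every row is one-hot, i.e. P is a permutation
    matrix; it is Lipschitz continuous in P."""
    return float(np.sum(np.abs(P).sum(axis=1) - np.abs(P).max(axis=1)))
```

Adding such a penalty to the training loss lets gradient methods search over relaxed (stochastic) matrices while still being driven toward exact permutations.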

DAC: Data-free Automatic Acceleration of Convolutional Networks

1 code implementation · 20 Dec 2018 · Xin Li, Shuai Zhang, Bolan Jiang, Yingyong Qi, Mooi Choo Chuah, Ning Bi

A complex deep learning model with high accuracy runs slowly on resource-limited devices, while a light-weight model that runs much faster loses accuracy.

Image Classification · Multi-Person Pose Estimation (+2)

DNQ: Dynamic Network Quantization

no code implementations · 6 Dec 2018 · Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong

Network quantization is an effective method for the deployment of neural networks on memory and energy constrained mobile devices.

Quantization

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation · 6 Dec 2018 · Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

We propose Trained Rank Pruning (TRP), which alternates between low-rank approximation and training.

Quantization

Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks

no code implementations · 15 Aug 2018 · Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks.

Binarization · Quantization

BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights

2 code implementations · 19 Jan 2018 · Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights.

Quantization
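The two-phase idea can be sketched as follows: phase one replaces the hard projection onto quantized weights with a relaxed blend of the float weights and their projection, with the blend tightening over training; phase two switches to the exact projection. This is a minimal sketch under that reading of the method, with an illustrative binary quantizer, not the paper's implementation.

```python
import numpy as np

def binary_quantize(w):
    """Hard projection onto {-s, +s} with scale s = mean|w|
    (an illustrative choice of scale)."""
    s = np.mean(np.abs(w))
    return s * np.sign(w)

def relaxed_quantize(w, lam):
    """Phase-1 relaxed projection: a convex blend of the float
    weights and their binary projection. As lam grows, the blend
    approaches the hard projection used in phase 2."""
    return (w + lam * binary_quantize(w)) / (1.0 + lam)
```

Increasing lam on a schedule lets training start with nearly continuous weights and end with fully quantized ones.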

Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection

no code implementations · 19 Dec 2016 · Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin

We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs).

object-detection · Object Detection (+1)
