Search Results for author: Penghang Yin

Found 14 papers, 4 papers with code

COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization

1 code implementation • 11 Mar 2024 • Aozhong Zhang, Zi Yang, Naigang Wang, Yingyong Qi, Jack Xin, Xin Li, Penghang Yin

Within a fixed layer, COMQ treats all the scaling factors and bit-codes as the variables of the reconstruction error.

Quantization
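The snippet above only names the objective, so here is a minimal NumPy sketch of the idea it describes: within one layer, a scaling factor and integer bit-codes are the free variables of a reconstruction error ||WX - (s*Q)X||^2, fitted without any backpropagation. The greedy alternating update below is illustrative only and is not necessarily COMQ's actual update rule.

```python
import numpy as np

def quantize_layer(W, X, bits=4, iters=5):
    """Per-layer post-training quantization sketch (no backprop).

    The variables of the reconstruction error are a scalar scale `s` and
    integer bit-codes `Q`, so that W @ X is approximated by (s * Q) @ X.
    Hypothetical greedy alternating update, for illustration only.
    """
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    s = np.abs(W).max() / qmax                      # initial scale
    Q = np.clip(np.round(W / s), qmin, qmax)
    for _ in range(iters):
        # Closed-form scale update: minimize ||W X - s (Q X)||^2 over s.
        QX, WX = Q @ X, W @ X
        s = float((WX * QX).sum() / (QX * QX).sum())
        # Re-fit the bit-codes given the new scale (simple rounding here).
        Q = np.clip(np.round(W / s), qmin, qmax)
    err = np.linalg.norm(W @ X - (s * Q) @ X)
    return s, Q, err

# Toy usage on random weights and calibration activations
rng = np.random.default_rng(0)
W, X = rng.standard_normal((64, 128)), rng.standard_normal((128, 32))
s, Q, err = quantize_layer(W, X)
print(f"scale={s:.4f}  reconstruction error={err:.3f}")
```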

Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data

no code implementations • 10 Feb 2023 • Zhijian Li, Biao Yang, Penghang Yin, Yingyong Qi, Jack Xin

In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNNs).

Knowledge Distillation Quantization
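As a rough illustration of the feature-affinity idea on label-free data, the sketch below matches pairwise similarity (affinity) matrices of teacher and student feature batches instead of labels. The cosine-similarity affinity and the mean-squared-error loss are assumptions for illustration; the paper's exact affinity definition and loss may differ.

```python
import numpy as np

def affinity(F):
    """Pairwise cosine-similarity (affinity) matrix of a feature batch.

    F: array of shape (batch, dim) -- flattened feature maps.
    """
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
    return F @ F.T                                  # (batch, batch) affinities

def feature_affinity_loss(teacher_feats, student_feats):
    """Label-free distillation signal: match student affinities to the teacher's."""
    A_t, A_s = affinity(teacher_feats), affinity(student_feats)
    return float(np.mean((A_t - A_s) ** 2))

# Toy usage: teacher and student features for the same unlabeled batch
rng = np.random.default_rng(0)
t = rng.standard_normal((16, 512))                   # teacher features
s = t + 0.1 * rng.standard_normal((16, 512))         # student features (noisy copy)
print("FA loss:", feature_affinity_loss(t, s))
```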

Recurrence of Optimum for Training Weight and Activation Quantized Networks

no code implementations • 10 Dec 2020 • Ziang Long, Penghang Yin, Jack Xin

Deep neural networks (DNNs) are quantized for efficient inference on resource-constrained platforms.

Quantization

Learning Quantized Neural Nets by Coarse Gradient Method for Non-linear Classification

no code implementations • 23 Nov 2020 • Ziang Long, Penghang Yin, Jack Xin

In this paper, we propose a class of STEs with certain monotonicity, and consider their applications to the training of a two-linear-layer network with quantized activation functions for non-linear multi-category classification.

General Classification

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

no code implementations • ICLR 2019 • Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin

We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (which is not available during training), and its negation is a descent direction for minimizing the population loss.

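Since the abstract refers to the coarse gradient produced by a straight-through estimator (STE), here is a minimal sketch of how such a gradient is formed for a quantized activation: the forward pass uses the hard quantizer, while the backward pass substitutes a proxy derivative (a clipped-ReLU derivative below, one common STE choice). Which proxy is "properly chosen" in the sense of the paper is not shown here; the toy problem and constants are assumptions.

```python
import numpy as np

def quantize_act(z, levels=4):
    """Hard 2-bit activation quantizer: its true derivative is zero almost everywhere."""
    return np.clip(np.round(z), 0, levels - 1)

def ste_backward(z, upstream, levels=4):
    """Coarse-gradient backward pass: replace the quantizer's a.e.-zero derivative
    with the derivative of clipped ReLU (one common STE proxy)."""
    proxy_deriv = ((z > 0) & (z < levels - 1)).astype(float)
    return upstream * proxy_deriv

# Coarse-gradient steps on a toy regression whose data were generated with w = 2
rng = np.random.default_rng(0)
x, w = rng.standard_normal(100), 1.5
target = quantize_act(2.0 * x)
for _ in range(100):
    z = w * x
    resid = quantize_act(z) - target                 # forward uses the hard quantizer
    coarse_grad = np.mean(ste_backward(z, resid) * x)
    w -= 0.1 * coarse_grad                           # move along the negated coarse gradient
print("learned scale w:", float(w))
```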

Non-ergodic Convergence Analysis of Heavy-Ball Algorithms

no code implementations • 5 Nov 2018 • Tao Sun, Penghang Yin, Dongsheng Li, Chun Huang, Lei Guan, Hao Jiang

For objective functions satisfying a relaxed strongly convex condition, linear convergence is established under weaker assumptions on the step size and inertial parameter than those made in the existing literature.
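For reference, the heavy-ball iteration the analysis concerns has the standard form x_{k+1} = x_k - alpha*grad f(x_k) + beta*(x_k - x_{k-1}); a minimal sketch on a strongly convex quadratic is below. The step size and inertial parameter are arbitrary illustrative values, not the ones from the paper's conditions.

```python
import numpy as np

def heavy_ball(grad, x0, alpha=0.1, beta=0.9, iters=200):
    """Heavy-ball iteration: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Toy strongly convex quadratic f(x) = 0.5*x^T A x - b^T x, minimizer A^{-1} b
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)
x_hb = heavy_ball(lambda x: A @ x - b, x0=np.zeros(2))
print("distance to minimizer:", np.linalg.norm(x_hb - x_star))
```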

Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization

1 code implementation • 23 Sep 2018 • Bao Wang, Alex T. Lin, Wei Zhu, Penghang Yin, Andrea L. Bertozzi, Stanley J. Osher

We improve the robustness of deep neural networks (DNNs) to adversarial attacks by using an interpolating function as the output activation.

Adversarial Attack Adversarial Defense +1

Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks

no code implementations • 15 Aug 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks.

Binarization Quantization
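The abstract introduces the coarse gradient for fully quantized networks; below is a heavily hedged sketch of one plausible blended update, in which float auxiliary weights are blended toward their quantization while also being moved along the negated coarse gradient evaluated at the quantized weights. The blending form (1 - rho)*w + rho*quant(w) - lr*g, the binarized quantizer, and all constants are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def quantize_weights(w):
    """Binarized weights with a per-tensor scale (illustrative quantizer)."""
    return np.mean(np.abs(w)) * np.sign(w)

def bcgd_step(w, coarse_grad, lr=0.05, rho=0.1):
    """One blended coarse-gradient step (assumed form, for illustration):
    blend the float weights toward their quantization, then move along the
    negated coarse gradient evaluated at the quantized weights."""
    w_q = quantize_weights(w)
    return (1.0 - rho) * w + rho * w_q - lr * coarse_grad(w_q)

# Toy usage: quadratic surrogate loss around a target with +/- 0.5 entries
rng = np.random.default_rng(0)
w = rng.standard_normal(8)
target = np.array([0.5, -0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5])
grad = lambda wq: wq - target                    # stand-in for the coarse gradient
for _ in range(300):
    w = bcgd_step(w, grad)
print("quantized weights:", quantize_weights(w))
```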

Laplacian Smoothing Gradient Descent

1 code implementation • 17 Jun 2018 • Stanley Osher, Bao Wang, Penghang Yin, Xiyang Luo, Farzin Barekat, Minh Pham, Alex Lin

We propose a class of very simple modifications of gradient descent and stochastic gradient descent.
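The "very simple modification" amounts, roughly, to smoothing the (stochastic) gradient before applying it. The sketch below pre-multiplies the gradient by the inverse of I + sigma*A, with A a 1-D discrete Laplacian under periodic boundary conditions, applied cheaply via FFT; the circulant choice and the sigma value are assumptions for illustration.

```python
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    """Smooth a flattened gradient with (I + sigma*A)^{-1}, where A is the
    1-D discrete Laplacian with periodic boundary, applied via FFT."""
    n = grad.size
    k = np.arange(n)
    denom = 1.0 + 2.0 * sigma - 2.0 * sigma * np.cos(2.0 * np.pi * k / n)
    return np.real(np.fft.ifft(np.fft.fft(grad) / denom))

def lsgd(grad_fn, x0, lr=0.1, sigma=1.0, iters=200):
    """Gradient descent with Laplacian-smoothed gradients (illustrative constants)."""
    x = x0.copy()
    for _ in range(iters):
        x -= lr * laplacian_smooth(grad_fn(x), sigma)
    return x

# Toy usage: noisy gradients of f(x) = 0.5*||x - 1||^2
rng = np.random.default_rng(0)
grad_fn = lambda x: (x - 1.0) + 0.3 * rng.standard_normal(x.size)
x = lsgd(grad_fn, x0=np.zeros(64))
print("mean error:", np.abs(x - 1.0).mean())
```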

BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights

2 code implementations • 19 Jan 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights.

Quantization
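The two-phase structure is sketched below under explicit assumptions: in phase one the float weights are pulled toward their quantization by a relaxed projection whose strength grows over time, and in phase two the hard quantization is used. The relaxed-projection formula (w + lambda*quant(w)) / (1 + lambda), the binarized quantizer, and the schedule are assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def quant(w):
    """Binary weights with a per-tensor scale (illustrative quantizer)."""
    return np.mean(np.abs(w)) * np.sign(w)

def binary_relax(grad_fn, w0, lr=0.1, growth=1.02, phase1=300, phase2=100):
    """Two-phase training sketch with a relaxed projection toward quantized weights."""
    w, lam = w0.copy(), 1.0
    for _ in range(phase1):                        # phase 1: relaxed quantization
        w_r = (w + lam * quant(w)) / (1.0 + lam)   # assumed relaxed projection
        w -= lr * grad_fn(w_r)                     # gradient evaluated at relaxed weights
        lam *= growth                              # tighten the relaxation over time
    for _ in range(phase2):                        # phase 2: hard quantization
        w -= lr * grad_fn(quant(w))
    return quant(w)

# Toy usage: quadratic loss around a target with +/- 0.5 entries
target = np.array([0.5, -0.5, 0.5, -0.5, 0.5, 0.5, -0.5, -0.5])
w_q = binary_relax(lambda v: v - target, w0=0.01 * np.ones(8))
print("final quantized weights:", w_q)
```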

Stochastic Backward Euler: An Implicit Gradient Descent Algorithm for $k$-means Clustering

no code implementations • 21 Oct 2017 • Penghang Yin, Minh Pham, Adam Oberman, Stanley Osher

In this paper, we propose an implicit gradient descent algorithm for the classic $k$-means problem.

Clustering
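An implicit gradient descent (backward Euler) step updates the centroids via C_{k+1} = C_k - eta*grad f(C_{k+1}), i.e., a proximal step on the k-means objective. The sketch below approximates that implicit equation with a few fixed-point iterations; the stochastic (mini-batch/perturbation) aspect from the paper is omitted, and the inner-solver details and constants are assumptions.

```python
import numpy as np

def kmeans_grad(C, X):
    """Gradient w.r.t. centroids C of the averaged k-means objective
    f(C) = (1/2n) * sum_i ||x_i - c_{a(i)}||^2, with assignments a(i) recomputed from C."""
    n = X.shape[0]
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # (n, k) squared distances
    a = d.argmin(axis=1)                                  # nearest-centroid assignments
    G = np.zeros_like(C)
    for j in range(C.shape[0]):
        members = X[a == j]
        if len(members):
            G[j] = (len(members) * C[j] - members.sum(axis=0)) / n
    return G

def backward_euler_step(C, X, eta=0.5, inner=15):
    """Implicit (backward Euler) step C_new = C - eta*grad(C_new),
    approximated here by a short fixed-point iteration."""
    C_new = C.copy()
    for _ in range(inner):
        C_new = C - eta * kmeans_grad(C_new, X)
    return C_new

# Toy usage: two well-separated clusters in the plane
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
C = rng.standard_normal((2, 2))
for _ in range(30):
    C = backward_euler_step(C, X)
print("centroids:\n", C)
```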

Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection

no code implementations • 19 Dec 2016 • Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin

We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs).

Object Detection +1
