Search Results for author: Thu Dinh

Found 4 papers, 1 paper with code

Quantization-Guided Training for Compact TinyML Models

no code implementations • 10 Mar 2021 • Sedigh Ghamari, Koray Ozcan, Thu Dinh, Andrey Melnikov, Juan Carvajal, Jan Ernst, Sek Chai

We propose a Quantization Guided Training (QGT) method to guide DNN training towards optimized low-bit-precision targets and reach extreme compression levels below 8-bit precision.

Human Detection · Quantization
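The abstract describes steering DNN training toward low-bit-precision targets. As a minimal sketch of one common way to express such guidance (not the authors' implementation; the uniform quantizer, `bits`, and the penalty weight `lam` are assumptions), a regularizer can pull weights toward their quantized counterparts:

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 positive levels for 4-bit
    scale = np.max(np.abs(w)) / levels
    if scale == 0:
        scale = 1.0                       # all-zero tensor: any scale works
    return np.round(w / scale) * scale

def qgt_loss(task_loss, w, bits=4, lam=0.01):
    """Task loss plus a penalty on the distance to the quantized weights,
    so training is guided toward values representable at low precision."""
    penalty = np.sum((w - quantize(w, bits)) ** 2)
    return task_loss + lam * penalty
```

The penalty vanishes once the weights sit exactly on the quantization grid, so it only nudges training rather than hard-quantizing.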

Subtensor Quantization for Mobilenets

no code implementations • 4 Nov 2020 • Thu Dinh, Andrey Melnikov, Vasilios Daskalopoulos, Sek Chai

Quantization for deep neural networks (DNNs) has enabled developers to deploy models with less memory and more efficient low-power inference.

Image Classification · Quantization
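The paper's subtensor scheme is not detailed in this snippet. As a generic illustration of quantizing at a finer granularity than a whole layer (the weight layout, 8-bit default, and per-output-channel grouping are assumptions, not the paper's exact method), here is a per-channel uniform quantizer:

```python
import numpy as np

def quantize_per_channel(w, bits=8):
    """Quantize a conv weight tensor shaped (out_ch, in_ch, kh, kw) with
    one scale per output channel instead of one scale per layer, which
    typically cuts quantization error for layers with uneven channel ranges."""
    flat = w.reshape(w.shape[0], -1)                       # one row per channel
    levels = 2 ** (bits - 1) - 1
    scales = np.abs(flat).max(axis=1, keepdims=True) / levels
    scales[scales == 0] = 1.0                              # guard all-zero channels
    q = np.round(flat / scales) * scales
    return q.reshape(w.shape)
```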

Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets

no code implementations • 2 Mar 2020 • Thu Dinh, Bao Wang, Andrea L. Bertozzi, Stanley J. Osher

In this paper, we focus on a co-design of efficient DNN compression algorithms and sparse neural architectures for robust and accurate deep learning.
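The Feynman-Kac-based co-design itself is not reproduced here. As a generic baseline for the channel-pruning side of such compression (magnitude-based L1 pruning; the `keep_ratio` heuristic is an assumption, not the paper's criterion), a sketch:

```python
import numpy as np

def prune_channels(w, keep_ratio=0.5):
    """Zero out the output channels of `w` (shape (out_ch, ...)) with the
    smallest L1 norms, keeping a `keep_ratio` fraction of channels."""
    norms = np.abs(w.reshape(w.shape[0], -1)).sum(axis=1)  # L1 norm per channel
    k = max(1, int(round(keep_ratio * w.shape[0])))
    keep = np.argsort(norms)[-k:]                          # largest-norm channels
    mask = np.zeros(w.shape[0], dtype=bool)
    mask[keep] = True
    pruned = w.copy()
    pruned[~mask] = 0.0
    return pruned, mask
```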

Convergence of a Relaxed Variable Splitting Coarse Gradient Descent Method for Learning Sparse Weight Binarized Activation Neural Networks

2 code implementations • 25 Jan 2019 • Thu Dinh, Jack Xin

In this paper, we study the problem of coarse gradient descent (CGD) learning of a one hidden layer convolutional neural network (CNN) with binarized activation function and sparse weights.

Optimization and Control · MSC classes: 90C26, 97R40, 68T05
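A coarse gradient replaces the almost-everywhere-zero derivative of a binarized activation with a nonzero surrogate so descent can proceed. A toy single-neuron sketch (the clipped-identity proxy and the quadratic loss are illustrative assumptions, not the paper's exact one-hidden-layer CNN setup):

```python
def binact(z):
    """Binarized (step) activation: 1 if z > 0, else 0."""
    return 1.0 if z > 0 else 0.0

def coarse_grad(z):
    """Coarse (surrogate) derivative of the step activation: the true
    derivative is zero almost everywhere, so CGD substitutes a proxy,
    here the clipped-identity indicator 1_{|z| <= 1}."""
    return 1.0 if abs(z) <= 1 else 0.0

def cgd_step(w, x, y, lr=0.1):
    """One coarse gradient descent step on the loss 0.5*(binact(w*x) - y)**2
    for a single binarized-activation neuron."""
    z = w * x
    g = (binact(z) - y) * coarse_grad(z) * x   # chain rule with the surrogate
    return w - lr * g
```

When the prediction already matches the label, the coarse gradient is zero and the weight is left unchanged; otherwise the surrogate supplies a descent direction the true gradient cannot.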
