Search Results for author: Prannoy Pilligundla

Found 7 papers, 3 papers with code

WaveQ: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Regularization

1 code implementation • 1 Jan 2021 • Ahmed T. Elthakeb, Prannoy Pilligundla, Tarek Elgindi, FatemehSadat Mireshghallah, Charles-Alban Deledalle, Hadi Esmaeilzadeh

We show how WaveQ balances compute efficiency and accuracy, and provides a heterogeneous bitwidth assignment for quantization of a large variety of deep networks (AlexNet, CIFAR-10, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy.

Quantization
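
The sinusoidal regularizer lends itself to a compact sketch. Below is a minimal PyTorch illustration of the idea, assuming a uniform quantization grid with step 1/(2^b − 1); the function name, grid, and strength are illustrative, not the paper's implementation.

    import torch

    def sinusoidal_reg(weights, bits=4, strength=1e-2):
        # sin^2 vanishes exactly on multiples of the quantization step,
        # so this differentiable penalty pulls weights toward the grid.
        step = 1.0 / (2 ** bits - 1)  # assumed uniform grid
        return strength * torch.sin(torch.pi * weights / step).pow(2).mean()

    # Usage in a training step (model, criterion, x, y are placeholders):
    # loss = criterion(model(x), y)
    # loss = loss + sum(sinusoidal_reg(p) for p in model.parameters())
    # loss.backward()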

WaveQ: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Adaptive Regularization

no code implementations • 29 Feb 2020 • Ahmed T. Elthakeb, Prannoy Pilligundla, FatemehSadat Mireshghallah, Tarek Elgindi, Charles-Alban Deledalle, Hadi Esmaeilzadeh

We show how SINAREQ balances compute efficiency and accuracy, and provides a heterogeneous bitwidth assignment for quantization of a large variety of deep networks (AlexNet, CIFAR-10, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy.

Quantization
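
What distinguishes the gradient-based approach from a fixed-grid regularizer is that the sinusoid's period, and hence the bitwidth, can itself be learned by gradient descent. A minimal sketch of that mechanism, using the same assumed grid parameterization as above (the paper's exact form may differ):

    import torch

    class LearnableBitwidthReg(torch.nn.Module):
        def __init__(self, init_bits=4.0):
            super().__init__()
            # the bitwidth is a continuous, trainable parameter
            self.bits = torch.nn.Parameter(torch.tensor(init_bits))

        def forward(self, weights):
            # the quantization step, and so the sinusoid's period,
            # is differentiable with respect to self.bits
            step = 1.0 / (2.0 ** self.bits - 1.0)
            return torch.sin(torch.pi * weights / step).pow(2).mean()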

Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation

1 code implementation • ICLR 2020 • Byung Hoon Ahn, Prannoy Pilligundla, Amir Yazdanbakhsh, Hadi Esmaeilzadeh

This solution, dubbed Chameleon, leverages reinforcement learning, whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses the costly samples (real hardware measurements) on representative points but also uses domain-knowledge-inspired logic to improve the samples themselves.
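
The "representative points" idea reduces to clustering the candidate configurations and paying the hardware-measurement cost only once per cluster. A rough sketch with scikit-learn, where candidates is an (n, d) array of knob vectors; all names are illustrative rather than Chameleon's API:

    import numpy as np
    from sklearn.cluster import KMeans

    def representative_configs(candidates, k=8, seed=0):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(candidates)
        reps = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            # measure only the member nearest to each centroid
            dists = np.linalg.norm(candidates[members] - km.cluster_centers_[c], axis=1)
            reps.append(members[np.argmin(dists)])
        return np.asarray(reps)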

Reinforcement Learning and Adaptive Sampling for Optimized DNN Compilation

1 code implementation • 30 May 2019 • Byung Hoon Ahn, Prannoy Pilligundla, Hadi Esmaeilzadeh

Further experiments confirm that our adaptive sampling can even improve AutoTVM's simulated annealing by 4.00x.

Clustering reinforcement-learning +1
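
For context, AutoTVM's baseline search is a simulated-annealing walk over the configuration space, along the lines of the generic skeleton below (illustrative, not AutoTVM's API); the paper's adaptive sampling concentrates the expensive cost() evaluations on clustered, representative candidates.

    import math, random

    def simulated_anneal(cost, neighbor, x0, steps=1000, t0=1.0, alpha=0.995):
        x, c, t = x0, cost(x0), t0
        best, best_c = x, c
        for _ in range(steps):
            y = neighbor(x)
            cy = cost(y)
            # accept downhill moves, and uphill moves with Boltzmann probability
            if cy < c or random.random() < math.exp((c - cy) / t):
                x, c = y, cy
                if c < best_c:
                    best, best_c = x, c
            t *= alpha  # geometric cooling schedule
        return best, best_c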

SinReQ: Generalized Sinusoidal Regularization for Low-Bitwidth Deep Quantized Training

no code implementations • 4 May 2019 • Ahmed T. Elthakeb, Prannoy Pilligundla, Hadi Esmaeilzadeh

To further mitigate this loss, we propose a novel sinusoidal regularization, called SinReQ, for deep quantized training.

Quantization
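
A plausible form of the SinReQ-regularized objective, assuming a uniform grid with step δ fixed by the target bitwidth b (the paper's exact parameterization may differ):

    \mathcal{L} \;=\; \mathcal{L}_{\text{task}} \;+\; \beta \sum_{i} \sin^{2}\!\left(\frac{\pi\, w_{i}}{\delta}\right),
    \qquad \delta = \frac{1}{2^{b}-1}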

ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks

no code implementations • 5 Nov 2018 • Ahmed T. Elthakeb, Prannoy Pilligundla, FatemehSadat Mireshghallah, Amir Yazdanbakhsh, Hadi Esmaeilzadeh

We show how ReLeQ can balance speed and quality, and provide an asymmetric general solution for quantization of a large variety of deep networks (AlexNet, CIFAR-10, LeNet, MobileNet-V1, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy (≤ 0.3% loss) while minimizing the computation and storage cost.

Quantization reinforcement-learning +1
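
ReLeQ's search structure, an agent assigning a bitwidth per layer and being rewarded for low accuracy loss at low average bitwidth, can be caricatured with a toy value-table loop. The actual paper uses a policy-based agent; everything below (layer names, reward weights, the dummy evaluate) is illustrative only.

    import random

    LAYERS = ["conv1", "conv2", "fc1"]   # illustrative layer names
    BITS = [2, 3, 4, 5, 8]               # candidate bitwidths per layer

    def evaluate(assignment):
        # placeholder for quantize-then-finetune: returns (accuracy_drop, mean_bits)
        mean_bits = sum(assignment.values()) / len(assignment)
        return random.random() * (8 - mean_bits) / 8, mean_bits

    q = {(l, b): 0.0 for l in LAYERS for b in BITS}  # per-(layer, bitwidth) values
    for episode in range(200):
        # epsilon-greedy pick of a bitwidth for every layer
        pick = {l: (random.choice(BITS) if random.random() < 0.1
                    else max(BITS, key=lambda b, l=l: q[(l, b)]))
                for l in LAYERS}
        drop, bits = evaluate(pick)
        r = -drop - 0.1 * bits           # trade accuracy loss against cost
        for l, b in pick.items():
            q[(l, b)] += 0.1 * (r - q[(l, b)])  # incremental value update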
