no code implementations • 26 Jan 2024 • Wen Ma, Qiuwen Lou, Arman Kazemi, Julian Faraone, Tariq Afzal
Video quality can degrade when users stream over limited internet bandwidth.
no code implementations • 1 Aug 2023 • Manasa Manohara, Sankalp Dayal, Tariq Afzal, Rahul Bakshi, Kahkuen Fu
Accordingly, deep learning models cannot easily be quantized for diverse fixed-point hardware, mainly because each platform has slightly different quantization requirements.
no code implementations • 12 May 2023 • Suhaila M. Shakiah, Rupak Vignesh Swaminathan, Hieu Duy Nguyen, Raviteja Chinta, Tariq Afzal, Nathan Susanj, Athanasios Mouchtaris, Grant P. Strimel, Ariya Rastrow
Machine learning model weights and activations are represented in full precision during training.
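To illustrate the gap between full-precision training and fixed-point deployment that this line of work addresses, below is a minimal sketch of uniform symmetric quantization of a weight tensor. The function names, the 8-bit width, and the use of a single per-tensor scale are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, num_bits: int = 8):
    """Uniform symmetric quantization of a full-precision tensor to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax  # map the largest magnitude onto qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate full-precision tensor from integers and a scale."""
    return q.astype(np.float32) * scale

# Example: a full-precision weight tensor and its 8-bit fixed-point approximation.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_symmetric(w, num_bits=8)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```

The rounding error printed at the end is exactly the mismatch that post-training quantization introduces and that quantization-aware training tries to account for during optimization.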
no code implementations • 30 Jun 2022 • Kai Zhen, Hieu Duy Nguyen, Raviteja Chinta, Nathan Susanj, Athanasios Mouchtaris, Tariq Afzal, Ariya Rastrow
We present a novel sub-8-bit quantization-aware training (S8BQAT) scheme for 8-bit neural network accelerators.
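The sketch below is not the S8BQAT scheme from the paper; it is only a generic example of sub-8-bit quantization-aware training using fake quantization with a straight-through estimator in PyTorch. The class names and the 6-bit width are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    """Quantize-dequantize in the forward pass; pass gradients through unchanged."""
    def __init__(self, num_bits: int = 6):
        super().__init__()
        self.qmax = 2 ** (num_bits - 1) - 1

    def forward(self, x):
        scale = x.detach().abs().max().clamp(min=1e-8) / self.qmax
        x_q = torch.clamp(torch.round(x / scale), -self.qmax - 1, self.qmax) * scale
        # Straight-through estimator: forward uses x_q, backward sees the identity.
        return x + (x_q - x).detach()

class QuantLinear(nn.Module):
    """Linear layer whose weights and activations are fake-quantized during training."""
    def __init__(self, in_features: int, out_features: int, num_bits: int = 6):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.wq = FakeQuant(num_bits)
        self.aq = FakeQuant(num_bits)

    def forward(self, x):
        w_q = self.wq(self.linear.weight)
        return self.aq(nn.functional.linear(x, w_q, self.linear.bias))

# Tiny usage example: one training step on random data.
model = QuantLinear(16, 4, num_bits=6)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
```

Training against the quantized forward pass lets the optimizer compensate for rounding error, which is the general motivation behind quantization-aware training schemes of this kind.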
no code implementations • CVPR 2017 • Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, Hai Li
Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet.
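As a quick check of the quoted ratios, assuming the commonly cited parameter counts of roughly 61M for AlexNet and 6.9M for GoogLeNet (values not stated in the snippet itself):

```python
# Ratio check under the assumed reference parameter counts.
ours, alexnet, googlenet = 4.1e6, 61e6, 6.9e6
print(f"vs AlexNet:   {ours / alexnet:.1%}")    # ~6.7%
print(f"vs GoogLeNet: {ours / googlenet:.1%}")  # ~59%
```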