no code implementations • 18 Jun 2018 • Zecheng He, Aswin Raghavan, Guangyuan Hu, Sek Chai, Ruby Lee
Specifically, we first train a temporal deep learning model, using only normal hardware performance counter (HPC) readings from legitimate processes that run daily in these power-grid systems, to model the normal behavior of the power-grid controller.
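As a rough sketch of this train-on-normal-data-only setup, assuming a PyTorch LSTM that predicts the next HPC reading and flags large prediction errors as anomalous; the class names and threshold below are illustrative, not from the paper:

```python
# Minimal sketch (PyTorch): an LSTM fit only on normal HPC traces;
# readings the model cannot predict well are flagged as anomalous.
import torch
import torch.nn as nn

class HPCNextStepModel(nn.Module):
    """Predicts the next HPC reading from a window of past readings."""
    def __init__(self, n_counters: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_counters, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_counters)

    def forward(self, x):                  # x: (batch, window, n_counters)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predicted next reading

def train_on_normal(model, windows, targets, epochs=10, lr=1e-3):
    """Fit using only traces from legitimate daily processes."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(windows), targets)
        loss.backward()
        opt.step()
    return model

def is_anomalous(model, window, next_reading, threshold=0.1):
    """A reading far from the model's prediction is flagged abnormal."""
    with torch.no_grad():
        err = torch.mean((model(window) - next_reading) ** 2).item()
    return err > threshold
```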
no code implementations • 16 Aug 2017 • Aswin Raghavan, Mohamed Amer, Sek Chai, Graham Taylor
The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over all real values.
no code implementations • 27 Mar 2017 • Aswin Raghavan, Mohamed Amer, Timothy Shields, David Zhang, Sek Chai
GPU activity prediction is an important and complex problem.
no code implementations • 24 Mar 2017 • Sek Chai, Aswin Raghavan, David Zhang, Mohamed Amer, Tim Shields
In this paper, we present a unique approach that uses lower-precision weights for a more efficient and faster training phase.
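One common way to realize low-precision training, shown here as a hedged sketch rather than the paper's exact scheme, is to keep full-precision master weights and quantize them on the forward pass with a straight-through estimator:

```python
# Sketch (PyTorch): forward pass uses quantized weights; gradients
# flow straight through to full-precision master weights. Bit-width
# and function names are assumptions, not the paper's method.
import torch

class QuantizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, bits):
        # Uniform quantization onto a 2**bits-level grid in [-1, 1].
        levels = 2 ** bits - 1
        w_c = torch.clamp(w, -1.0, 1.0)
        return torch.round((w_c + 1) / 2 * levels) / levels * 2 - 1

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through estimator: pass the gradient unchanged.
        return grad_out, None

def quantized_linear(x, weight_fp, bits=4):
    """Forward with quantized weights; backward updates weight_fp."""
    w_q = QuantizeSTE.apply(weight_fp, bits)
    return x @ w_q.t()
```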
no code implementations • 12 Nov 2018 • Samyak Parajuli, Aswin Raghavan, Sek Chai
The use of deep neural networks in edge computing devices hinges on the balance between accuracy and computational complexity.
no code implementations • 29 Nov 2018 • Kilho Son, Jesse Hostetler, Sek Chai
Complex image processing and computer vision systems often consist of a processing pipeline of functional modules.
no code implementations • ICLR 2018 • Mohamed Amer, Aswin Raghavan, Graham W. Taylor, Sek Chai
Our key idea is to control the expressive power of the network by dynamically quantizing the range and set of values that the parameters can take.
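As a rough illustration of this idea, the sketch below (PyTorch) projects parameters onto a small uniform value set and widens that set as training proceeds; the schedule and function names are assumptions, not the paper's exact procedure:

```python
# Sketch: constrain expressive power by quantizing parameters to a
# small value set, then relax (widen) the set over training.
import torch

def quantize_to_set(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Project weights onto a uniform grid of 2**bits values spanning
    the tensor's current dynamic range."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    step = (hi - lo) / max(levels, 1)
    if step == 0:                       # constant tensor: nothing to do
        return w.clone()
    return lo + torch.round((w - lo) / step) * step

def bit_schedule(epoch: int, start_bits=2, max_bits=8, grow_every=5):
    """Start coarse (few allowed values), add precision over time."""
    return min(max_bits, start_bits + epoch // grow_every)
```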
no code implementations • 22 Feb 2019 • Aswin Raghavan, Jesse Hostetler, Sek Chai
Our research focuses on understanding and applying biological memory transfer to new AI systems that can fundamentally improve their performance throughout their fielded lifetime.
no code implementations • 7 Oct 2019 • Prateeth Nayak, David Zhang, Sek Chai
Quantization for deep neural networks has afforded models for edge devices that use less on-board memory and enable efficient low-power inference.
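For context, a minimal example of the kind of post-training quantization this refers to: weights stored as low-bit integer codes plus a per-tensor scale, dequantized on the fly at inference. The NumPy functions below are illustrative, not the paper's method:

```python
# Sketch: symmetric per-tensor quantization; int codes + one float
# scale replace the float32 weights, cutting on-board memory ~4x at
# 8 bits. The int8 container below assumes bits <= 8.
import numpy as np

def quantize_tensor(w: np.ndarray, bits: int = 8):
    """Return integer codes and a per-tensor scale for weights w."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 127 for 8-bit
    scale = max(np.abs(w).max() / qmax, 1e-12)    # guard all-zero case
    codes = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference."""
    return codes.astype(np.float32) * scale
```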
no code implementations • 1 Nov 2020 • Hengyue Liu, Samyak Parajuli, Jesse Hostetler, Sek Chai, Bir Bhanu
Conditional computation for Deep Neural Networks (DNNs) reduces overall computational load and improves model accuracy by running a subset of the network.
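To make the mechanism concrete, a hedged sketch of per-sample gating in PyTorch; for clarity this version computes every block and only masks its output, whereas a real conditional-computation deployment would skip gated-off blocks entirely. All names are illustrative:

```python
# Sketch: a small gate scores each input and decides whether the
# residual body contributes, so effectively a subset of the network
# is active per sample.
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))
        self.gate = nn.Linear(dim, 1)   # scores whether to run the body

    def forward(self, x):               # x: (batch, dim)
        score = torch.sigmoid(self.gate(x))        # (batch, 1)
        mask = (score > 0.5).float()               # hard per-sample gate
        out = x + self.body(x)
        # Gated-off samples pass through unchanged.
        return mask * out + (1 - mask) * x
```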
no code implementations • 4 Nov 2020 • Thu Dinh, Andrey Melnikov, Vasilios Daskalopoulos, Sek Chai
Quantization for deep neural networks (DNNs) has enabled developers to deploy models with less memory and more efficient low-power inference.
no code implementations • 10 Mar 2021 • Sedigh Ghamari, Koray Ozcan, Thu Dinh, Andrey Melnikov, Juan Carvajal, Jan Ernst, Sek Chai
We propose a Quantization Guided Training (QGT) method to guide DNN training towards optimized low-bit-precision targets and reach extreme compression levels below 8-bit precision.
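A minimal sketch of the guided-training idea under stated assumptions: a penalty term pulls each weight toward its nearest point on a low-bit grid, so the trained weights survive aggressive quantization. The grid, bit-width, and weighting below are assumptions, not the published QGT formulation:

```python
# Sketch (PyTorch): loss = task_loss + lam * sum ||w - Q(w)||^2,
# nudging weights toward low-bit grid points during training.
import torch

def nearest_grid(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Nearest point on a uniform 2**bits-level grid over [-1, 1]."""
    levels = 2 ** bits - 1
    w_c = torch.clamp(w, -1.0, 1.0)
    return torch.round((w_c + 1) / 2 * levels) / levels * 2 - 1

def guided_loss(task_loss, params, bits=4, lam=1e-4):
    """Add a quantization-guidance penalty over all weight tensors."""
    penalty = sum(((p - nearest_grid(p.detach(), bits)) ** 2).sum()
                  for p in params)
    return task_loss + lam * penalty
```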