Search Results for author: Sek Chai

Found 12 papers, 0 papers with code

Quantization-Guided Training for Compact TinyML Models

no code implementations · 10 Mar 2021 · Sedigh Ghamari, Koray Ozcan, Thu Dinh, Andrey Melnikov, Juan Carvajal, Jan Ernst, Sek Chai

We propose a Quantization Guided Training (QGT) method to guide DNN training towards optimized low-bit-precision targets and reach extreme compression levels below 8-bit precision.

Human Detection · Quantization
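The QGT abstract above targets compression below 8-bit precision. As a point of reference, here is a minimal sketch of generic uniform symmetric fake-quantization (quantize-then-dequantize, as used in quantization-aware training broadly); it is not the QGT method itself, and the function name and test values are illustrative.

```python
import numpy as np

def fake_quantize(w, bits):
    """Simulate uniform symmetric quantization of a weight tensor.

    Rounds to 2**(bits-1) - 1 signed levels and dequantizes back, so
    the rounding error is visible to the training loss. Generic
    sketch only, not the paper's QGT algorithm.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

w = np.array([0.8, -0.3, 0.05, -0.9])
w4 = fake_quantize(w, bits=4)             # 4-bit: below 8-bit precision
assert np.max(np.abs(w - w4)) <= np.max(np.abs(w)) / (2 ** 3 - 1)
```

At 4 bits the rounding error stays within one quantization step, which is the kind of gap a guided training objective can then shrink further.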

Subtensor Quantization for Mobilenets

no code implementations · 4 Nov 2020 · Thu Dinh, Andrey Melnikov, Vasilios Daskalopoulos, Sek Chai

Quantization for deep neural networks (DNNs) has enabled developers to deploy models with less memory and more efficient low-power inference.

Image Classification · Quantization
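The general idea behind subtensor quantization can be sketched as giving each chunk of a tensor its own scale, so local dynamic range is tracked better than one global scale would allow. This is an illustrative sketch of that general idea, not the paper's exact partitioning of MobileNet layers; the function name and chunking scheme are assumptions.

```python
import numpy as np

def subtensor_quantize(w, n_chunks=4, bits=8):
    """Quantize each subtensor (chunk) with its own scale factor."""
    qmax = 2 ** (bits - 1) - 1
    out = np.empty_like(w)
    for chunk in np.array_split(np.arange(w.size), n_chunks):
        m = np.max(np.abs(w[chunk]))
        scale = m / qmax if m > 0 else 1.0
        out[chunk] = np.clip(np.round(w[chunk] / scale), -qmax, qmax) * scale
    return out

# A tensor with disparate ranges: per-chunk scales recover both
# the small and the large values exactly, even at 3 bits.
w = np.concatenate([np.full(8, 0.01), np.full(8, 1.0)])
per_chunk = subtensor_quantize(w, n_chunks=2, bits=3)
assert np.allclose(per_chunk, w)
```

With one global scale, the 0.01 values above would round to zero at 3 bits; per-subtensor scales avoid that.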

Dynamically Throttleable Neural Networks (TNN)

no code implementations · 1 Nov 2020 · Hengyue Liu, Samyak Parajuli, Jesse Hostetler, Sek Chai, Bir Bhanu

Conditional computation for Deep Neural Networks (DNNs) reduces the overall computational load and improves model accuracy by running only a subset of the network.
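"Running a subset of the network" can be pictured as a runtime utilization knob that selects how many units of a layer execute. Below is a minimal width-throttling sketch; in the paper the gating policy is learned, whereas here a fixed prefix of units is kept, and all names are illustrative.

```python
import numpy as np

def throttled_dense(x, W, b, utilization):
    """Run only the first `utilization` fraction of output units.

    Width-wise conditional computation sketch: skipped units output
    zero and their matrix columns are never touched, saving compute.
    """
    n_active = max(1, int(W.shape[1] * utilization))
    y = np.zeros(W.shape[1])
    y[:n_active] = x @ W[:, :n_active] + b[:n_active]
    return y

rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 16)), rng.normal(size=16)
x = rng.normal(size=8)
full = throttled_dense(x, W, b, utilization=1.0)
half = throttled_dense(x, W, b, utilization=0.5)
assert np.allclose(half[:8], full[:8]) and np.all(half[8:] == 0)
```

Lowering `utilization` trades accuracy for compute at inference time without retraining, which is the knob a throttleable network exposes.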

Bit Efficient Quantization for Deep Neural Networks

no code implementations · 7 Oct 2019 · Prateeth Nayak, David Zhang, Sek Chai

Quantization for deep neural networks has afforded models for edge devices that use less on-board memory and enable efficient low-power inference.

Clustering · Quantization
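Given the Clustering tag, one common clustering-based route to bit efficiency is weight sharing via 1-D k-means: all weights are represented by a small codebook of shared values, so each weight costs only log2(levels) bits plus the codebook. This is a sketch of that general technique, not necessarily the paper's specific method.

```python
import numpy as np

def cluster_quantize(w, n_levels=4, iters=20):
    """1-D k-means weight sharing: map weights to n_levels shared values."""
    centers = np.linspace(w.min(), w.max(), n_levels)
    for _ in range(iters):
        # Assign each weight to its nearest center, then update centers.
        assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(assign == k):
                centers[k] = w[assign == k].mean()
    return centers[assign]

rng = np.random.default_rng(2)
w = rng.normal(size=256)
wq = cluster_quantize(w, n_levels=4)
assert len(np.unique(wq)) <= 4        # at most 2 bits per weight
```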

Generative Memory for Lifelong Reinforcement Learning

no code implementations · 22 Feb 2019 · Aswin Raghavan, Jesse Hostetler, Sek Chai

Our research is focused on understanding and applying biological memory transfers to new AI systems that can fundamentally improve their performance throughout their fielded lifetime experience.

Reinforcement Learning (RL)

Bootstrapping Deep Neural Networks from Approximate Image Processing Pipelines

no code implementations · 29 Nov 2018 · Kilho Son, Jesse Hostetler, Sek Chai

Complex image processing and computer vision systems often consist of a processing pipeline of functional modules.

Generalized Ternary Connect: End-to-End Learning and Compression of Multiplication-Free Deep Neural Networks

no code implementations · 12 Nov 2018 · Samyak Parajuli, Aswin Raghavan, Sek Chai

The use of deep neural networks in edge computing devices hinges on the balance between accuracy and complexity of computations.

Edge-computing · General Classification +1
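The "multiplication-free" idea in ternary connect is to constrain weights to {-1, 0, +1}, so inference reduces to additions, subtractions, and skips. Below is a sketch of plain threshold-based ternarization; the paper's generalized variant also allows power-of-two magnitudes (still multiplication-free via bit shifts), and the threshold value here is an arbitrary choice.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Map weights to {-1, 0, +1} so inference needs no multiplies."""
    t = np.zeros_like(w)
    t[w > threshold] = 1.0     # strong positive weights become +1
    t[w < -threshold] = -1.0   # strong negative weights become -1
    return t                   # everything near zero is pruned to 0

w = np.array([0.4, -0.02, -0.7, 0.03])
assert np.array_equal(ternarize(w), np.array([1.0, 0.0, -1.0, 0.0]))
```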

Power-Grid Controller Anomaly Detection with Enhanced Temporal Deep Learning

no code implementations · 18 Jun 2018 · Zecheng He, Aswin Raghavan, Guangyuan Hu, Sek Chai, Ruby Lee

Specifically, we first train a temporal deep learning model, using only normal HPC readings from legitimate processes that run daily in these power-grid systems, to model the normal behavior of the power-grid controller.

Anomaly Detection
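The detection scheme described above — model only normal behavior from legitimate readings, then flag deviations — can be sketched with a deliberately simple stand-in for the temporal deep model: a per-feature statistical profile with a z-score threshold. Function names, the threshold `k`, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def fit_normal_profile(readings):
    """Fit a per-feature profile from normal-only readings.

    Stand-in for the paper's temporal deep learning model: the
    'model' is just the mean/std of legitimate hardware performance
    counter readings, and anomalies are points far from it.
    """
    return readings.mean(axis=0), readings.std(axis=0)

def is_anomalous(sample, mean, std, k=4.0):
    # Flag any feature more than k standard deviations from normal.
    z = np.abs(sample - mean) / (std + 1e-9)
    return bool(np.any(z > k))

rng = np.random.default_rng(1)
normal = rng.normal(loc=100.0, scale=5.0, size=(1000, 3))
mean, std = fit_normal_profile(normal)
assert not is_anomalous(mean, mean, std)
assert is_anomalous(np.array([100.0, 100.0, 300.0]), mean, std)
```

The key property the sketch preserves is one-class training: no anomalous examples are needed at fit time, only legitimate behavior.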

Bit-Regularized Optimization of Neural Nets

no code implementations · ICLR 2018 · Mohamed Amer, Aswin Raghavan, Graham W. Taylor, Sek Chai

Our key idea is to control the expressive power of the network by dynamically quantizing the range and set of values that the parameters can take.

Translation

BitNet: Bit-Regularized Deep Neural Networks

no code implementations · 16 Aug 2017 · Aswin Raghavan, Mohamed Amer, Sek Chai, Graham Taylor

The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over all real values.

Translation

Low Precision Neural Networks using Subband Decomposition

no code implementations · 24 Mar 2017 · Sek Chai, Aswin Raghavan, David Zhang, Mohamed Amer, Tim Shields

In this paper, we present a unique approach using lower precision weights for a more efficient and faster training phase.
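The subband idea can be illustrated with a single-level Haar decomposition: low-pass averages carry most of a weight vector's energy and can keep more bits, while high-pass details tolerate coarser precision. This is an illustrative sketch of the decomposition itself; the paper's training setup differs, and the function names are assumptions.

```python
import numpy as np

def haar_split(w):
    """Single-level Haar decomposition of a 1-D weight vector (even length)."""
    lo = (w[0::2] + w[1::2]) / np.sqrt(2)   # low-pass subband (averages)
    hi = (w[0::2] - w[1::2]) / np.sqrt(2)   # high-pass subband (details)
    return lo, hi

def haar_merge(lo, hi):
    """Inverse transform: reconstruct the original vector."""
    w = np.empty(lo.size * 2)
    w[0::2] = (lo + hi) / np.sqrt(2)
    w[1::2] = (lo - hi) / np.sqrt(2)
    return w

w = np.array([0.5, 0.4, -0.3, -0.2])
lo, hi = haar_split(w)
assert np.allclose(haar_merge(lo, hi), w)   # perfect reconstruction
```

Because the transform is exactly invertible, any precision saved on the `hi` subband translates directly into storage and compute savings during training.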
