Search Results for author: Jesse Beu

Found 9 papers, 0 papers with code

Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs

no code implementations • 4 Mar 2019 • Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse Beu, Matthew Mattina, Robert Mullins

The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs).
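For reference, a minimal textbook-style sketch of the 1-D Winograd F(2,3) transform (illustrative only, not code from this paper): it produces two convolution outputs of a 3-tap filter using 4 multiplications instead of the 6 required by the direct method.

import numpy as np

def winograd_f23(d, g):
    # Winograd F(2,3): two outputs of a 1-D convolution of a 3-tap filter g
    # over a 4-element input tile d, using 4 multiplications instead of 6.
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
# Matches the direct sliding-window correlation over the same tile.
print(winograd_f23(d, g), np.array([d[0:3] @ g, d[1:4] @ g]))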

Compressing RNNs for IoT devices by 15-38x using Kronecker Products

no code implementations • 7 Jun 2019 • Urmish Thakker, Jesse Beu, Dibakar Gope, Chu Zhou, Igor Fedorov, Ganesh Dasika, Matthew Mattina

Recurrent Neural Networks (RNNs) can be difficult to deploy on resource-constrained devices due to their size. As a result, there is a need for compression techniques that can significantly compress RNNs without negatively impacting task accuracy.
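As background, a minimal sketch of the general Kronecker-product idea (illustrative sizes, not this paper's exact factorization or training procedure): a large weight matrix is expressed as the Kronecker product of two much smaller factors, so only the factors are stored, and matrix-vector products can be computed without materializing the full matrix.

import numpy as np

# Illustrative sizes: a 256x512 weight matrix expressed as kron(A, B)
# with factors A (16x16) and B (16x32).
A = np.random.randn(16, 16)
B = np.random.randn(16, 32)

W = np.kron(A, B)                  # full matrix, shape (256, 512)
print(W.size / (A.size + B.size))  # ~170x fewer parameters stored

# Matrix-vector product without forming W, via the identity
# kron(A, B) @ vec(X) = vec(B @ X @ A.T), with column-stacking vec().
x = np.random.randn(512)
X = x.reshape(16, 32).T            # shape (32, 16), so that vec(X) == x
y = (B @ X @ A.T).T.reshape(-1)
print(np.allclose(W @ x, y))       # True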

Run-Time Efficient RNN Compression for Inference on Edge Devices

no code implementations • 12 Jun 2019 • Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina

Recurrent neural networks can be large and compute-intensive, yet many applications that benefit from RNNs run on small devices with very limited compute and storage capabilities while still being subject to run-time constraints.

Edge-computing

Pushing the limits of RNN Compression

no code implementations • 4 Oct 2019 • Urmish Thakker, Igor Fedorov, Jesse Beu, Dibakar Gope, Chu Zhou, Ganesh Dasika, Matthew Mattina

This paper introduces a method to compress RNNs for resource-constrained environments using Kronecker products (KP).

Ternary MobileNets via Per-Layer Hybrid Filter Banks

no code implementations • 4 Nov 2019 • Dibakar Gope, Jesse Beu, Urmish Thakker, Matthew Mattina

Using this proposed quantization method, we quantized a substantial portion of the weight filters of MobileNets to ternary values, resulting in 27.98% savings in energy and a 51.07% reduction in model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.

Quantization
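For background, a minimal sketch of threshold-based ternary weight quantization in general (a standard TWN-style scheme, not the paper's per-layer hybrid filter bank method):

import numpy as np

def ternarize(w, delta_ratio=0.7):
    # Map each weight to {-alpha, 0, +alpha}: weights with magnitude below
    # a threshold delta become zero; alpha is the mean magnitude of the rest.
    delta = delta_ratio * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

w = np.random.randn(3, 3, 32, 64) * 0.05  # a hypothetical 3x3 conv filter bank
w_t = ternarize(w)
print(np.unique(np.round(w_t, 6)))        # at most three distinct values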

Compressing Language Models using Doped Kronecker Products

no code implementations • 24 Jan 2020 • Urmish Thakker, Paul N. Whatmough, Zhi-Gang Liu, Matthew Mattina, Jesse Beu

Kronecker products (KP) have been used to compress RNNs in IoT applications by factors of 15-38x, achieving better results than traditional compression methods.

Language Modelling, Large Language Model

High Throughput Matrix-Matrix Multiplication between Asymmetric Bit-Width Operands

no code implementations • 3 Aug 2020 • Dibakar Gope, Jesse Beu, Matthew Mattina

While existing SIMD matrix multiplication instructions for symmetric bit-width operands can support mixed-precision operands by zero- or sign-extending the narrow operand to match the size of the other operand, they cannot exploit the benefit of the narrower bit-width of one of the operands.

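To illustrate the baseline behavior described above (a sketch under assumed int8xint4 operands, not the paper's proposed instruction): the 4-bit operand is unpacked and sign-extended to 8 bits, then an ordinary int8 multiply with int32 accumulation is used, so the narrower width saves storage but not multiply throughput.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical M x K by K x N product: int8 activations A, int4 weights B
# (values in [-8, 7]) packed two per byte along K, halving weight storage.
M, K, N = 4, 8, 6
A = rng.integers(-128, 128, size=(M, K), dtype=np.int8)
B = rng.integers(-8, 8, size=(K, N), dtype=np.int8)

B_pairs = B.reshape(K // 2, 2, N).astype(np.uint8)
B_packed = (B_pairs[:, 0] & 0x0F) | ((B_pairs[:, 1] & 0x0F) << 4)

# Baseline "symmetric" path: unpack, sign-extend the nibbles to int8, then
# perform a standard int8 x int8 multiply with int32 accumulation.
lo = (B_packed & 0x0F).astype(np.int16)
hi = ((B_packed >> 4) & 0x0F).astype(np.int16)
lo = np.where(lo >= 8, lo - 16, lo)        # sign-extend low nibble
hi = np.where(hi >= 8, hi - 16, hi)        # sign-extend high nibble
B_ext = np.stack([lo, hi], axis=1).reshape(K, N).astype(np.int8)

C = A.astype(np.int32) @ B_ext.astype(np.int32)
print(np.array_equal(C, A.astype(np.int32) @ B.astype(np.int32)))  # True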

Rank and run-time aware compression of NLP Applications

no code implementations • EMNLP (sustainlp) 2020 • Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina

We evaluate the impact of this technique on 5 NLP benchmarks across multiple tasks (Translation, Intent Detection, Language Modeling) and show that, for similar accuracy values and compression factors, HMF (hybrid matrix factorization) can achieve more than 2.32x faster inference run-time than pruning and 16.77% better accuracy than LMF (low-rank matrix factorization).

Intent Detection, Language Modelling, +1
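For background on the LMF baseline mentioned above, a generic low-rank matrix factorization sketch via truncated SVD (this is not the paper's HMF scheme):

import numpy as np

def low_rank_factorize(W, rank):
    # Approximate W (m x n) as U @ V with U (m x rank) and V (rank x n);
    # the truncated SVD gives the best such approximation in Frobenius norm.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

W = np.random.randn(512, 256)             # a hypothetical NLP weight matrix
U, V = low_rank_factorize(W, rank=32)
print(U.shape, V.shape)                   # (512, 32) (32, 256)
print((U.size + V.size) / W.size)         # ~0.19x the original parameter count
print(np.linalg.norm(W - U @ V) / np.linalg.norm(W))  # relative approx. error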

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices

no code implementations • 14 Feb 2021 • Urmish Thakker, Paul N. Whatmough, ZhiGang Liu, Matthew Mattina, Jesse Beu

Additionally, results with doped Kronecker product matrices demonstrate state-of-the-art accuracy at large compression factors (10-25x) across 4 natural language processing applications, with minor loss in accuracy.
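A minimal sketch of the doping idea as described above (a Kronecker-product matrix plus a sparse additive matrix); the sparsity structure, training procedure, and other details are the paper's and are not reproduced here, and all sizes are illustrative:

import numpy as np

rng = np.random.default_rng(0)

# Doped Kronecker product: W ~= kron(A, B) + S, where S is an extremely
# sparse additive matrix that relaxes the rigid KP structure.
A = rng.standard_normal((16, 16))
B = rng.standard_normal((16, 32))
m, n = 256, 512                        # shape of kron(A, B)

nnz = int(0.01 * m * n)                # e.g. 1% additional nonzero entries
rows = rng.integers(0, m, nnz)
cols = rng.integers(0, n, nnz)
vals = rng.standard_normal(nnz)

W = np.kron(A, B)
W[rows, cols] += vals                  # dense view of the doped matrix

# Stored parameters: the two KP factors plus (row, col, value) triples for S.
stored = A.size + B.size + 3 * nnz
print(m * n / stored)                  # compression factor vs. a dense matrix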
