no code implementations • 28 Mar 2023 • Teng-Hui Huang, Thilini Dahanayaka, Kanchana Thilakarathna, Philip H. W. Leong, Hesham El Gamal
Our information-theoretic approach can be extended to supervised and semi-supervised settings with straightforward derivations.
1 code implementation • 28 Sep 2020 • Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden K.-H. So
While integer arithmetic has been widely adopted to improve the performance of quantized deep neural network inference, training remains a task performed primarily in floating-point arithmetic.
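As a rough illustration of what integer-only training can look like, here is a minimal sketch in which weights and gradients are stored as int8 under an implicit shared power-of-two scale, and the learning rate is applied as a bit shift so the whole update stays in integer arithmetic; this is an assumption-laden toy, not the paper's actual training algorithm.

```python
import numpy as np

def int_sgd_step(w_int8, g_int8, lr_shift=6):
    """One SGD step in pure integer arithmetic (illustrative toy, not the paper's method)."""
    # Widen to int32 so the update cannot overflow the int8 range mid-computation.
    step = g_int8.astype(np.int32) >> lr_shift  # learning rate = 2**-lr_shift
    w = w_int8.astype(np.int32) - step
    # Saturate back into the int8 weight storage format.
    return np.clip(w, -128, 127).astype(np.int8)

w = np.random.randint(-128, 128, size=16, dtype=np.int8)
g = np.random.randint(-128, 128, size=16, dtype=np.int8)
w = int_sgd_step(w, g)
```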
no code implementations • 27 Feb 2020 • Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong
Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount.
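The arithmetic that makes BNNs attractive in hardware is that a dot product between {-1, +1} vectors collapses to an XNOR followed by a popcount. A minimal NumPy sketch of that identity (the bit encoding below is an illustrative assumption, not tied to this paper's designs):

```python
import numpy as np

def binarize(x):
    # Sign binarization: real values -> {-1, +1} (zero maps to +1).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dot(a, w):
    # Encode {-1, +1} as {0, 1}; then dot = 2 * popcount(xnor) - n,
    # since XNOR counts the positions where the signs agree.
    n = a.size
    xnor = 1 - np.bitwise_xor((a + 1) // 2, (w + 1) // 2)
    return 2 * int(xnor.sum()) - n

x, w = np.random.randn(64), np.random.randn(64)
xb, wb = binarize(x), binarize(w)
assert bnn_dot(xb, wb) == int(xb.astype(np.int32) @ wb.astype(np.int32))
```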
no code implementations • 19 Nov 2019 • Julian Faraone, Martin Kumm, Martin Hardieck, Peter Zipf, Xueyuan Liu, David Boland, Philip H. W. Leong
Low-precision arithmetic operations to accelerate deep-learning applications on field-programmable gate arrays (FPGAs) have been studied extensively, because they offer the potential to save silicon area or increase throughput.
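To make the trade-off concrete, the toy below emulates a multiply-accumulate in low-bit fixed point, where narrower operands are precisely what buy smaller multipliers on an FPGA; the word and fraction widths are hypothetical, and this is a software emulation rather than any of the multiplier designs studied in the paper.

```python
import numpy as np

def to_fixed(x, frac_bits=4, word_bits=8):
    # Round to signed fixed point with `frac_bits` fractional bits,
    # saturating to the representable `word_bits` range.
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(x * (1 << frac_bits)), lo, hi).astype(np.int32)

def fixed_dot(xq, wq, frac_bits=4):
    # Integer multiply-accumulate followed by a single rescale.
    return int(np.dot(xq, wq)) / float(1 << (2 * frac_bits))

x, w = np.random.randn(32), np.random.randn(32)
approx = fixed_dot(to_fixed(x), to_fixed(w))  # close to the float dot x @ w
```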
2 code implementations • 9 Sep 2019 • Stephen Tridgell, Martin Kumm, Martin Hardieck, David Boland, Duncan Moss, Peter Zipf, Philip H. W. Leong
The computational complexity of neural networks for large-scale or real-time applications necessitates hardware acceleration.
1 code implementation • CVPR 2018 • Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H. W. Leong
An efficient way to reduce the computational cost of deep neural networks is to quantize the weight parameters and/or activations during training by approximating their distributions with a limited-entry codebook.
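In its generic form, codebook quantization snaps each weight to its nearest codebook entry. The sketch below uses an arbitrary five-entry codebook for illustration (real schemes might restrict the entries to symmetric powers of two and train through the quantizer with a straight-through estimator):

```python
import numpy as np

def codebook_quantize(w, codebook):
    # Replace every weight with the nearest entry of a small codebook.
    codebook = np.asarray(codebook)
    idx = np.abs(w[..., None] - codebook).argmin(axis=-1)
    return codebook[idx]

w = np.random.randn(4, 4)
wq = codebook_quantize(w, [-1.0, -0.5, 0.0, 0.5, 1.0])  # illustrative entries
```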
no code implementations • 19 Sep 2017 • Julian Faraone, Nicholas Fraser, Giulio Gambardella, Michaela Blott, Philip H. W. Leong
A low-precision training technique for producing sparse, ternary deep neural networks is presented.
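A minimal sketch of one common way to obtain sparse ternary weights is threshold-based ternarization in the style of ternary weight networks; the threshold and the scaling rule here are illustrative assumptions, not necessarily this paper's scheme:

```python
import numpy as np

def ternarize(w, threshold=0.05):
    # Weights inside the threshold band become exactly zero (sparsity);
    # the rest map to +/- alpha, the mean magnitude of the surviving weights.
    mask = np.abs(w) > threshold
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.where(mask, np.sign(w) * alpha, 0.0)

w = np.random.randn(8, 8)
wt = ternarize(w)  # values in {-alpha, 0, +alpha}
```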