no code implementations • 19 Nov 2019 • Julian Faraone, Martin Kumm, Martin Hardieck, Peter Zipf, Xueyuan Liu, David Boland, Philip H. W. Leong
Low-precision arithmetic operations to accelerate deep-learning applications on field-programmable gate arrays (FPGAs) have been studied extensively, because they offer the potential to save silicon area or increase throughput.
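The abstract refers to the area/throughput benefit of replacing floating point with narrow fixed-point arithmetic. Below is a minimal sketch of uniform fixed-point quantization, assuming a simple per-tensor word length; the function names and bit widths are illustrative and not taken from the paper, which implements the arithmetic in FPGA hardware rather than software.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=8, frac_bits=4):
    """Quantize an array to signed fixed-point with the given word length.

    Illustrative only: real FPGA flows typically choose word lengths per
    layer and realise the arithmetic directly in the datapath.
    """
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale  # value the low-precision datapath actually represents

# An 8-bit multiply-accumulate needs far less silicon than an FP32 one,
# which is the area/throughput trade-off the abstract alludes to.
w = quantize_fixed_point(np.random.randn(16))
a = quantize_fixed_point(np.random.randn(16))
acc = np.dot(w, a)
```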
no code implementations • 27 Feb 2020 • Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong
Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount.
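For readers unfamiliar with BNNs, the sketch below shows the core idea: weights and activations are constrained to {-1, +1}, so dot products reduce to XNOR and popcount operations that map cheaply to FPGA logic. This is a generic illustration of binarization, not the specific architecture proposed in the paper.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} with the sign function (0 mapped to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of {-1,+1} vectors via the XNOR/popcount identity:
    dot = 2 * (number of matching bits) - n."""
    a01 = (a_bits > 0)
    w01 = (w_bits > 0)
    n = a01.size
    matches = np.count_nonzero(a01 == w01)  # popcount of XNOR
    return 2 * matches - n

a = binarize(np.random.randn(64))
w = binarize(np.random.randn(64))
assert xnor_popcount_dot(a, w) == int(np.dot(a, w))
```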
no code implementations • ICLR 2021 • Sean Fox, Seyedramin Rasoulinezhad, Julian Faraone, David Boland, Philip Leong
Training Deep Neural Networks (DNNs) with high efficiency can be difficult to achieve with native floating point representations and commercially available hardware.
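One common alternative to native FP32 for efficient training is a block representation in which a group of values shares a single exponent and keeps only low-bit mantissas. The sketch below illustrates that general idea under assumed bit widths; it is not a description of the number format used in this work.

```python
import numpy as np

def to_block_fp(x, mantissa_bits=8):
    """Encode a tensor as low-bit integer mantissas plus one shared exponent
    (block floating point). Illustrative sketch, not this paper's format."""
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros_like(x, dtype=np.int32), 0
    shared_exp = int(np.floor(np.log2(max_abs)))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    qmax = 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return mantissas, shared_exp

def from_block_fp(mantissas, shared_exp, mantissa_bits=8):
    """Decode back to floats to inspect the quantization error."""
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    return mantissas.astype(np.float64) * scale

x = np.random.randn(8).astype(np.float32)
m, e = to_block_fp(x)
print(np.max(np.abs(x - from_block_fp(m, e))))  # small reconstruction error
```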
2 code implementations • 9 Sep 2019 • Stephen Tridgell, Martin Kumm, Martin Hardieck, David Boland, Duncan Moss, Peter Zipf, Philip H. W. Leong
The computational complexity of neural networks for large-scale or real-time applications necessitates hardware acceleration.