Search Results for author: David Boland

Found 4 papers, 1 paper with code

A Block Minifloat Representation for Training Deep Neural Networks

no code implementations ICLR 2021 Sean Fox, Seyedramin Rasoulinezhad, Julian Faraone, David Boland, Philip Leong

Training Deep Neural Networks (DNNs) with high efficiency can be difficult to achieve with native floating-point representations and commercially available hardware.
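A block minifloat groups values under a shared exponent bias, with each element stored as a tiny float. The sketch below is illustrative only (the function name, bit widths, and rounding are assumptions, not the paper's exact format):

```python
import math

def block_minifloat_quantize(block, exp_bits=2, man_bits=2):
    """Illustrative sketch: quantize a block of floats with a shared
    exponent bias plus per-element low-precision exponent/mantissa.
    Not the paper's exact encoding."""
    max_mag = max(abs(x) for x in block) or 1.0
    shared_exp = math.floor(math.log2(max_mag))   # shared exponent bias
    max_e = (1 << exp_bits) - 1                   # per-element exponent range
    out = []
    for x in block:
        if x == 0.0:
            out.append(0.0)
            continue
        sign = -1.0 if x < 0 else 1.0
        # exponent relative to the shared bias, clamped to the minifloat range
        e = math.floor(math.log2(abs(x))) - shared_exp
        e = max(-max_e, min(0, e))
        scale = 2.0 ** (e + shared_exp)
        frac = abs(x) / scale                     # significand in [1, 2)
        # round the fraction to man_bits mantissa bits
        m = round((frac - 1.0) * (1 << man_bits)) / (1 << man_bits)
        out.append(sign * (1.0 + m) * scale)
    return out
```

Because only the small per-element codes vary within a block, multiplies and accumulations can use much narrower hardware than native floats.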

MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency

no code implementations 27 Feb 2020 Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong

Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount.
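In a BNN, a dot product over {-1, +1} values reduces to an XNOR followed by a popcount. The sketch below shows that identity with an exact popcount; MajorityNets replaces the popcount itself with a cheaper approximation, and the helper names here are illustrative:

```python
def pack(vals):
    """Pack a list of +/-1 values into an int bitmask (1 -> +1, 0 -> -1)."""
    bits = 0
    for i, v in enumerate(vals):
        if v > 0:
            bits |= 1 << i
    return bits

def bnn_dot(a_bits, b_bits, n):
    """Signed dot product of two packed {-1,+1} vectors of length n:
    dot = 2 * popcount(XNOR(a, b)) - n.
    (Exact popcount here; the paper approximates this stage.)"""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask
    return 2 * bin(xnor).count("1") - n
```

For example, `bnn_dot(pack([1, -1, 1, 1]), pack([1, 1, -1, 1]), 4)` matches the ordinary dot product 1 - 1 - 1 + 1 = 0.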

AddNet: Deep Neural Networks Using FPGA-Optimized Multipliers

no code implementations 19 Nov 2019 Julian Faraone, Martin Kumm, Martin Hardieck, Peter Zipf, Xueyuan Liu, David Boland, Philip H. W. Leong

Low-precision arithmetic operations to accelerate deep-learning applications on field-programmable gate arrays (FPGAs) have been studied extensively, because they offer the potential to save silicon area or increase throughput.

Quantization
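FPGA-friendly constant multipliers avoid general-purpose multiplier blocks by decomposing a constant into shifts and adds. A minimal sketch of that idea (plain binary decomposition; AddNet's actual multiplier structures are more optimized, and this function name is an assumption):

```python
def shift_add_mult(x, c):
    """Illustrative sketch: multiply integer x by constant c using only
    shifts and adds, the style of operation FPGA-optimized constant
    multipliers map onto LUT/carry logic."""
    neg = c < 0
    c = abs(c)
    acc = 0
    shift = 0
    while c:
        if c & 1:                 # this bit of the constant is set:
            acc += x << shift     # add the correspondingly shifted x
        c >>= 1
        shift += 1
    return -acc if neg else acc
```

Restricting weights to constants with few set bits keeps these adder chains short, which is where the silicon-area savings come from.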

Unrolling Ternary Neural Networks

2 code implementations 9 Sep 2019 Stephen Tridgell, Martin Kumm, Martin Hardieck, David Boland, Duncan Moss, Peter Zipf, Philip H. W. Leong

The computational complexity of neural networks for large-scale or real-time applications necessitates hardware acceleration.
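With ternary weights in {-1, 0, +1}, every multiply in a matrix-vector product becomes an add, a subtract, or a skip, which is what makes fully unrolled hardware implementations cheap. A minimal software sketch of that arithmetic (illustrative only; the paper unrolls these operations into fixed logic rather than loops):

```python
def ternary_matvec(weights, x):
    """Illustrative sketch: matrix-vector product with ternary weights.
    Each 'multiply' is an add (+1), a subtract (-1), or is skipped (0)."""
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi
            elif w == -1:
                acc -= xi
            # w == 0: contributes nothing, so the term is dropped entirely
        out.append(acc)
    return out
```

In an unrolled design the zero weights cost no hardware at all, so sparsity translates directly into area savings.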
