Search Results for author: Seyedramin Rasoulinezhad

Found 3 papers, 1 paper with code

A Block Minifloat Representation for Training Deep Neural Networks

no code implementations · ICLR 2021 · Sean Fox, Seyedramin Rasoulinezhad, Julian Faraone, David Boland, Philip Leong

Training Deep Neural Networks (DNN) with high efficiency can be difficult to achieve with native floating point representations and commercially available hardware.
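The idea named in the title, a block-scaled low-precision representation, can be illustrated with a toy sketch: each block of values shares one scale derived from its largest magnitude, and individual elements are stored with only a few mantissa bits. This is a simplified block-floating-point-style illustration, not the paper's actual block minifloat format (which also gives each element a tiny private exponent); the function name and parameters are hypothetical.

```python
import numpy as np

def block_quantize(x, block_size=16, mant_bits=4):
    """Toy block-scaled quantization: each block shares one scale,
    and elements are stored as low-bit signed mantissas.
    Illustrative only -- not the paper's block minifloat format."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    # shared scale per block, chosen so the largest magnitude fits
    # in a signed mant_bits-wide integer
    max_abs = np.max(np.abs(blocks), axis=1, keepdims=True)
    max_abs[max_abs == 0] = 1.0
    scale = max_abs / (2 ** (mant_bits - 1) - 1)
    q = np.round(blocks / scale)        # low-bit integer mantissas
    return (q * scale).reshape(-1)[:len(x)]

x = np.array([0.11, -0.52, 0.33, 1.9])
print(block_quantize(x, block_size=4, mant_bits=4))
```

Note how the large value 1.9 sets the block's scale, so the small value 0.11 is represented coarsely; managing that dynamic-range trade-off per block is what motivates shared-exponent formats.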

NITI: Training Integer Neural Networks Using Integer-only Arithmetic

1 code implementation · 28 Sep 2020 · Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden K. -H. So

While integer arithmetic has been widely adopted for improved performance in deep quantized neural network inference, training remains a task primarily executed using floating point arithmetic.
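The kind of integer-only arithmetic referred to here can be sketched for a single linear layer: int8 operands, int32 accumulation, and a power-of-two right-shift to rescale the result back into int8 range. This is a generic illustration of integer-only dataflow, not NITI's actual training scheme; the function name, the shift-based rescaling rule, and the values are all hypothetical.

```python
import numpy as np

def int_linear_forward(x_q, w_q, shift):
    """Toy integer-only linear layer: int8 inputs and weights,
    int32 accumulation, power-of-two rescale, saturate to int8.
    Illustrative only -- not the NITI training scheme."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)   # int32 accumulate
    # rounding right-shift (add half the step before shifting)
    rounded = (acc + (1 << (shift - 1))) >> shift
    return np.clip(rounded, -128, 127).astype(np.int8)

x_q = np.array([[10, -20, 30]], dtype=np.int8)
w_q = np.array([[5, -1], [2, 4], [-3, 2]], dtype=np.int8)
print(int_linear_forward(x_q, w_q, shift=4))   # → [[-5 -2]]
```

Using only shifts for rescaling keeps the whole pipeline free of floating-point operations, which is the property the abstract contrasts with conventional floating-point training.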

MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency

no code implementations · 27 Feb 2020 · Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong

Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount.
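The popcount operation in the title is the core of BNN inference: with weights and activations constrained to {-1, +1} and packed as bits (1 for +1, 0 for -1), a dot product reduces to an XNOR followed by a population count. The sketch below shows the standard exact XNOR-popcount dot product; MajorityNets' contribution is replacing the exact popcount with a cheaper majority-based approximation, which is not reproduced here.

```python
def bnn_dot(a_bits, b_bits, n):
    """Exact binarized dot product over n elements packed as bits.
    With the mapping 1 -> +1, 0 -> -1:
        dot(a, b) = 2 * popcount(XNOR(a, b)) - n
    since each matching bit contributes +1 and each mismatch -1."""
    mask = (1 << n) - 1
    matches = (~(a_bits ^ b_bits)) & mask   # XNOR: bits where a == b
    popc = bin(matches).count("1")          # exact popcount
    return 2 * popc - n

# Two 4-element ±1 vectors that agree in 2 positions: dot product is 0.
print(bnn_dot(0b1011, 0b1101, 4))   # → 0
```

Because the popcount dominates the cost of each binary dot product in hardware, approximating it (as MajorityNets does with majority functions) directly reduces area and latency per multiply-accumulate.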
