4 code implementations • 1 Dec 2016 • Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers
Research has shown that convolutional neural networks contain significant redundancy, and high classification accuracy can be obtained even when weights and activations are reduced from floating point to binary values.
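For intuition on why binary values suffice computationally: once weights and activations are constrained to {-1, +1}, a dot product can be evaluated with XNOR and popcount instead of multiply-accumulates. The NumPy sketch below is illustrative only (it is not the FINN implementation from this paper) and shows the equivalence:

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(w_bin, a_bin):
    """Dot product of two {-1, +1} vectors without multiplications.

    With bits b = (v + 1) / 2, hardware computes
    matches = popcount(XNOR(w_bits, a_bits)) and the dot product
    is 2 * matches - n.
    """
    n = len(w_bin)
    w_bits = w_bin > 0                    # booleans standing in for hardware bits
    a_bits = a_bin > 0
    matches = np.sum(~(w_bits ^ a_bits))  # XNOR, then popcount
    return 2 * int(matches) - n

rng = np.random.default_rng(0)
w_b = binarize(rng.standard_normal(64))
a_b = binarize(rng.standard_normal(64))
# Same result as an ordinary dot product over {-1, +1} values:
assert binary_dot(w_b, a_b) == int(w_b.astype(int) @ a_b.astype(int))
```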
no code implementations • 12 Jan 2017 • Nicholas J. Fraser, Yaman Umuroglu, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers
Binarized neural networks (BNNs) are gaining interest in the deep learning community due to their significantly lower computational and memory cost.
no code implementations • 15 Dec 2013 • Richard Davis, Sanjay Chawla, Philip Leong
In this article we propose feature graph architectures (FGA): deep learning systems that employ a structured initialisation and training method based on a feature graph, yielding improved generalisation performance compared with a standard shallow architecture.
no code implementations • ICLR 2021 • Sean Fox, Seyedramin Rasoulinezhad, Julian Faraone, David Boland, Philip Leong
Training Deep Neural Networks (DNNs) with high efficiency can be difficult to achieve using native floating point representations and commercially available hardware.
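A common way to study reduced-precision training on commodity hardware is to simulate it in software: quantize tensors in the forward pass while updating a full-precision master copy of the weights ("fake quantization"). The sketch below illustrates that generic pattern only; it is not the representation proposed in this paper, and all names and constants are illustrative:

```python
import numpy as np

def quantize_dequantize(x, bits=8):
    """Simulate fixed-point quantization in float: round x onto a
    symmetric 2**bits-level grid scaled to the tensor's max magnitude."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)) * 0.1   # full-precision master weights
x = rng.standard_normal(8)
y_target = rng.standard_normal(4)

# One training step of a fake-quantized linear layer: the low-precision
# view is used for compute, the update is applied to the master copy
# (the straight-through approach commonly used in quantized training).
Wq = quantize_dequantize(W, bits=8)
y = Wq @ x
grad_W = np.outer(y - y_target, x)      # gradient of 0.5 * ||y - y_target||^2
W -= 0.01 * grad_W
```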
no code implementations • 25 Sep 2019 • Julian Faraone, Philip Leong
We present a novel technique, Monte Carlo Deep Neural Network Arithmetic, for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic. We do this by applying Monte Carlo Arithmetic (MCA) to the inference computation and analyzing the relative standard deviation of the neural network loss.
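A minimal simulation of the idea: inject random relative error of magnitude roughly 2^-t into the intermediate results of inference (a coarse stand-in for instrumenting every individual floating point operation, as true MCA does), repeat over many trials, and report the relative standard deviation of the loss. The two-layer model, virtual precision t, and trial count below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def perturb(x, t, rng):
    """Inject random relative error of magnitude ~2**-t into each value,
    imitating Monte Carlo Arithmetic at a virtual precision of t bits."""
    return x * (1.0 + rng.uniform(-1, 1, size=x.shape) * 2.0 ** -t)

def noisy_inference(W1, W2, x, t, rng):
    h = perturb(np.maximum(W1 @ x, 0.0), t, rng)  # ReLU layer with noise
    return perturb(W2 @ h, t, rng)

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((4, 16))
x, target = rng.standard_normal(8), rng.standard_normal(4)

losses = []
for _ in range(1000):                     # Monte Carlo trials
    y = noisy_inference(W1, W2, x, t=10, rng=rng)
    losses.append(0.5 * np.sum((y - target) ** 2))

losses = np.array(losses)
rsd = losses.std() / abs(losses.mean())   # relative standard deviation
print(f"relative std dev of loss at t=10 bits: {rsd:.3e}")
```

Sweeping the virtual precision t and watching where the relative standard deviation blows up gives a per-network picture of quantization sensitivity.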