Search Results for author: Nicholas J. Fraser

Found 5 papers, 2 papers with code

Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference

1 code implementation · 22 Feb 2021 · Benjamin Hawks, Javier Duarte, Nicholas J. Fraser, Alessandro Pappalardo, Nhan Tran, Yaman Umuroglu

We study various configurations of pruning during quantization-aware training, which we term quantization-aware pruning, and the effect of techniques like regularization, batch normalization, and different pruning schemes on performance, computational complexity, and information content metrics.

Tasks: Bayesian Optimization · Computational Efficiency · +2
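
To make the idea concrete, here is a minimal PyTorch sketch of pruning applied on top of quantization-aware training: weights are fake-quantized with a straight-through estimator and masked by iterative magnitude pruning. This is an illustrative reconstruction, not the paper's implementation; the layer type, bit width, and pruning schedule are assumptions.

```python
# Hypothetical sketch of quantization-aware pruning: magnitude pruning
# applied to fake-quantized weights during training. Not the paper's code.
import torch
import torch.nn as nn


class QuantPrunedLinear(nn.Module):
    """Linear layer with fake-quantized weights and a magnitude pruning mask."""

    def __init__(self, in_features, out_features, bits=4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.bits = bits
        # Pruning mask: 1 keeps a weight, 0 removes it.
        self.register_buffer("mask", torch.ones_like(self.linear.weight))

    def fake_quantize(self, w):
        # Symmetric uniform quantization; the straight-through estimator
        # uses the rounded value forward but lets gradients pass unchanged.
        qmax = 2 ** (self.bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
        return w + (w_q - w).detach()

    def prune_by_magnitude(self, sparsity):
        # One pruning step: zero the fraction `sparsity` of smallest weights.
        flat = self.linear.weight.abs().flatten()
        k = int(sparsity * flat.numel())
        if k > 0:
            threshold = flat.sort().values[k - 1]
            self.mask.copy_((self.linear.weight.abs() > threshold).float())

    def forward(self, x):
        w = self.fake_quantize(self.linear.weight * self.mask)
        return nn.functional.linear(x, w, self.linear.bias)
```

Calling prune_by_magnitude between epochs gradually raises sparsity while the quantized forward pass keeps training aware of the reduced precision.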

FAT: Training Neural Networks for Reliable Inference Under Hardware Faults

no code implementations · 11 Nov 2020 · Ussama Zahid, Giulio Gambardella, Nicholas J. Fraser, Michaela Blott, Kees Vissers

Our experiments show that by injecting faults into the convolutional layers during training, highly accurate convolutional neural networks (CNNs) can be trained that exhibit much better error tolerance than the original.

Tasks: Image Classification · Speech Recognition · +1
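
A hedged sketch of the fault-injection idea, assuming a simple stuck-at-zero fault model applied to convolution outputs during training only; the paper's actual fault models and rates may differ.

```python
# Hypothetical sketch of fault-aware training (FAT): random faults are
# injected into convolution outputs during training so the network learns
# to tolerate them. Fault model and rate here are illustrative.
import torch
import torch.nn as nn


class FaultyConv2d(nn.Module):
    """Conv2d whose outputs suffer random stuck-at-zero faults in training."""

    def __init__(self, in_ch, out_ch, kernel_size, fault_rate=0.01, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
        self.fault_rate = fault_rate

    def forward(self, x):
        y = self.conv(x)
        if self.training and self.fault_rate > 0:
            # Each output value is independently forced to zero with
            # probability fault_rate, emulating a hardware fault.
            fault = torch.rand_like(y) < self.fault_rate
            y = torch.where(fault, torch.zeros_like(y), y)
        return y
```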

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications

no code implementations · 6 Apr 2020 · Yaman Umuroglu, Yash Akhauri, Nicholas J. Fraser, Michaela Blott

Deployment of deep neural networks for applications that require very high throughput or extremely low latency is a severe computational challenge, further exacerbated by inefficiencies in mapping the computation to hardware.

Tasks: Network Intrusion Detection · Quantization
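
The core LogicNets observation is that a neuron with few, low-bit inputs and a low-bit output is a small truth table, which maps directly onto FPGA LUTs. The sketch below enumerates such a neuron; the bit widths, activation function, and helper name are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of the LogicNets idea: a neuron with few, low-bit
# inputs and a low-bit output can be exhaustively enumerated into a
# lookup table suitable for FPGA LUTs.
import itertools
import numpy as np


def neuron_to_lut(weights, bias, in_bits=2, out_bits=2):
    """Enumerate a quantized neuron over all inputs, returning its LUT."""
    levels = 2 ** in_bits
    out_max = 2 ** out_bits - 1
    lut = {}
    for inputs in itertools.product(range(levels), repeat=len(weights)):
        acc = float(np.dot(weights, inputs)) + bias
        # Quantize the activation (here: clipped ReLU) to out_bits.
        lut[inputs] = int(np.clip(round(acc), 0, out_max))
    return lut


# A 3-input neuron with 2-bit inputs has only 4**3 = 64 table entries.
table = neuron_to_lut(weights=[1.0, -0.5, 0.25], bias=0.0)
print(len(table), table[(3, 0, 1)])
```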

Scaling Binarized Neural Networks on Reconfigurable Logic

no code implementations · 12 Jan 2017 · Nicholas J. Fraser, Yaman Umuroglu, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers

Binarized neural networks (BNNs) are gaining interest in the deep learning community due to their significantly lower computational and memory cost.

Tasks: General Classification
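
For context, a minimal sketch of the binarization that gives BNNs their low cost: sign() in the forward pass with a straight-through estimator in the backward pass. Illustrative only, not the authors' implementation.

```python
# Minimal sketch of weight/activation binarization with a straight-through
# estimator, the core trick behind BNNs.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # values in {-1, +1} (sign(0) = 0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |x| <= 1.
        return grad_out * (x.abs() <= 1).float()


class BinaryLinear(nn.Module):
    """Linear layer whose inputs and weights are binarized on the fly."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x):
        xb = BinarizeSTE.apply(x)
        wb = BinarizeSTE.apply(self.weight)
        return nn.functional.linear(xb, wb)
```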

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

4 code implementations · 1 Dec 2016 · Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers

Research has shown that convolutional neural networks contain significant redundancy, and high classification accuracy can be obtained even when weights and activations are reduced from floating point to binary values.

Tasks: General Classification
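
With binary weights and activations, a multiply-accumulate reduces to XNOR plus popcount, the kind of arithmetic FINN maps onto FPGA logic. A toy sketch, assuming {-1, +1} values packed MSB-first into Python ints; the function name is hypothetical.

```python
# Hypothetical sketch of the XNOR-popcount trick behind cheap binary
# inference: with {-1,+1} encoded as bits {0,1}, a dot product becomes
# XNOR followed by a popcount.
def binary_dot(a_bits, b_bits, n):
    """Dot product of two n-element {-1,+1} vectors packed into ints."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # agreements minus mismatches


# Vectors (+1,-1,+1,+1) and (+1,+1,-1,+1) packed MSB-first as 1011 and 1101.
print(binary_dot(0b1011, 0b1101, 4))  # (+1)(+1)+(-1)(+1)+(+1)(-1)+(+1)(+1) = 0
```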
