Search Results for author: Philip H. W. Leong

Found 7 papers, 3 papers with code

The Wyner Variational Autoencoder for Unsupervised Multi-Layer Wireless Fingerprinting

no code implementations • 28 Mar 2023 • Teng-Hui Huang, Thilini Dahanayaka, Kanchana Thilakarathna, Philip H. W. Leong, Hesham El Gamal

Our information-theoretic approach can be extended to supervised and semi-supervised settings with straightforward derivations.

Variational Inference

NITI: Training Integer Neural Networks Using Integer-only Arithmetic

1 code implementation • 28 Sep 2020 • Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden K.-H. So

While integer arithmetic has been widely adopted for improved performance in deep quantized neural network inference, training remains a task primarily executed using floating point arithmetic.
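To make the contrast concrete, the sketch below shows the kind of arithmetic integer-only computation relies on: an int8 matrix multiply with exact int32 accumulation and a power-of-two rescale. It is a minimal illustration only; the shapes, the shift amount, and the saturation step are assumptions for the example, not the NITI training scheme itself.

```python
# Minimal sketch (not the NITI implementation): int8 multiply, int32 accumulate,
# then a power-of-two rescale and saturation back to int8. Shapes and the shift
# amount are illustrative assumptions.
import numpy as np

def int8_matmul_rescale(a_int8, w_int8, shift):
    """Multiply int8 operands, accumulate exactly in int32, rescale by 2**-shift."""
    acc = a_int8.astype(np.int32) @ w_int8.astype(np.int32)  # exact int32 accumulation
    out = np.right_shift(acc, shift)                          # power-of-two rescale
    return np.clip(out, -128, 127).astype(np.int8)            # saturate back to int8

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(4, 16), dtype=np.int8)
w = rng.integers(-128, 128, size=(16, 8), dtype=np.int8)
print(int8_matmul_rescale(a, w, shift=7))
```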

MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency

no code implementations • 27 Feb 2020 • Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong

Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount.
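The BNN building block the paper targets is the popcount-based dot product. The sketch below shows the exact (non-approximate) version, assuming a +1/-1 encoding packed into integer words; the approximate popcount circuit that MajorityNets actually proposes is not modelled here.

```python
# Minimal sketch (not the MajorityNets design): a binarized dot product computed
# with XOR and popcount on packed bit words, the operation that replaces
# multiply-accumulate in BNN hardware. Word width and the encoding
# (+1 -> bit 1, -1 -> bit 0) are illustrative assumptions.
N = 16  # vector length (bits per packed word)

def bnn_dot(a_bits: int, w_bits: int, n: int = N) -> int:
    """Dot product of two {-1, +1} vectors packed as n-bit integers."""
    mismatches = bin((a_bits ^ w_bits) & ((1 << n) - 1)).count("1")  # popcount of XOR
    return n - 2 * mismatches  # matching bits minus mismatching bits

a = 0b1011001110001111
w = 0b1111000011110000
print(bnn_dot(a, w))  # equals the +/-1 multiply-accumulate result
```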

AddNet: Deep Neural Networks Using FPGA-Optimized Multipliers

no code implementations • 19 Nov 2019 • Julian Faraone, Martin Kumm, Martin Hardieck, Peter Zipf, Xueyuan Liu, David Boland, Philip H. W. Leong

Low-precision arithmetic operations to accelerate deep-learning applications on field-programmable gate arrays (FPGAs) have been studied extensively, because they offer the potential to save silicon area or increase throughput.

Quantization
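The area savings behind FPGA-optimized multipliers come from restricting coefficients so that a product can be built from shifts and adds rather than a general multiplier. The sketch below illustrates that idea in the simplest form, assuming a constant expressed as a short sum of signed powers of two; the specific coefficient sets and the training method in AddNet are not reproduced.

```python
# Minimal sketch (not the AddNet multiplier design): multiplication by a constant
# restricted to a sum of signed powers of two, realisable with shifts and adds.
# The constant and its decomposition are illustrative assumptions.
def shift_add_multiply(x: int, terms) -> int:
    """Multiply x by the constant sum(sign * 2**shift for sign, shift in terms)."""
    return sum(sign * (x << shift) for sign, shift in terms)

# 13 = 2**3 + 2**2 + 2**0, so multiplying by 13 needs only shifts and adds
terms_13 = [(+1, 3), (+1, 2), (+1, 0)]
print(shift_add_multiply(5, terms_13), 5 * 13)  # both print 65
```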

Unrolling Ternary Neural Networks

2 code implementations • 9 Sep 2019 • Stephen Tridgell, Martin Kumm, Martin Hardieck, David Boland, Duncan Moss, Peter Zipf, Philip H. W. Leong

The computational complexity of neural networks for large scale or real-time applications necessitates hardware acceleration.

Rolling Shutter Correction
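One reason ternary networks suit fully unrolled hardware is that each multiply collapses to an add, a subtract, or nothing. The sketch below shows that property in software, assuming weights already constrained to {-1, 0, +1}; it does not reflect the paper's unrolled FPGA datapath.

```python
# Minimal sketch (not the paper's hardware implementation): with ternary weights
# in {-1, 0, +1}, a dot product needs only additions and subtractions.
# Vector sizes are illustrative assumptions.
import numpy as np

def ternary_dot(x, w_ternary):
    """Dot product with weights in {-1, 0, +1}: only adds, subtracts and skips."""
    return x[w_ternary == 1].sum() - x[w_ternary == -1].sum()

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
w = rng.integers(-1, 2, size=8)            # ternary weights
print(ternary_dot(x, w), float(x @ w))     # identical values
```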

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

1 code implementation • CVPR 2018 • Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H. W. Leong

An efficient way to reduce the computational cost of deep neural networks is to quantize the weight parameters and/or activations during training by approximating their distributions with a limited entry codebook.

Quantization
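As a rough illustration of what a limited symmetric codebook looks like, the sketch below approximates each weight row with values from {-alpha, +alpha}, choosing alpha to minimise the L2 error. The per-row granularity and binary codebook are assumptions for the example; SYQ's learned, fine-grained symmetric quantization during training is not reproduced here.

```python
# Minimal sketch (not the SYQ training algorithm): approximate a weight matrix
# with a binary symmetric codebook {-alpha, +alpha} per row. The per-row
# granularity is an illustrative assumption.
import numpy as np

def symmetric_binary_quantize(W):
    """Approximate each row of W with entries in {-alpha, +alpha}."""
    alpha = np.mean(np.abs(W), axis=1, keepdims=True)  # L2-optimal scale for a binary codebook
    return alpha * np.sign(W)

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 6))
Q = symmetric_binary_quantize(W)
print(np.round(Q, 3))
print("reconstruction error:", float(np.linalg.norm(W - Q)))
```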
