Search Results for author: Peter Y. K. Cheung

Found 5 papers, 3 papers with code

Enabling Binary Neural Network Training on the Edge

1 code implementation • 8 Feb 2021 • Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Claudionor Coelho, Satrajit Chatterjee, Peter Y. K. Cheung, George A. Constantinides

The ever-growing computational demands of increasingly complex machine learning models frequently necessitate the use of powerful cloud-based infrastructure for their training.

Quantization

LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference

1 code implementation • 24 Oct 2019 • Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Research has shown that deep neural networks contain significant redundancy, and thus that high classification accuracy can be achieved even when weights and activations are quantized down to binary values.

Binarization
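To make the binarization the LUTNet abstract describes concrete, here is a minimal sketch (plain Python, hypothetical function names; not the paper's implementation) of quantizing real-valued weights down to the binary values {-1, +1} with the sign function:

```python
def binarize(weights):
    """Quantize real-valued weights to {-1, +1} via the sign function.
    Zero is mapped to +1, a common convention in binary networks."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

# Hypothetical example: a small weight vector quantized to binary values.
w = [0.7, -0.3, -1.2, 0.05]
print(binarize(w))  # [1.0, -1.0, -1.0, 1.0]
```

Each binary weight then needs only one bit of storage, which is the redundancy-exploiting compression the abstract refers to.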

LUTNet: Rethinking Inference in FPGA Soft Logic

1 code implementation • 1 Apr 2019 • Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Research has shown that deep neural networks contain significant redundancy, and that high classification accuracies can be achieved even when weights and activations are quantised down to binary values.
