Search Results for author: Erwei Wang

Found 7 papers, 5 papers with code

Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference

1 code implementation • 4 Dec 2021 • Erwei Wang, James J. Davis, Georgios-Ilias Stavrou, Peter Y. K. Cheung, George A. Constantinides, Mohamed S. Abdelfattah

To address these issues, we propose logic shrinkage, a fine-grained netlist pruning methodology enabling K, the number of inputs to each LUT, to be automatically learned for every LUT in a neural network targeted for FPGA inference.

Efficient Neural Network
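
As a rough sketch of the idea described in this entry, the snippet below assumes each LUT starts with a fixed pool of candidate inputs and a trained importance score per input; inputs whose scores fall below a threshold are pruned, so K (the surviving input count) ends up different for every LUT. The names, scores, and threshold here are hypothetical and are not taken from the authors' released code.

```python
import numpy as np

# Hypothetical illustration of learning K per LUT via fine-grained pruning:
# each LUT has N candidate inputs with an importance score per input
# (stand-ins for trained values); low-scoring inputs are removed, so the
# number of surviving inputs K is learned per LUT rather than fixed globally.

rng = np.random.default_rng(0)
num_luts, max_inputs = 4, 6                        # N = 6 candidate inputs per LUT
scores = rng.normal(size=(num_luts, max_inputs))   # stand-in for trained importance scores
threshold = 0.5                                    # pruning threshold (hypothetical)

for lut_id, s in enumerate(np.abs(scores)):
    keep = s >= threshold                          # mask of surviving inputs
    k = int(keep.sum())                            # learned K for this LUT
    print(f"LUT {lut_id}: keep inputs {np.flatnonzero(keep).tolist()} (K = {k})")
```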

Enabling Binary Neural Network Training on the Edge

2 code implementations • 8 Feb 2021 • Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, Claudionor Coelho, Satrajit Chatterjee, Peter Y. K. Cheung, George A. Constantinides

The ever-growing computational demands of increasingly complex machine learning models frequently necessitate the use of powerful cloud-based infrastructure for their training.

Quantization

LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference

2 code implementations • 24 Oct 2019 • Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Research has shown that deep neural networks contain significant redundancy, and thus that high classification accuracy can be achieved even when weights and activations are quantized down to binary values.

Binarization • Efficient Neural Network
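
For context on the binarization mentioned in this entry, the sketch below shows the standard scheme of quantizing values to {-1, +1} in the forward pass while passing gradients straight through (clipped to |x| <= 1) in the backward pass. It is a minimal PyTorch illustration of that general technique, not the authors' released implementation.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize to {-1, +1} forward; straight-through estimator backward.
    Illustrative sketch only, not the paper's code."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Map non-negative values to +1 and negative values to -1.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients through only where the input lies in [-1, 1].
        return grad_output * (x.abs() <= 1).float()

x = torch.randn(4, requires_grad=True)
y = BinarizeSTE.apply(x)   # values in {-1, +1}
y.sum().backward()         # x.grad is 1 where |x| <= 1, else 0
print(y, x.grad)
```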

LUTNet: Rethinking Inference in FPGA Soft Logic

2 code implementations • 1 Apr 2019 • Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Research has shown that deep neural networks contain significant redundancy, and that high classification accuracies can be achieved even when weights and activations are quantised down to binary values.
