LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference

24 Oct 2019 · Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Research has shown that deep neural networks contain significant redundancy, and thus that high classification accuracy can be achieved even when weights and activations are quantized down to binary values. Network binarization on FPGAs greatly increases area efficiency by replacing resource-hungry multipliers with lightweight XNOR gates...
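As a minimal illustration of the XNOR trick the abstract refers to (this sketch is not from the paper itself): when weights and activations are restricted to {-1, +1} and encoded as bits (+1 as 1, -1 as 0), the product of two values is +1 exactly when their bits agree, so a full dot product reduces to an XNOR followed by a popcount.

```python
def binary_dot(w_bits: int, a_bits: int, n: int) -> int:
    """Dot product of two length-n {-1,+1} vectors packed as bitmasks
    (element i at bit i; +1 encoded as 1, -1 as 0)."""
    mask = (1 << n) - 1
    matches = (~(w_bits ^ a_bits)) & mask  # XNOR: 1 where signs agree
    popcount = bin(matches).count("1")     # number of agreeing positions
    return 2 * popcount - n                # agreements minus disagreements

# Example: w = [+1, -1, +1, +1] -> 0b1101, a = [+1, +1, -1, +1] -> 0b1011
# Conventional dot product: 1 - 1 - 1 + 1 = 0
print(binary_dot(0b1101, 0b1011, 4))  # -> 0
```

This is why binarization maps so well to FPGAs: the multiply-accumulate becomes pure bitwise logic, which synthesizes to XNOR gates and an adder tree rather than DSP multipliers.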

