Table-Based Neural Units: Fully Quantizing Networks for Multiply-Free Inference

11 Jun 2019 · Michele Covell, David Marwood, Shumeet Baluja, Nick Johnston

In this work, we propose to quantize all parts of standard classification networks and to replace the activation-weight multiply step with a simple table-based lookup. This approach yields networks that are free of floating-point operations and free of multiplications, making them suitable for direct FPGA and ASIC implementations...
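The core idea of the abstract can be sketched as follows: once both activations and weights are quantized to small codebooks, every possible activation-weight product can be precomputed into a table, so inference performs only lookups and additions. This is a minimal illustrative sketch, not the paper's implementation; the 4-bit uniform codebooks and all names here are hypothetical.

```python
# Hedged sketch (not the paper's exact method): replace the
# activation-weight multiply with a precomputed lookup table,
# so inference needs only indexing and additions.
# Hypothetical 4-bit uniform codebooks for illustration.
A_LEVELS = [i / 15.0 for i in range(16)]                # activation codebook in [0, 1]
W_LEVELS = [-1.0 + 2.0 * i / 15.0 for i in range(16)]   # weight codebook in [-1, 1]

# Precompute all 16 x 16 products once, offline.
PRODUCT_TABLE = [[a * w for w in W_LEVELS] for a in A_LEVELS]

def table_dot(a_codes, w_codes):
    """Dot product using only table lookups and additions at inference time."""
    return sum(PRODUCT_TABLE[a][w] for a, w in zip(a_codes, w_codes))
```

Note that the table removes only the multiplies; the accumulation is still done with additions, which matches the multiply-free (but not addition-free) claim in the title.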
