Power and Accuracy of Multi-Layer Perceptrons (MLPs) under Reduced-voltage FPGA BRAMs Operation

10 May 2020 · Behzad Salami, Osman Unsal, Adrian Cristal

In this paper, we exploit aggressive supply-voltage underscaling of the Block RAMs (BRAMs) in Field-Programmable Gate Arrays (FPGAs) to improve the energy efficiency of Multi-Layer Perceptrons (MLPs), and we evaluate and improve the resilience of this accelerator. Through experiments on several representative FPGA fabrics, we observe that, down to a minimum safe voltage level, i.e., Vmin, the MLP accuracy is not affected. This safe region comprises a large voltage guardband as well as a narrower voltage region in which faults start to appear in the memories due to increased circuit delay; these faults, however, are masked by the MLP, so its accuracy is still not affected. Undervolting below Vmin causes significant accuracy loss as a result of rapidly increasing fault rates. Based on a characterization of these undervolting faults, we propose fault mitigation techniques that effectively improve the resilience of such accelerators. Our evaluation covers four FPGA platforms. On average, we achieve more than 90% energy saving with a negligible accuracy loss of up to 0.1%.
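To make the fault characterization concrete, the sketch below illustrates (in a hypothetical software model, not the authors' FPGA setup) how undervolting-induced BRAM faults can be emulated: the weights of a toy MLP are quantized to 16-bit words, as if stored in on-chip memory, random bit flips are injected at a given per-bit fault rate, and the fraction of changed predictions is measured. The quantization format, fault model, and network sizes are assumptions for illustration only.

```python
# Hypothetical fault-injection sketch: emulate undervolting-induced BRAM bit
# flips in MLP weights and observe how prediction agreement degrades as the
# per-bit fault rate grows. This is not the paper's hardware methodology.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, frac_bits=12):
    """Map float weights to signed 16-bit fixed point (assumed BRAM word size)."""
    return np.clip(np.round(w * (1 << frac_bits)), -32768, 32767).astype(np.int16)

def dequantize(q, frac_bits=12):
    return q.astype(np.float32) / (1 << frac_bits)

def inject_bit_flips(q, fault_rate):
    """Flip each stored bit independently with probability `fault_rate`."""
    bits = np.unpackbits(q.view(np.uint8))
    flips = (rng.random(bits.shape) < fault_rate).astype(np.uint8)
    return np.packbits(np.bitwise_xor(bits, flips)).view(np.int16).reshape(q.shape)

def mlp_forward(x, weights):
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x.argmax(axis=1)

# Toy 2-layer MLP with random weights; in the paper these would be trained
# parameters held in FPGA BRAMs.
layers = [(rng.standard_normal((64, 32)) * 0.1, np.zeros(32)),
          (rng.standard_normal((32, 10)) * 0.1, np.zeros(10))]
x = rng.standard_normal((1000, 64)).astype(np.float32)
baseline = mlp_forward(x, layers)

for fault_rate in [0.0, 1e-6, 1e-4, 1e-2]:
    faulty = [(dequantize(inject_bit_flips(quantize(W), fault_rate)), b)
              for W, b in layers]
    mismatch = np.mean(mlp_forward(x, faulty) != baseline)
    print(f"fault rate {fault_rate:.0e}: {mismatch:.1%} of predictions changed")
```

At very low fault rates the predictions typically stay unchanged, mirroring the masking behavior the paper reports just below the guardband, while high fault rates visibly corrupt the outputs.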
