The intrinsic error tolerance of neural networks (NNs) makes approximate
computing a promising technique for improving the energy efficiency of NN
inference. Conventional approximate computing focuses on balancing the
efficiency-accuracy trade-off for existing pre-trained networks, which can lead
to suboptimal solutions.
In this paper, we propose AxTrain, a hardware-oriented
training framework that facilitates approximate computing for NN inference. Specifically, AxTrain leverages the synergy between two orthogonal
methods: one actively searches for a network parameter distribution with high
error tolerance, and the other passively learns resilient weights by
numerically incorporating the noise distributions of the approximate hardware
into the forward pass during training. Experimental results on
various datasets, under both near-threshold computing and approximate
multiplication strategies, demonstrate that AxTrain obtains resilient
network parameters and improves system energy efficiency.
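For illustration, the passive noise-injection idea can be sketched as below, assuming a PyTorch setting and a multiplicative Gaussian noise model for the approximate hardware; the NoisyLinear layer, the noise scale sigma, and the two-layer network are hypothetical choices for this sketch, not the paper's exact configuration.

# Minimal sketch of noise-injection training: the forward pass perturbs
# the weights with hardware-like noise, so optimization learns weights
# that stay accurate under that perturbation. The multiplicative
# Gaussian model and sigma=0.05 are illustrative assumptions.
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    def __init__(self, in_features, out_features, sigma=0.05):
        super().__init__(in_features, out_features)
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            # Sample fresh noise each forward pass; gradients flow
            # through the perturbed weights during backpropagation.
            noise = 1.0 + self.sigma * torch.randn_like(self.weight)
            return nn.functional.linear(x, self.weight * noise, self.bias)
        # Inference uses the clean learned weights.
        return super().forward(x)

model = nn.Sequential(NoisyLinear(784, 256), nn.ReLU(), NoisyLinear(256, 10))

In this sketch the noise model is a stand-in: in practice it would be replaced by a numerical model of the target approximate hardware, such as near-threshold voltage-scaling error or approximate-multiplier error distributions.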