XNOR-Net++: Improved Binary Neural Networks

30 Sep 2019  ·  Adrian Bulat, Georgios Tzimiropoulos ·

This paper proposes an improved training algorithm for binary neural networks, in which both weights and activations are binary. A key but fairly overlooked feature of XNOR-Net, the current state-of-the-art method, is its use of analytically calculated real-valued scaling factors for re-weighting the output of binary convolutions. We argue that analytic calculation of these factors is sub-optimal. Instead, in this work, we make the following contributions: (a) we propose to fuse the activation and weight scaling factors into a single one that is learned discriminatively via backpropagation; (b) more importantly, we explore several ways of constructing the shape of the scale factors while keeping the computational budget fixed; (c) we empirically measure the accuracy of our approximations and show that they are significantly more accurate than the analytically calculated ones; (d) we show that our approach significantly outperforms XNOR-Net within the same computational budget when tested on the challenging task of ImageNet classification, offering up to 6% accuracy gain.
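The core idea in (a) can be sketched in a few lines: instead of XNOR-Net's analytic per-layer weight scale and per-position activation scale, a single scale tensor re-weights the output of the binary product and is treated as a trainable parameter. The sketch below, a simplified NumPy illustration (using a dense layer rather than a convolution, with hypothetical names `binarize`, `binary_linear`, and `gamma`), shows only the forward pass; in the actual method `gamma` would be learned by backpropagation.

```python
import numpy as np

def binarize(x):
    # Sign binarization to {-1, +1}; zero maps to +1.
    return np.where(x >= 0, 1.0, -1.0)

def binary_linear(a, w, gamma):
    # XNOR-Net++-style layer: the output of the binary product is
    # re-weighted by a single learned scale `gamma` (here a
    # per-output-channel vector), replacing XNOR-Net's analytically
    # computed weight and activation scaling factors.
    return (binarize(a) @ binarize(w).T) * gamma

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))    # activations: (batch, in_features)
w = rng.standard_normal((16, 8))   # real-valued latent weights
gamma = np.full(16, 0.1)           # learned via backprop in practice

out = binary_linear(a, w, gamma)
print(out.shape)                   # (4, 16)
```

Contribution (b) concerns the shape of `gamma`: with the output re-weighted after the binary product, the scale can be a scalar, a per-channel vector, or a spatially varying tensor at no extra binary-operation cost, since the re-weighting is a cheap real-valued elementwise multiply.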



Task                                        Dataset   Model      Metric          Value  Global Rank
Classification with Binary Neural Network   ImageNet  ResNet-18  Top-1 Accuracy  57.1   #4
                                                                 Top-5 Accuracy  79.9   #1

