Sparsifying Binary Networks

11 Jul 2022 · Riccardo Schiavone, Maria A. Zuluaga

Binary neural networks (BNNs) have demonstrated their ability to solve complex tasks with accuracy comparable to that of full-precision deep neural networks (DNNs), while reducing computational power and storage requirements and increasing processing speed. These properties make them an attractive alternative for the development and deployment of DNN-based applications on Internet-of-Things (IoT) devices. Despite recent improvements, BNNs suffer from a fixed and limited compression factor that may prove insufficient for devices with very limited resources. In this work, we propose sparse binary neural networks (SBNNs), a novel model and training scheme that introduces sparsity into BNNs, together with a new quantization function for binarizing the network's weights. The proposed SBNN achieves high compression factors and reduces the number of operations and parameters at inference time. We also provide tools to assist SBNN design while respecting hardware resource constraints. We study the generalization properties of our method for different compression factors through a set of experiments on linear and convolutional networks over three datasets. Our experiments confirm that SBNNs can achieve high compression rates without compromising generalization, while further reducing the operations of BNNs, making SBNNs a viable option for deploying DNNs on low-cost, resource-limited IoT devices and sensors.
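The abstract describes SBNNs as combining weight binarization with sparsity. As a rough illustration of that idea only (the paper's actual quantization function and training scheme are not specified on this page), the sketch below zeroes out all but the largest-magnitude weights and maps the surviving ones to ±1. The function name `sparse_binarize`, the top-k magnitude criterion, and the reading of "SBNN-5%" as keeping roughly 5% of the weights are assumptions made for illustration.

```python
import torch

def sparse_binarize(w: torch.Tensor, keep_ratio: float = 0.05) -> torch.Tensor:
    """Illustrative sparse binarization: keep only the `keep_ratio` fraction of
    weights with the largest magnitude, mapped to {-1, +1}; zero out the rest.
    This is a sketch of the general idea, not the paper's quantization function.
    """
    flat = w.abs().flatten()
    k = max(1, int(round(keep_ratio * flat.numel())))   # number of weights kept non-zero
    threshold = flat.topk(k).values.min()                # magnitude of the k-th largest weight
    mask = (w.abs() >= threshold).to(w.dtype)            # 1 where the weight is kept, 0 elsewhere
    return torch.sign(w) * mask                          # values in {-1, 0, +1}, ~keep_ratio non-zero

# Toy usage: an "SBNN-5%"-style layer (assumption: 5% of weights kept non-zero).
w = torch.randn(64, 128)
wb = sparse_binarize(w, keep_ratio=0.05)
print((wb != 0).float().mean().item())  # prints a value close to 0.05
```

In practice such a quantizer would be paired with a straight-through estimator or a similar trick so that gradients can flow through the non-differentiable sign and masking steps during training.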

Task: Sparse Learning and binarization (metric: Acc@1)

| Dataset   | Model    | Acc@1 | Global Rank |
|-----------|----------|-------|-------------|
| CIFAR-10  | SBNN-50% | 88.63 | #1 |
| CIFAR-10  | SBNN-25% | 88.45 | #2 |
| CIFAR-10  | SBNN-10% | 88.05 | #3 |
| CIFAR-10  | SBNN-5%  | 86.63 | #4 |
| CIFAR-10  | SBNN-2%  | 84.73 | #5 |
| CIFAR-10  | SBNN-1%  | 78.38 | #6 |
| CIFAR-100 | SBNN-50% | 64.65 | #1 |
| CIFAR-100 | SBNN-25% | 64.42 | #2 |
| CIFAR-100 | SBNN-10% | 62.62 | #3 |
| CIFAR-100 | SBNN-5%  | 60.62 | #4 |
| CIFAR-100 | SBNN-2%  | 55.75 | #5 |
| CIFAR-100 | SBNN-1%  | 50.47 | #6 |
| MNIST     | SBNN-5%  | 98.56 | #1 |
| MNIST     | SBNN-10% | 98.55 | #2 |
| MNIST     | SBNN-50% | 98.45 | #3 |
| MNIST     | SBNN-2%  | 98.31 | #4 |
| MNIST     | SBNN-1%  | 98.2  | #5 |
