| Trend | Dataset | Best Method | Paper title | Paper | Code | Compare |
| --- | --- | --- | --- | --- | --- | --- |
Binary Neural Networks (BNNs) show promising progress in reducing computational and memory costs but suffer from substantial accuracy degradation compared to their real-valued counterparts on large-scale datasets, e.g., ImageNet.
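For readers new to the idea, below is a minimal NumPy sketch of weight binarization; the per-tensor scale `alpha` is an illustrative choice and not the scheme of any specific paper listed here.

```python
import numpy as np

def binarize(w):
    # Binarize real-valued weights to +/- alpha, where alpha is a per-tensor
    # scale chosen to preserve the average magnitude (illustrative choice,
    # not any particular paper's method).
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.random.randn(64, 64).astype(np.float32)
wb = binarize(w)
# Only 1 bit per weight plus one scale needs to be stored, which is where
# the memory and compute savings come from.
print(np.unique(wb))            # two values: -alpha and +alpha
print(np.abs(w - wb).mean())    # the quantization error behind the accuracy gap
```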
In this work, a deep learning-based quantization scheme for log-likelihood ratio (L-value) storage is introduced.
To this end, once the model is trained, a sequence of binary codes can be generated, and the code length can be easily controlled by adjusting the number of recurrent iterations.
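A minimal sketch of this idea, assuming a recurrent encoder that emits one block of bits per iteration; the random projections below are a hypothetical stand-in for the trained encoder.

```python
import numpy as np

def encode(x, num_iters, bits_per_iter=8, seed=0):
    # Each "recurrent" iteration emits one block of bits for the current
    # residual, so the total code length is num_iters * bits_per_iter.
    rng = np.random.default_rng(seed)
    residual, blocks = x.copy(), []
    for _ in range(num_iters):
        proj = rng.standard_normal((bits_per_iter, x.size))
        bits = (proj @ residual > 0).astype(np.uint8)
        blocks.append(bits)
        # A trained decoder would reconstruct from the bits; here we only
        # shrink the residual to mimic the progressive refinement.
        residual = residual * 0.5
    return np.concatenate(blocks)

x = np.random.randn(32)
print(encode(x, num_iters=2).size, encode(x, num_iters=5).size)  # 16 vs. 40 bits
```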
In this work, we propose a deep progressive quantization (DPQ) model, as an alternative to PQ, for large-scale image retrieval.
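A hedged sketch of progressive (residual) quantization, assuming one codebook per stage: each stage quantizes the residual left by the previous stages, so codes can be truncated to any prefix length, unlike product quantization's fixed sub-spaces. This plain-NumPy illustration shows the general idea, not the DPQ model itself.

```python
import numpy as np

def residual_quantize(x, codebooks):
    # Progressively quantize x: each stage encodes the residual left by the
    # previous stages, accumulating a coarse-to-fine approximation.
    residual, codes, approx = x.copy(), [], np.zeros_like(x)
    for cb in codebooks:                                   # one codebook per stage
        idx = np.argmin(np.linalg.norm(cb - residual, axis=1))
        codes.append(idx)
        approx += cb[idx]
        residual = x - approx
    return codes, approx

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
codebooks = [rng.standard_normal((256, 8)) for _ in range(4)]  # 4 stages -> 4-byte code
codes, approx = residual_quantize(x, codebooks)
print(codes, np.linalg.norm(x - approx))
```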
The deep layers of modern neural networks extract a rather rich set of features as an input propagates through the network.
As a result, PSP maintains prediction performance while creating a substantial amount of structured sparsity that is easy and efficient to map to the massively parallel processors required for maximum compute power and energy efficiency.
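To illustrate what structured sparsity means here, the small NumPy sketch below prunes whole output channels of a convolution weight by L1 norm; the criterion and ratio are illustrative assumptions, not necessarily what PSP uses.

```python
import numpy as np

def channel_prune(w, keep_ratio=0.5):
    # Structured sparsity: remove entire output channels rather than
    # individual weights, so the surviving tensor stays dense and maps
    # efficiently onto parallel hardware.
    norms = np.abs(w.reshape(w.shape[0], -1)).sum(axis=1)   # per-channel L1 norm
    keep = np.argsort(norms)[-int(keep_ratio * w.shape[0]):]
    mask = np.zeros(w.shape[0], dtype=bool)
    mask[keep] = True
    return w * mask[:, None, None, None], mask

w = np.random.randn(64, 32, 3, 3)          # (out_ch, in_ch, k, k)
w_pruned, mask = channel_prune(w)
print(mask.sum(), "of", w.shape[0], "channels kept")
```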
Specifically, any convolution layer of the CNN can be replaced by two successive convolution layers: the first is a set of fixed filters (representing the knowledge space of the entire layer, which do not change), followed by a layer of one-dimensional filters (representing the learned knowledge in this space).
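A minimal PyTorch sketch of this replacement, assuming a frozen bank of `num_basis` spatial filters followed by learnable pointwise filters; the module name, basis count, and initialization are illustrative, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class BasisConv(nn.Module):
    # A frozen bank of spatial filters (the shared "knowledge space")
    # followed by a learnable 1x1 convolution that mixes the bank's
    # responses into each output channel.
    def __init__(self, in_ch, out_ch, num_basis=16, k=3):
        super().__init__()
        self.basis = nn.Conv2d(in_ch, num_basis, k, padding=k // 2, bias=False)
        self.basis.weight.requires_grad_(False)      # fixed filters, never updated
        self.mix = nn.Conv2d(num_basis, out_ch, 1)   # learned one-dimensional (pointwise) filters

    def forward(self, x):
        return self.mix(self.basis(x))

layer = BasisConv(in_ch=64, out_ch=128)
y = layer(torch.randn(1, 64, 32, 32))
print(y.shape)   # torch.Size([1, 128, 32, 32])
```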
We show results that are within 1.6% of the reported, non-quantized performance on MobileNet using only 40 entries in our table.
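As a sketch of table-based weight quantization, here is a simple one-dimensional k-means codebook in NumPy; the entry count of 40 matches the quoted result, but this clustering procedure is only an assumption, not the paper's method.

```python
import numpy as np

def lut_quantize(w, num_entries=40, iters=20):
    # Cluster all weights to num_entries shared values (1-D Lloyd/k-means),
    # then store only the table plus a small index per weight.
    flat = w.ravel()
    table = np.linspace(flat.min(), flat.max(), num_entries)   # initial entries
    for _ in range(iters):
        idx = np.abs(flat[:, None] - table[None, :]).argmin(axis=1)
        for j in range(num_entries):
            if np.any(idx == j):
                table[j] = flat[idx == j].mean()
    idx = np.abs(flat[:, None] - table[None, :]).argmin(axis=1)
    return table, idx.reshape(w.shape)

w = np.random.randn(256, 64).astype(np.float32)
table, idx = lut_quantize(w)
w_hat = table[idx]                      # dequantized weights
print(np.abs(w - w_hat).mean())         # small reconstruction error
```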