
Large-margin Learning of Compact Binary Image Encodings

The use of high-dimensional features has become standard practice in many computer vision applications. The large dimensionality of these features, however, limits the number of data points that can be effectively stored and processed. We address this problem by developing a novel approach to learning a compact binary encoding, which exploits both pair-wise proximity and class-label information on the training data set. Exploiting this extra information allows us to develop encodings that, although compact, outperform the original high-dimensional features in terms of final classification or retrieval performance. The method is general, in that it is applicable to both non-parametric and parametric learning methods. This generality means that the embedded features are suitable for a wide variety of computer vision tasks, such as image classification and content-based image retrieval. Experimental results demonstrate that the new compact descriptor achieves an accuracy comparable to, and in some cases better than, that of the visual descriptor in the original space, despite being significantly more compact. Moreover, any convex loss function and convex regularization penalty (e.g., the $ \ell_p $ norm with $ p \ge 1 $) can be incorporated into the framework, which provides additional flexibility.
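To make the general idea concrete, the sketch below shows one simple way to learn a linear binary encoding from pair-wise class-label supervision with a large-margin (hinge-style) pairwise loss and an $\ell_2$ penalty. This is a minimal illustration under our own assumptions, not the paper's exact objective or optimization procedure: the tanh relaxation, the SGD updates, and the function names `train_binary_encoder` and `encode` are all hypothetical.

```python
import numpy as np

def train_binary_encoder(X, y, n_bits=32, margin=1.0, lam=1e-3,
                         lr=0.1, epochs=20, seed=None):
    """Learn a linear projection W so that sign(X @ W) gives compact binary
    codes whose agreement respects class labels.

    Hypothetical large-margin formulation (not the paper's method): for a
    random pair (i, j), push s_ij * (b_i . b_j) / n_bits above a margin,
    where s_ij = +1 for same-class pairs and -1 otherwise; the sign is
    relaxed to tanh so the objective is differentiable, and an l2 penalty
    on W serves as the convex regularizer.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, n_bits))
    for _ in range(epochs):
        for _ in range(n):
            i, j = rng.integers(0, n, size=2)
            s = 1.0 if y[i] == y[j] else -1.0
            bi, bj = np.tanh(X[i] @ W), np.tanh(X[j] @ W)
            agreement = s * (bi @ bj) / n_bits
            if agreement < margin:  # hinge is active for this pair
                # gradient of -s * (b_i . b_j) / n_bits with respect to W
                gi = (1 - bi ** 2) * bj
                gj = (1 - bj ** 2) * bi
                grad = -s * (np.outer(X[i], gi) + np.outer(X[j], gj)) / n_bits
                W -= lr * (grad + lam * W)
    return W

def encode(X, W):
    """Map real-valued features to compact binary codes (one bit per column)."""
    return (X @ W > 0).astype(np.uint8)
```

At query time, retrieval or nearest-neighbour classification would then operate on Hamming distances between the compact codes (e.g., via XOR and popcount on packed bits), rather than on the original high-dimensional features.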
