
Learning Mid-Level Features and Modeling Neuron Selectivity for Image Classification

We now know that mid-level features can greatly enhance the performance of image learning, but how to learn image features automatically, efficiently, and in an unsupervised manner remains an open question. In this paper, we present a very efficient mid-level feature learning approach (MidFea), which involves only simple operations such as $k$-means clustering, convolution, pooling, vector quantization and random projection. We explain why this simple method generates the desired features, and argue that there is no need to spend much time on learning low-level feature extractors. Furthermore, to boost performance, we propose to model the neuron selectivity (NS) principle by building an additional layer over the mid-level features before feeding them into the classifier. We show that the NS-layer learns category-specific neurons with both bottom-up inference and top-down analysis, and thus supports fast inference for a query image. We run extensive experiments on several public databases to demonstrate that our approach achieves state-of-the-art performance for face recognition, gender classification, age estimation and object categorization. In particular, we demonstrate that our approach is more than an order of magnitude faster than some recently proposed sparse coding based methods.
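
The abstract lists the operations in the MidFea pipeline ($k$-means clustering, convolution, pooling, vector quantization, random projection) without detail. Below is a minimal Python sketch of how such a pipeline could be assembled; it is an illustration under assumed settings (grayscale images, 6x6 patches, hypothetical filter-bank, codebook and projection sizes), not the authors' implementation. In particular, the mid-level codebook here is a random placeholder, whereas the paper presumably learns it from data.

```python
# Sketch of a MidFea-style pipeline (illustrative only; sizes and codebook are assumptions).
import numpy as np
from scipy.signal import correlate2d
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def learn_filters(images, patch=6, n_filters=32):
    """Step 1: k-means on contrast-normalized random patches gives a low-level filter bank."""
    patches = []
    for img in images:
        for _ in range(50):
            y = rng.integers(0, img.shape[0] - patch)
            x = rng.integers(0, img.shape[1] - patch)
            p = img[y:y + patch, x:x + patch].ravel()
            patches.append((p - p.mean()) / (p.std() + 1e-8))
    km = KMeans(n_clusters=n_filters, n_init=4, random_state=0).fit(np.array(patches))
    return km.cluster_centers_.reshape(n_filters, patch, patch)

def encode(img, filters, codebook, projection, pool=4):
    """Steps 2-5: convolution, pooling, vector quantization, random projection."""
    # Convolve with each k-means filter and rectify.
    maps = np.stack([np.maximum(correlate2d(img, f, mode='valid'), 0) for f in filters])
    # Max-pool over non-overlapping pool x pool cells.
    C, H, W = maps.shape
    H, W = H - H % pool, W - W % pool
    pooled = maps[:, :H, :W].reshape(C, H // pool, pool, W // pool, pool).max(axis=(2, 4))
    # Vector quantization: hard-assign each pooled response vector to its nearest codeword.
    vecs = pooled.reshape(C, -1).T                       # one C-dim vector per spatial cell
    idx = np.argmin(((vecs[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    # Random projection to a fixed-length mid-level feature.
    return hist @ projection

# Toy usage on random arrays standing in for real images.
images = [rng.random((32, 32)) for _ in range(10)]
filters = learn_filters(images)
codebook = rng.standard_normal((64, len(filters)))       # placeholder mid-level codebook
projection = rng.standard_normal((64, 256)) / np.sqrt(256)
feature = encode(images[0], filters, codebook, projection)
print(feature.shape)                                     # (256,)
```

Every stage is a cheap, closed-form operation (no iterative dictionary learning or sparse inference at encoding time), which is consistent with the abstract's claim of being more than an order of magnitude faster than sparse coding based methods.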
