Label-Embedding for Image Classification

30 Mar 2015 · Zeynep Akata, Florent Perronnin, Zaid Harchaoui, Cordelia Schmid

Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of, or in addition to, attributes, such as class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings, from zero-shot learning to regular learning with a large number of labeled examples.
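The compatibility function described in the abstract is bilinear: an image feature vector and a class (attribute) embedding are scored through a learned matrix, and the matrix is trained with a ranking objective so that the correct class scores highest. Below is a minimal NumPy sketch of that idea, assuming image features theta(x), per-class attribute embeddings phi(y), and a simple pairwise hinge-ranking SGD update; the function names and the update rule are illustrative, not the authors' exact training procedure.

```python
import numpy as np

def compatibility(W, x_feat, class_embeddings):
    """Score F(x, y) = theta(x)^T W phi(y) for every class y.

    x_feat: (D,) image feature, class_embeddings: (C, E), W: (D, E).
    Returns a (C,) vector of compatibility scores.
    """
    return x_feat @ W @ class_embeddings.T

def ranking_sgd_step(W, x_feat, y_true, class_embeddings, lr=0.01, margin=1.0):
    """One pairwise hinge-ranking update (illustrative, not the paper's exact scheme).

    Finds the most violating class and nudges W so the true class
    scores at least `margin` above it.
    """
    scores = compatibility(W, x_feat, class_embeddings)
    shifted = scores + margin * (np.arange(len(scores)) != y_true)
    y_neg = int(np.argmax(shifted))
    if y_neg != y_true and scores[y_neg] + margin > scores[y_true]:
        # Gradient step on max(0, margin + F(x, y_neg) - F(x, y_true)) w.r.t. W.
        diff = class_embeddings[y_true] - class_embeddings[y_neg]
        W += lr * np.outer(x_feat, diff)
    return W

def predict(W, x_feat, class_embeddings):
    """Pick the class whose embedding is most compatible with the image."""
    return int(np.argmax(compatibility(W, x_feat, class_embeddings)))
```

Because prediction only needs a class embedding, unseen classes can be scored at test time by plugging in their attribute vectors (or embeddings from a class hierarchy or text), which is what makes the framework applicable to zero-shot learning without retraining.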

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Action Recognition | Kinetics | ALE | Top-1 Accuracy | 23.4 | #15 |
| Zero-Shot Action Recognition | Kinetics | ALE | Top-5 Accuracy | 50.3 | #11 |
| Multi-label Zero-Shot Learning | Open Images V4 | LabelEM | mAP | 40.5 | #7 |
