Learning Robust Visual-Semantic Embeddings

Many existing methods for learning a joint embedding of images and text use only supervised information from paired images and their textual attributes. Taking advantage of the recent success of unsupervised learning in deep neural networks, we propose an end-to-end learning framework that extracts more robust multi-modal representations across domains. The proposed method combines representation learning models (i.e., auto-encoders) with cross-domain learning criteria (i.e., the Maximum Mean Discrepancy loss) to learn joint embeddings for semantic and visual features. A novel unsupervised-data adaptation inference technique is introduced to construct more comprehensive embeddings for both labeled and unlabeled data. We evaluate our method on the Animals with Attributes and Caltech-UCSD Birds-200-2011 datasets across a wide range of applications, including zero- and few-shot image recognition and retrieval, in both inductive and transductive settings. Empirically, we show that our framework improves over the current state of the art on many of the considered tasks.
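To make the described combination concrete, below is a minimal sketch (not the authors' code) of how per-modality auto-encoders can be paired with an MMD loss that aligns the visual and semantic latent spaces. Layer sizes, feature dimensions, and the Gaussian-kernel bandwidth are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: auto-encoders for each modality plus an MMD alignment loss.
import torch
import torch.nn as nn


class ModalityAutoEncoder(nn.Module):
    """Encode one modality into a shared-size latent space and reconstruct it."""

    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def gaussian_mmd(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy (biased estimator) with a Gaussian kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


# One illustrative training step: reconstruct each modality and pull the two
# latent distributions together with the MMD term.
vis_ae, sem_ae = ModalityAutoEncoder(2048), ModalityAutoEncoder(312)
opt = torch.optim.Adam(list(vis_ae.parameters()) + list(sem_ae.parameters()), lr=1e-3)
recon = nn.MSELoss()

vis_feats = torch.randn(32, 2048)   # placeholder for CNN image features
sem_feats = torch.randn(32, 312)    # placeholder for class-attribute vectors

z_v, rec_v = vis_ae(vis_feats)
z_s, rec_s = sem_ae(sem_feats)
loss = recon(rec_v, vis_feats) + recon(rec_s, sem_feats) + gaussian_mmd(z_v, z_s)
opt.zero_grad()
loss.backward()
opt.step()
```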

Task                           Dataset  Model   Metric Name                    Metric Value  Global Rank
Generalized Few-Shot Learning  AwA2     REVISE  Per-Class Accuracy (1-shot)    56.1          #6
Generalized Few-Shot Learning  AwA2     REVISE  Per-Class Accuracy (2-shots)   60.3          #6
Generalized Few-Shot Learning  AwA2     REVISE  Per-Class Accuracy (5-shots)   64.1          #6
Generalized Few-Shot Learning  AwA2     REVISE  Per-Class Accuracy (10-shots)  67.8          #6
Generalized Few-Shot Learning  CUB      REVISE  Per-Class Accuracy (1-shot)    36.3          #5
Generalized Few-Shot Learning  CUB      REVISE  Per-Class Accuracy (2-shots)   41.1          #5
Generalized Few-Shot Learning  CUB      REVISE  Per-Class Accuracy (5-shots)   44.6          #5
Generalized Few-Shot Learning  CUB      REVISE  Per-Class Accuracy (10-shots)  50.9          #5
Generalized Few-Shot Learning  SUN      REVISE  Per-Class Accuracy (1-shot)    27.4          #5
Generalized Few-Shot Learning  SUN      REVISE  Per-Class Accuracy (2-shots)   33.4          #5
Generalized Few-Shot Learning  SUN      REVISE  Per-Class Accuracy (5-shots)   37.4          #5
Generalized Few-Shot Learning  SUN      REVISE  Per-Class Accuracy (10-shots)  40.8          #5
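The metric in the table is per-class accuracy, i.e., accuracy is computed separately for each class and then averaged, so rare classes weigh as much as frequent ones. A minimal sketch of that computation is below; variable names are illustrative, not from the benchmark code.

```python
# Per-class (macro-averaged) accuracy, as reported in the table above.
import numpy as np

def per_class_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    per_class = [(y_pred[y_true == c] == c).mean() for c in classes]
    return float(np.mean(per_class))

# Toy example: class 0 is correct 2/3 of the time, classes 1 and 2 always correct.
y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2])
print(per_class_accuracy(y_true, y_pred))  # (2/3 + 1 + 1) / 3 ≈ 0.889
```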
