Polarity Loss for Zero-shot Object Detection

22 Nov 2018 · Shafin Rahman, Salman Khan, Nick Barnes

Conventional object detection models require large amounts of training data. In comparison, humans can recognize previously unseen objects merely by knowing their semantic description. To mimic similar behaviour, zero-shot object detection aims to recognize and localize 'unseen' object instances using only their semantic information. The model is first trained to learn the relationships between the visual and semantic domains for seen objects, and later transfers the acquired knowledge to totally unseen objects. This setting gives rise to the need for correct alignment between visual and semantic concepts, so that unseen objects can be identified using only their semantic attributes. In this paper, we propose a novel loss function, called 'Polarity loss', that promotes correct visual-semantic alignment for improved zero-shot object detection. On one hand, it refines the noisy semantic embeddings via metric learning on a 'Semantic vocabulary' of related concepts to establish a better synergy between the visual and semantic domains. On the other hand, it explicitly maximizes the gap between positive and negative predictions to achieve better discrimination among seen, unseen and background objects. Our approach is inspired by embodiment theories in cognitive science, which claim that human semantic understanding is grounded in past experiences (seen objects), related linguistic concepts (word vocabulary) and visual perception (seen/unseen object images). We conduct extensive evaluations on the MS-COCO and Pascal VOC datasets, showing significant improvements over the state of the art.
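The gap-maximizing idea can be illustrated with a short sketch. The snippet below is a minimal NumPy illustration of the general principle only, assuming a RetinaNet-style per-class focal loss re-weighted by a sigmoid penalty on the difference between each class score and the ground-truth class score; the function names, the sigmoid penalty, and all hyper-parameter values are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: focal loss re-weighted by a monotonic penalty on the gap between
# each class prediction and the ground-truth class prediction, so that
# negative classes scoring close to (or above) the true class are up-weighted.
# All names and hyper-parameters here are assumptions for illustration.
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Per-class binary focal loss (RetinaNet-style)."""
    p = np.clip(p, eps, 1.0 - eps)
    pos = -alpha * y * (1.0 - p) ** gamma * np.log(p)
    neg = -(1.0 - alpha) * (1.0 - y) * p ** gamma * np.log(1.0 - p)
    return pos + neg

def polarity_loss(p, y, beta=5.0, alpha=0.25, gamma=2.0):
    """Focal loss weighted by a sigmoid penalty on (p_i - p_gt).

    p : (num_classes,) sigmoid class scores for one anchor box
    y : (num_classes,) one-hot ground truth (all zeros for background)
    """
    # Score of the ground-truth class (0 for a background anchor).
    p_gt = float(np.sum(p * y))
    gap = p - p_gt                                # > 0 for wrongly ranked classes
    penalty = 1.0 / (1.0 + np.exp(-beta * gap))   # monotonic sigmoid penalty
    return np.sum(penalty * focal_loss(p, y, alpha, gamma))

# Example: the true class scores 0.6 while a confusing negative scores 0.7;
# the penalty emphasizes that negative class more than plain focal loss would.
scores = np.array([0.6, 0.7, 0.1])
target = np.array([1.0, 0.0, 0.0])
print(polarity_loss(scores, target))
```

Enlarging the gap between the true-class score and competing class scores in this way is what drives the sharper separation between seen, unseen and background objects described above.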


Datasets

MS-COCO · PASCAL VOC 2007

Results
Task                                   | Dataset       | Model             | Metric     | Value | Global Rank
Generalized Zero-Shot Object Detection | MS-COCO       | PL                | HM(mAP)    | 18.18 | #8
Generalized Zero-Shot Object Detection | MS-COCO       | PL                | HM(Recall) | 36.76 | #8
Zero-Shot Object Detection             | MS-COCO       | ZSD-Polarity Loss | mAP        | 12.62 | #8
Zero-Shot Object Detection             | MS-COCO       | ZSD-Polarity Loss | Recall     | 43.56 | #8
Zero-Shot Object Detection             | PASCAL VOC'07 | PL                | mAP        | 62.10 | #5

Methods

Polarity Loss