Visual Relationship Detection with Language Priors

31 Jul 2016  ·  Cewu Lu, Ranjay Krishna, Michael Bernstein, Li Fei-Fei

Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. "man riding bicycle" and "man pushing bicycle"). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. "man" and "bicycle") and predicates (e.g. "riding" and "pushing") independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to fine-tune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content-based image retrieval.
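The abstract outlines a decomposition: score the two objects and the predicate with separately trained visual models, then reweight each candidate triple with a language prior computed from word embeddings of the subject and object. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's implementation; the toy confidences, the random stand-in embeddings, and the helper names (visual_object_score, language_prior, etc.) are all hypothetical.

import numpy as np

# Toy vocabulary; the real model works over detector classes and a trained predicate classifier.
OBJECTS = ["man", "bicycle", "dog"]
PREDICATES = ["riding", "pushing", "next to"]

rng = np.random.default_rng(0)

# Stand-ins for pretrained word embeddings (e.g. word2vec); random vectors here.
word_vec = {w: rng.normal(size=50) for w in OBJECTS + PREDICATES}

def visual_object_score(obj):
    # Hypothetical object-detector confidence for a box labeled `obj`.
    return rng.uniform(0.5, 1.0)

def visual_predicate_score(pred):
    # Hypothetical predicate-classifier confidence on the union box.
    return rng.uniform(0.1, 1.0)

# Language prior: a learned projection of the concatenated subject/object
# embeddings scores how plausible each predicate is for that object pair.
W = rng.normal(size=(len(PREDICATES), 100))
b = rng.normal(size=len(PREDICATES))

def language_prior(subj, pred, obj):
    x = np.concatenate([word_vec[subj], word_vec[obj]])
    logits = W @ x + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[PREDICATES.index(pred)]

def relationship_score(subj, pred, obj):
    # Visual evidence for the two objects and the predicate,
    # modulated by the language prior over the triple.
    visual = (visual_object_score(subj)
              * visual_object_score(obj)
              * visual_predicate_score(pred))
    return visual * language_prior(subj, pred, obj)

# Rank all candidate predicates for one (subject, object) pair.
candidates = [("man", p, "bicycle") for p in PREDICATES]
ranked = sorted(candidates, key=lambda t: relationship_score(*t), reverse=True)
print(ranked)

Because the object and predicate models are trained independently, relationships never seen during training can still be scored, which is what lets the approach scale to thousands of relationship types from few examples.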


Datasets


Introduced in the Paper:

VRD

Results from the Paper


Task                           | Dataset                    | Model           | Metric    | Value | Global Rank
Scene Graph Generation         | VRD                        | VRD             | Recall@50 | 18.16 | #2
Visual Relationship Detection  | VRD Phrase Detection       | Lu et al. 2016  | R@100     | 17.03 | #7
Visual Relationship Detection  | VRD Phrase Detection       | Lu et al. 2016  | R@50      | 16.17 | #7
Visual Relationship Detection  | VRD Predicate Detection    | Lu et al. 2016  | R@100     | 47.87 | #6
Visual Relationship Detection  | VRD Predicate Detection    | Lu et al. 2016  | R@50      | 47.87 | #6
Visual Relationship Detection  | VRD Relationship Detection | Lu et al. 2016  | R@100     | 14.70 | #8
Visual Relationship Detection  | VRD Relationship Detection | Lu et al. 2016  | R@50      | 13.86 | #8
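The R@50 and R@100 values above are Recall@K: the fraction of ground-truth relationships recovered among the model's top K scored predictions per image, averaged over the dataset. A minimal per-image sketch of that metric is below; the prediction format and function name are illustrative, not taken from the benchmark code, and the box-overlap (IoU) matching used in the actual evaluation is omitted.

def recall_at_k(predictions, ground_truth, k):
    """Recall@K for one image.

    predictions : list of (score, triple) pairs, where triple identifies a
                  (subject, predicate, object) detection; illustrative format.
    ground_truth: set of annotated triples for the image.
    """
    top_k = {triple for _, triple in
             sorted(predictions, key=lambda p: p[0], reverse=True)[:k]}
    if not ground_truth:
        return 1.0
    return len(top_k & ground_truth) / len(ground_truth)

# Dataset-level Recall@K is the mean of this value over all test images;
# the real protocol also requires predicted boxes to overlap ground truth
# (IoU >= 0.5), which this sketch does not check.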

Methods


No methods listed for this paper.