Visual Compositional Learning for Human-Object Interaction Detection

ECCV 2020  ·  Zhi Hou, Xiaojiang Peng, Yu Qiao, Dacheng Tao

Human-Object Interaction (HOI) detection aims to localize humans and objects in an image and infer the relationships between them. It is challenging because the enormous number of possible object-verb combinations forms a long-tail distribution. We devise a deep Visual Compositional Learning (VCL) framework, a simple yet efficient approach to this problem. VCL first decomposes an HOI representation into object-specific and verb-specific features, and then composes new interaction samples in the feature space by stitching the decomposed features together. The integration of decomposition and composition enables VCL to share object and verb features across different HOI samples and images, and to generate new interaction samples and new HOI types, thereby largely alleviating the long-tail distribution problem and benefiting low-shot and zero-shot HOI detection. Extensive experiments demonstrate that VCL effectively improves the generalization of HOI detection on HICO-DET and V-COCO and outperforms recent state-of-the-art methods on HICO-DET. Code is available at
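The compose step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `compose_interactions` and the use of simple feature concatenation are assumptions; the key idea it shows is pairing a verb feature from one sample with an object feature from a different sample to synthesize new HOI training examples.

```python
import numpy as np

def compose_interactions(verb_feats, verb_labels, obj_feats, obj_labels):
    """Stitch decomposed verb and object features across samples to
    synthesize new verb-object interaction samples (hypothetical sketch).

    verb_feats, obj_feats: arrays of shape (N, D) holding the decomposed
    verb-specific and object-specific features of N HOI samples.
    Returns a list of (composed_feature, (verb_label, obj_label)) pairs.
    """
    composed = []
    for i, v in enumerate(verb_feats):
        for j, o in enumerate(obj_feats):
            if i == j:
                continue  # skip the original pairing; we want new combinations
            # Concatenation stands in for whatever fusion the HOI head uses.
            composed.append((np.concatenate([v, o]),
                             (verb_labels[i], obj_labels[j])))
    return composed
```

Because the composed pairs can combine a verb and an object that never co-occurred in training, this mechanism can produce samples for unseen HOI categories, which is what enables the zero-shot setting.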

Task                   | Dataset                    | Model | Metric        | Value | Global Rank
-----------------------|----------------------------|-------|---------------|-------|------------
Affordance Recognition | HICO-DET                   | VCL   | HICO          | 43.15 | #3
Affordance Recognition | HICO-DET                   | VCL   | COCO-Val2017  | 36.74 | #3
Affordance Recognition | HICO-DET                   | VCL   | Object365     | 35.73 | #3
Affordance Recognition | HICO-DET                   | VCL   | Novel Classes | 12.05 | #3
Affordance Recognition | HICO-DET (Unknown Concepts)| VCL   | COCO-Val2017  | 28.71 | #3
Affordance Recognition | HICO-DET (Unknown Concepts)| VCL   | Object365     | 27.58 | #3
Affordance Recognition | HICO-DET (Unknown Concepts)| VCL   | HICO          | 32.76 | #3
Affordance Recognition | HICO-DET (Unknown Concepts)| VCL   | Novel Classes | 12.05 | #3

