Detecting Human-Object Interactions with Object-Guided Cross-Modal Calibrated Semantics

1 Feb 2022  ·  Hangjie Yuan, Mang Wang, Dong Ni, Liangpeng Xu ·

Human-Object Interaction (HOI) detection is an essential task for understanding human-centric images from a fine-grained perspective. Although end-to-end HOI detection models thrive, their paradigm of parallel human/object detection and verb class prediction loses the two-stage methods' merit: the object-guided hierarchy. The object in an HOI triplet gives direct clues to the verb to be predicted. In this paper, we aim to boost end-to-end models with object-guided statistical priors. Specifically, we propose to utilize a Verb Semantic Model (VSM) and use semantic aggregation to profit from this object-guided hierarchy. A Similarity KL (SKL) loss is proposed to optimize the VSM to align with the HOI dataset's priors. To overcome the static semantic embedding problem, we propose to generate cross-modality-aware visual and semantic features via Cross-Modal Calibration (CMC). Combined, the above modules constitute the Object-guided Cross-modal Calibration Network (OCN). Experiments conducted on two popular HOI detection benchmarks demonstrate the significance of incorporating this statistical prior knowledge and produce state-of-the-art performance. Further analysis indicates that the proposed modules serve as a stronger verb predictor and a superior way of utilizing prior knowledge. The code is available at \url{https://github.com/JacobYuan7/OCN-HOI-Benchmark}.
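To make the object-guided prior idea concrete, below is a minimal sketch (not the authors' implementation) of one plausible reading of the SKL loss: a similarity-derived distribution over verbs is matched against the dataset's object-conditional verb prior via KL divergence. All names, shapes, and the temperature here (`verb_emb`, `obj_emb`, `verb_prior`, `temperature`) are hypothetical illustration choices; consult the linked repository for the actual formulation.

```python
# Hypothetical sketch of a Similarity KL (SKL)-style loss in PyTorch.
import torch
import torch.nn.functional as F

def skl_loss(verb_emb, obj_emb, verb_prior, temperature=0.1):
    """
    verb_emb:   (num_verbs, d)     learnable verb semantic embeddings (the "VSM")
    obj_emb:    (batch, d)         semantic embeddings of the detected object classes
    verb_prior: (batch, num_verbs) empirical P(verb | object) from dataset statistics
    """
    # Cosine similarity between each object embedding and every verb embedding.
    sim = F.normalize(obj_emb, dim=-1) @ F.normalize(verb_emb, dim=-1).t()
    # Turn similarities into a predicted distribution over verbs.
    log_pred = F.log_softmax(sim / temperature, dim=-1)
    # KL divergence pushes the similarity structure toward the dataset's
    # object-conditional verb prior.
    return F.kl_div(log_pred, verb_prior, reduction='batchmean')
```

Under this reading, an object class such as "bicycle" would pull the verb embeddings for "ride" and "push" closer to it than, say, "eat", encoding the object-guided hierarchy directly into the semantic space.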

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Human-Object Interaction Detection | HICO-DET | OCN (ResNet101) | mAP | 31.43 | #24 |
| Human-Object Interaction Detection | V-COCO | OCN (ResNet101) | AP(S1) | 65.3 | #5 |
| Human-Object Interaction Detection | V-COCO | OCN (ResNet101) | AP(S2) | 67.1 | #5 |
| Human-Object Interaction Detection | V-COCO | OCN (ResNet50) | AP(S1) | 64.2 | #6 |
| Human-Object Interaction Detection | V-COCO | OCN (ResNet50) | Time Per Frame (ms) | 43 | #1 |
| Human-Object Interaction Detection | V-COCO | OCN (ResNet50) | AP(S2) | 66.3 | #7 |
