Detailed 2D-3D Joint Representation for Human-Object Interaction

Human-Object Interaction (HOI) detection lies at the core of action understanding. Besides 2D information such as human/object appearance and locations, 3D pose is also commonly utilized in HOI learning because of its view independence. However, coarse 3D body joints carry only sparse body information and are insufficient for understanding complex interactions; detailed 3D body shape is needed to go further. Meanwhile, the interacted object in 3D has not been fully studied in HOI learning. In light of this, we propose a detailed 2D-3D joint representation learning method. First, we use a single-view human body capture method to obtain detailed 3D body, face, and hand shapes. Next, we estimate the 3D object location and size with reference to the 2D human-object spatial configuration and object category priors. Finally, we propose a joint learning framework with cross-modal consistency tasks to learn the joint HOI representation. To better evaluate models' capacity to resolve 2D ambiguity, we propose a new benchmark named Ambiguous-HOI, consisting of hard, ambiguous images. Extensive experiments on a large-scale HOI benchmark and on Ambiguous-HOI show the impressive effectiveness of our method. Code and data are available at
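
The second step, recovering a 3D object location and size from the 2D box and a category size prior, can be illustrated with a simple pinhole-camera back-projection: an assumed real-world object size plus the box's pixel extent fixes the depth, and the box center then back-projects to a 3D position. The sketch below is a minimal illustration of that idea, assuming a known focal length and principal point; `estimate_object_3d`, its parameters, and the example numbers are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def estimate_object_3d(box_2d, prior_size, focal, principal_point):
    """Back-project a 2D object box to a 3D sphere (center + radius).

    box_2d:          (x1, y1, x2, y2) in pixels
    prior_size:      assumed real-world object diameter in meters
                     (category prior, e.g. ~0.24 for a ball)
    focal:           camera focal length in pixels
    principal_point: (cx, cy) in pixels
    """
    x1, y1, x2, y2 = box_2d
    cx, cy = principal_point

    # Pixel extent of the box; take the larger side as the apparent diameter.
    extent_px = max(x2 - x1, y2 - y1)

    # Pinhole model: apparent size = focal * real size / depth,
    # so depth = focal * real size / apparent size.
    depth = focal * prior_size / extent_px

    # Back-project the box center to a 3D point at that depth.
    u = 0.5 * (x1 + x2)
    v = 0.5 * (y1 + y2)
    center = np.array([(u - cx) * depth / focal,
                       (v - cy) * depth / focal,
                       depth])
    return center, prior_size / 2.0  # 3D center and sphere radius

# Example: a ~0.24 m ball whose box is 80 px wide under a 1000 px focal length.
center, radius = estimate_object_3d((600, 400, 680, 480), 0.24, 1000.0, (640, 360))
print(center, radius)  # depth comes out to 3.0 m
```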




Task                                 Dataset        Model   Metric   Value   Global Rank
Human-Object Interaction Detection   Ambiguous-HOI  DJ-RN   mAP      10.37   #1
Human-Object Interaction Detection   HICO-DET       DJ-RN   mAP      21.34   #43

