You Only Learn One Query: Learning Unified Human Query for Single-Stage Multi-Person Multi-Task Human-Centric Perception

9 Dec 2023 · Sheng Jin, Shuhuai Li, Tong Li, Wentao Liu, Chen Qian, Ping Luo

Human-centric perception (e.g., detection, segmentation, pose estimation, and attribute analysis) is a long-standing problem in computer vision. This paper introduces HQNet, a unified and versatile framework for single-stage multi-person multi-task human-centric perception (HCP). Our approach centers on learning a unified human query representation, denoted as Human Query, which captures intricate instance-level features for individual persons and disentangles complex multi-person scenes. Although individual HCP tasks have been well studied, single-stage multi-task learning of HCP tasks has not been fully explored in the literature, owing to the absence of a comprehensive benchmark dataset. To address this gap, we propose the COCO-UniHuman benchmark to enable model development and comprehensive evaluation. Experimental results demonstrate the proposed method's state-of-the-art performance among multi-task HCP models and its competitive performance compared to task-specific HCP models. Moreover, our experiments underscore Human Query's adaptability to new HCP tasks, demonstrating its robust generalization capability. Code and data are available at https://github.com/lishuhuai527/COCO-UniHuman.
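The core idea, a shared per-person query feeding several task heads, can be illustrated with a minimal DETR-style sketch. This is not the authors' implementation (see the linked repository for that); all module names, head choices, and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnifiedHumanQuerySketch(nn.Module):
    """Minimal sketch: one learnable query per candidate person,
    decoded against image features, then read by multiple HCP heads.
    Names and sizes are hypothetical, not from the paper's code."""

    def __init__(self, num_queries=100, dim=256, num_keypoints=17, num_attrs=2):
        super().__init__()
        # One learnable embedding per candidate person (the "Human Query").
        self.queries = nn.Embedding(num_queries, dim)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=6,
        )
        # Each HCP task reads the same decoded query feature.
        self.box_head = nn.Linear(dim, 4)                  # detection (box params)
        self.kpt_head = nn.Linear(dim, num_keypoints * 2)  # pose (x, y per joint)
        self.attr_head = nn.Linear(dim, num_attrs)         # attribute logits
        self.mask_proj = nn.Linear(dim, dim)               # segmentation embedding

    def forward(self, feats):
        # feats: (B, HW, dim) flattened backbone features.
        b = feats.size(0)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)
        q = self.decoder(q, feats)  # (B, num_queries, dim)
        # Per-query mask logits via dot product with pixel features.
        masks = torch.einsum("bqd,bpd->bqp", self.mask_proj(q), feats)
        return {
            "boxes": self.box_head(q).sigmoid(),
            "keypoints": self.kpt_head(q),
            "attrs": self.attr_head(q),
            "mask_logits": masks,  # reshape to (B, Q, H, W) downstream
        }
```

Because every head consumes the same query feature, adding a new HCP task amounts to attaching another lightweight head, which is one plausible reading of the generalization claim above.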

Task                         Dataset  Model              Metric   Value  Global Rank
Human Instance Segmentation  OCHuman  HQNet (ResNet-50)  AP       31.1   #3
Pose Estimation              OCHuman  HQNet (ViT-L)      Test AP  45.6   #6
Pose Estimation              OCHuman  HQNet (ResNet-50)  Test AP  40.0   #10
