
AID: Pushing the Performance Boundary of Human Pose Estimation with Information Dropping Augmentation

Both appearance cues and constraint cues are vital for human pose estimation. However, most existing works tend to overfit the former and overlook the latter. In this paper, we propose Augmentation by Information Dropping (AID) to verify and tackle this dilemma. Along with AID, and as a prerequisite for effectively exploiting its potential, we propose customized training schedules, designed by analyzing the patterns of loss and performance during training from the perspective of information supply. In experiments, as a model-agnostic approach, AID improves various state-of-the-art methods in both bottom-up and top-down paradigms across different input sizes, frameworks, backbones, and training and testing sets. On the popular COCO human pose estimation test set, AID consistently boosts the performance of different configurations by around 0.6 AP in the top-down paradigm and up to 1.5 AP in the bottom-up paradigm. On the more challenging CrowdPose dataset, the improvement exceeds 1.5 AP. As AID pushes the performance boundary of human pose estimation by a considerable margin and sets a new state of the art, we hope AID will become a regular configuration for training human pose estimators. The source code will be publicly available for further research.
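The abstract does not spell out the exact dropping operator, so the sketch below only illustrates the general idea of information-dropping augmentation with a Cutout-style random mask: occluding local appearance so the network must lean on constraint (context) cues. The function name `information_dropping` and parameters such as `max_patches` and `patch_size` are hypothetical and not taken from the paper.

```python
import numpy as np

def information_dropping(image, max_patches=1, patch_size=40, fill=0, rng=None):
    """Cutout-style information dropping (illustrative, not the paper's exact scheme).

    Zeroes out random square patches of the input image so that local
    appearance cues are occasionally removed during training, encouraging
    the pose estimator to also exploit constraint/context cues.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    for _ in range(max_patches):
        # Pick a random patch center and clip the patch to the image bounds.
        cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
        y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
        x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
        out[y0:y1, x0:x1] = fill
    return out

# Usage sketch: apply the augmentation to a cropped person image before training.
if __name__ == "__main__":
    img = np.random.rand(256, 192, 3).astype(np.float32)  # dummy top-down crop
    augmented = information_dropping(img, max_patches=2, patch_size=48)
    print(augmented.shape)
```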
