Detect-and-Track: Efficient Pose Estimation in Videos

This paper addresses the problem of estimating and tracking human body keypoints in complex, multi-person video. We propose an extremely lightweight yet highly effective approach that builds upon the latest advancements in human detection and video understanding. Our method operates in two stages: keypoint estimation in frames or short clips, followed by lightweight tracking to generate keypoint predictions linked over the entire video. For frame-level pose estimation we experiment with Mask R-CNN, as well as our own proposed 3D extension of this model, which leverages temporal information over small clips to generate more robust frame predictions. We conduct extensive ablative experiments on the newly released multi-person video pose estimation benchmark, PoseTrack, to validate various design choices of our model. Our approach achieves an accuracy of 55.2% on the validation set and 51.8% on the test set using the Multi-Object Tracking Accuracy (MOTA) metric, and achieves state-of-the-art performance on the ICCV 2017 PoseTrack keypoint tracking challenge.
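To make the second stage concrete, below is a minimal sketch of the kind of lightweight tracker the abstract describes: per-frame person detections are linked across consecutive frames by bipartite matching on bounding-box IoU. This is an illustrative assumption, not the paper's exact method; the function names (`iou`, `link_frames`), the Hungarian solver, and the `iou_thresh` value are all hypothetical choices, and the paper's actual matching costs may differ.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        # Intersection-over-union of two boxes given as [x1, y1, x2, y2].
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def link_frames(prev_boxes, prev_ids, cur_boxes, next_id, iou_thresh=0.3):
        # Assign track IDs to current-frame boxes by Hungarian matching on
        # (1 - IoU) cost against the previous frame's boxes.
        cur_ids = [-1] * len(cur_boxes)
        if prev_boxes and cur_boxes:
            cost = np.array([[1.0 - iou(p, c) for c in cur_boxes]
                             for p in prev_boxes])
            rows, cols = linear_sum_assignment(cost)
            for r, c in zip(rows, cols):
                if 1.0 - cost[r, c] >= iou_thresh:  # keep only confident matches
                    cur_ids[c] = prev_ids[r]
        for j in range(len(cur_boxes)):             # unmatched boxes start new tracks
            if cur_ids[j] == -1:
                cur_ids[j] = next_id
                next_id += 1
        return cur_ids, next_id

Running `link_frames` frame by frame over a video propagates stable track IDs forward in time; because matching is only between adjacent frames on precomputed detections, the tracking stage adds negligible cost on top of per-frame pose estimation, which is what makes the two-stage design lightweight.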


Results from the Paper


Ranked #8 on Pose Tracking on PoseTrack2017 (using extra training data)

Task                 Dataset               Model            Metric  Value   Global Rank  Uses Extra Training Data
Keypoint Detection   COCO test-challenge   Girdhar et al.   AR      70.2    #7           –
Keypoint Detection   COCO test-challenge   Girdhar et al.   ARM     60.7    #8           –
Pose Tracking        PoseTrack2017         ProTracker       MOTA    51.82   #8           Yes
Pose Tracking        PoseTrack2017         ProTracker       mAP     59.56   #9           Yes
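
For reference, the MOTA values above follow the standard CLEAR MOT definition (this definition comes from the MOT literature, not from this page): missed detections (false negatives), false positives, and identity switches are accumulated over all frames t and normalized by the total number of ground-truth targets, so a perfect tracker scores 1 (100%):

    \[
    \text{MOTA} = 1 - \frac{\sum_t \left( \text{FN}_t + \text{FP}_t + \text{IDSW}_t \right)}{\sum_t \text{GT}_t}
    \]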
