Finding Action Tubes

CVPR 2015  ·  Georgia Gkioxari, Jitendra Malik

We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.
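The per-frame detections are linked over time so that the resulting boxes form a temporally consistent action tube. As a rough illustration of how such linking can be done, the sketch below chains one detection per frame with dynamic programming, trading off per-frame action scores against spatial overlap between boxes in consecutive frames. The function names, the IoU-based link score, and the weight `lam` are assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_action_tube(boxes_per_frame, scores_per_frame, lam=1.0):
    """
    Link per-frame detections into one temporally consistent tube.

    boxes_per_frame:  list of (N_t, 4) arrays of candidate boxes per frame.
    scores_per_frame: list of (N_t,) arrays of action scores per frame.
    lam:              assumed weight between score and spatial overlap.

    Returns one selected box index per frame, chosen by dynamic
    programming to maximize the sum of per-frame scores plus
    lam * IoU between boxes in consecutive frames.
    """
    T = len(boxes_per_frame)
    # dp[t][i]: best accumulated link score ending at box i of frame t
    dp = [scores_per_frame[0].astype(float)]
    back = []
    for t in range(1, T):
        prev_dp, cur_scores = dp[-1], scores_per_frame[t]
        cur_dp = np.full(len(cur_scores), -np.inf)
        cur_back = np.zeros(len(cur_scores), dtype=int)
        for i in range(len(cur_scores)):
            for j in range(len(prev_dp)):
                ov = iou(boxes_per_frame[t][i], boxes_per_frame[t - 1][j])
                cand = prev_dp[j] + cur_scores[i] + lam * ov
                if cand > cur_dp[i]:
                    cur_dp[i], cur_back[i] = cand, j
        dp.append(cur_dp)
        back.append(cur_back)
    # Backtrack the highest-scoring path through the frames.
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 2, -1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In this sketch, a greedy per-frame argmax would ignore temporal continuity; the dynamic program instead favors boxes that both score highly and overlap their neighbors, which is the property that makes the linked boxes behave like a tube.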


Results from the Paper


Task                               Dataset     Model         Metric               Value   Global Rank
Action Detection                   J-HMDB      Action Tubes  Video-mAP 0.5        53.3    #16
Action Detection                   J-HMDB      Action Tubes  Frame-mAP 0.5        36.2    #12
Skeleton Based Action Recognition  J-HMDB      Action Tubes  Accuracy (RGB+pose)  62.5    #10
Action Detection                   UCF Sports  Action Tubes  Video-mAP 0.5        75.8    #6
Action Detection                   UCF Sports  Action Tubes  Frame-mAP 0.5        68.1    #4

Methods


No methods listed for this paper.