UntrimmedNets for Weakly Supervised Action Recognition and Detection

CVPR 2017  ·  Limin Wang, Yuanjun Xiong, Dahua Lin, Luc Van Gool

Current action recognition methods rely heavily on trimmed videos for model training. However, acquiring a large-scale trimmed video dataset is expensive and time-consuming. This paper presents a new weakly supervised architecture, called UntrimmedNet, which directly learns action recognition models from untrimmed videos without requiring temporal annotations of action instances. UntrimmedNet couples two components: a classification module, which learns the action models, and a selection module, which reasons about the temporal duration of action instances. Both components are implemented as feed-forward networks, making UntrimmedNet an end-to-end trainable architecture. We apply the learned models to weakly supervised action recognition (WSR) and detection (WSD) on the untrimmed video datasets THUMOS14 and ActivityNet. Although UntrimmedNet uses only weak supervision, it achieves performance superior or comparable to that of strongly supervised approaches on both datasets.
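The coupling described above can be sketched as follows: per-clip classification scores are aggregated into a video-level prediction using attention weights produced by the selection module (the paper's "soft selection" variant). This is a minimal NumPy sketch under stated assumptions: the single-layer linear modules, function names, and shapes are illustrative, not the authors' actual network definitions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def untrimmednet_forward(clip_feats, W_cls, w_sel):
    """Weakly supervised forward pass over N clip proposals (soft selection).

    clip_feats: (N, D) features, one row per clip sampled from the untrimmed video.
    W_cls:      (D, C) classification-module weights -> per-clip class scores.
    w_sel:      (D,)   selection-module weights -> per-clip importance score.
    """
    clip_scores = clip_feats @ W_cls              # (N, C) per-clip class scores
    attn = softmax(clip_feats @ w_sel, axis=0)    # (N,) attention over clips
    video_scores = (attn[:, None] * clip_scores).sum(axis=0)  # (C,) aggregated
    return softmax(video_scores), attn            # video prediction + clip weights
```

Training needs only the video-level label (via the returned class probabilities), while the per-clip attention weights provide the temporal localization signal used for detection.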

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Classification | ActivityNet-1.2 | UntrimmedNets | mAP | 87.7 | #3 |
| Action Classification | THUMOS'14 | UntrimmedNets | mAP | 82.2 | #3 |
| Weakly Supervised Action Localization | THUMOS'14 | UntrimmedNets | mAP@0.5 | 13.7 | #21 |
| Weakly Supervised Action Localization | THUMOS'14 | UntrimmedNets | mAP@0.1:0.7 | - | #15 |
