Multi-granularity Generator for Temporal Action Proposal

CVPR 2019  ·  Yuan Liu, Lin Ma, Yifeng Zhang, Wei Liu, Shih-Fu Chang

Temporal action proposal generation is an important task that aims to localize the video segments containing human actions in an untrimmed video. In this paper, we propose a multi-granularity generator (MGG) that performs temporal action proposal from different granularity perspectives, relying on video visual features augmented with position embedding information. First, we propose a bilinear matching model to exploit the rich local information within the video sequence. Two components, namely a segment proposal producer (SPP) and a frame actionness producer (FAP), are then combined to perform temporal action proposal at two distinct granularities: SPP considers the whole video as a feature pyramid and generates segment proposals from a coarse perspective, while FAP carries out a finer actionness evaluation for each video frame. The proposed MGG can be trained in an end-to-end fashion. By temporally adjusting the segment proposals with fine-grained frame actionness information, MGG achieves superior performance over state-of-the-art methods on the public THUMOS-14 and ActivityNet-1.3 datasets. Moreover, we employ existing action classifiers to classify the proposals generated by MGG, leading to significant improvements over competing methods on the temporal action detection task.
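The abstract's key idea is fusing the two granularities: coarse segment proposals from SPP are temporally adjusted using FAP's per-frame actionness scores. The paper's exact adjustment procedure is not spelled out here, so the following is only a plausible sketch under a simple assumption: each proposal boundary is snapped to the nearest nearby frame whose actionness exceeds a threshold. The function name `refine_proposals` and the `window`/`thresh` parameters are hypothetical, not from the paper.

```python
import numpy as np

def refine_proposals(proposals, actionness, window=5, thresh=0.5):
    """Adjust coarse segment boundaries with fine-grained frame actionness.

    Hypothetical sketch (not the paper's exact scheme): snap each boundary
    of a (start, end) proposal to the closest frame within `window` frames
    whose actionness score is at least `thresh`; keep the boundary as-is if
    no such frame exists, and drop proposals that collapse to zero length.
    """
    n = len(actionness)
    refined = []
    for start, end in proposals:
        def snap(b):
            # Candidate frames around the boundary with high actionness.
            cands = [f for f in range(max(0, b - window), min(n, b + window + 1))
                     if actionness[f] >= thresh]
            # Choose the candidate closest to the original boundary.
            return min(cands, key=lambda f: abs(f - b)) if cands else b
        s, e = snap(start), snap(end)
        if s < e:
            refined.append((s, e))
    return refined
```

For example, a coarse proposal (2, 16) over a sequence whose frames 4–14 are confidently "action" would be tightened to (4, 14), illustrating how frame-level scores sharpen segment-level boundaries.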


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Temporal Action Proposal Generation | ActivityNet-1.3 | MGG | AUC (val) | 66.43 | #4 |
| Temporal Action Proposal Generation | ActivityNet-1.3 | MGG | AR@100 | 74.54 | #4 |
| Action Recognition | THUMOS'14 | MGG UNet | mAP@0.3 | 53.9 | #2 |
| Action Recognition | THUMOS'14 | MGG UNet | mAP@0.4 | 46.8 | #2 |
| Action Recognition | THUMOS'14 | MGG UNet | mAP@0.5 | 37.4 | #2 |

Methods


No methods listed for this paper.