MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation

CVPR 2019 · Yazan Abu Farha, Juergen Gall

Temporally locating and classifying action segments in long untrimmed videos is of particular interest to many applications, such as surveillance and robotics. While traditional approaches follow a two-step pipeline that first generates frame-wise probabilities and then feeds them to high-level temporal models, recent approaches use temporal convolutions to directly classify the video frames. In this paper, we introduce a multi-stage architecture for the temporal action segmentation task. Each stage features a set of dilated temporal convolutions to generate an initial prediction that is refined by the next stage. This architecture is trained using a combination of a classification loss and a proposed smoothing loss that penalizes over-segmentation errors. Extensive evaluation shows the effectiveness of the proposed model in capturing long-range dependencies and recognizing action segments. Our model achieves state-of-the-art results on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset.
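
The abstract's description maps to a compact model: each stage is a stack of dilated residual 1D convolutions whose dilation doubles with depth, and every stage after the first re-classifies the softmax output of the stage before it. Below is a minimal PyTorch sketch of that design and of the combined loss (frame-wise cross-entropy plus a truncated MSE over adjacent-frame log-probabilities, which penalizes over-segmentation). The class and function names (`DilatedResidualLayer`, `SingleStageTCN`, `MultiStageTCN`, `ms_tcn_loss`) are illustrative, not the authors' reference code; the defaults (4 stages of 10 layers with 64 channels, τ = 4, λ = 0.15) follow the settings reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedResidualLayer(nn.Module):
    """Dilated temporal convolution followed by a 1x1 conv, with a residual add."""
    def __init__(self, dilation, channels):
        super().__init__()
        # padding == dilation keeps the temporal length T unchanged (kernel 3)
        self.conv_dilated = nn.Conv1d(channels, channels, kernel_size=3,
                                      padding=dilation, dilation=dilation)
        self.conv_1x1 = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):
        out = F.relu(self.conv_dilated(x))
        return x + self.conv_1x1(out)


class SingleStageTCN(nn.Module):
    """One stage: 1x1 input projection, dilated layers (dilation 2^l), 1x1 classifier."""
    def __init__(self, num_layers, channels, in_dim, num_classes):
        super().__init__()
        self.conv_in = nn.Conv1d(in_dim, channels, kernel_size=1)
        self.layers = nn.ModuleList(
            [DilatedResidualLayer(2 ** l, channels) for l in range(num_layers)])
        self.conv_out = nn.Conv1d(channels, num_classes, kernel_size=1)

    def forward(self, x):
        x = self.conv_in(x)
        for layer in self.layers:
            x = layer(x)
        return self.conv_out(x)


class MultiStageTCN(nn.Module):
    """Stack of stages; each later stage refines the previous stage's softmax."""
    def __init__(self, num_stages=4, num_layers=10, channels=64,
                 in_dim=2048, num_classes=48):
        super().__init__()
        self.stage1 = SingleStageTCN(num_layers, channels, in_dim, num_classes)
        self.refiners = nn.ModuleList(
            [SingleStageTCN(num_layers, channels, num_classes, num_classes)
             for _ in range(num_stages - 1)])

    def forward(self, x):          # x: (N, in_dim, T) frame-wise features
        outputs = [self.stage1(x)]
        for stage in self.refiners:
            outputs.append(stage(F.softmax(outputs[-1], dim=1)))
        return outputs             # per-stage logits, each of shape (N, C, T)


def ms_tcn_loss(outputs, targets, tau=4.0, lam=0.15):
    """Sum over stages of cross-entropy plus the truncated-MSE smoothing term."""
    total = 0.0
    for logits in outputs:
        # frame-wise classification loss: logits (N, C, T), targets (N, T)
        total = total + F.cross_entropy(logits, targets)
        # smoothing loss: squared change in log-probabilities between adjacent
        # frames, truncated at tau^2; the previous frame is detached so the
        # gradient flows only through the current frame
        log_p = F.log_softmax(logits, dim=1)
        delta = log_p[:, :, 1:] - log_p[:, :, :-1].detach()
        total = total + lam * torch.clamp(delta ** 2, max=tau ** 2).mean()
    return total
```

At inference time only the final stage's output is used: the frame-wise labels are `outputs[-1].argmax(dim=1)`. Supervising every stage, not just the last, is what lets each refinement stage learn to clean up the over-segmentation errors of its predecessor.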

Results from the Paper


Metrics are segmental F1 scores at overlap thresholds of 10%, 25%, and 50% (F1@k), the segmental edit score (Edit), and frame-wise accuracy (Acc).

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Action Segmentation | 50 Salads | MS-TCN | F1@10% | 76.3 | #25 |
| Action Segmentation | 50 Salads | MS-TCN | F1@25% | 74.0 | #25 |
| Action Segmentation | 50 Salads | MS-TCN | F1@50% | 64.5 | #25 |
| Action Segmentation | 50 Salads | MS-TCN | Edit | 67.9 | #25 |
| Action Segmentation | 50 Salads | MS-TCN | Acc | 80.7 | #23 |
| Action Segmentation | Breakfast | MS-TCN (IDT) | F1@10% | 58.2 | #26 |
| Action Segmentation | Breakfast | MS-TCN (IDT) | F1@25% | 52.9 | #26 |
| Action Segmentation | Breakfast | MS-TCN (IDT) | F1@50% | 40.8 | #26 |
| Action Segmentation | Breakfast | MS-TCN (IDT) | Edit | 61.4 | #27 |
| Action Segmentation | Breakfast | MS-TCN (IDT) | Acc | 65.1 | #27 |
| Action Segmentation | Breakfast | MS-TCN (I3D) | F1@10% | 52.6 | #27 |
| Action Segmentation | Breakfast | MS-TCN (I3D) | F1@25% | 48.1 | #27 |
| Action Segmentation | Breakfast | MS-TCN (I3D) | F1@50% | 37.9 | #27 |
| Action Segmentation | Breakfast | MS-TCN (I3D) | Edit | 61.7 | #26 |
| Action Segmentation | Breakfast | MS-TCN (I3D) | Acc | 66.3 | #26 |
| Action Segmentation | GTEA | MS-TCN | F1@10% | 87.5 | #22 |
| Action Segmentation | GTEA | MS-TCN | F1@25% | 85.4 | #22 |
| Action Segmentation | GTEA | MS-TCN | F1@50% | 74.6 | #20 |
| Action Segmentation | GTEA | MS-TCN | Edit | 81.4 | #22 |
| Action Segmentation | GTEA | MS-TCN | Acc | 79.2 | #16 |
