Temporal Action Segmentation from Timestamp Supervision

CVPR 2021 · Zhe Li, Yazan Abu Farha, Juergen Gall

Temporal action segmentation approaches have been very successful recently. However, annotating videos with frame-wise labels to train such models is very expensive and time-consuming. While weakly supervised methods trained using only ordered action lists require less annotation effort, their performance is still worse than that of fully supervised approaches. In this paper, we propose to use timestamp supervision for the temporal action segmentation task. Timestamps require an annotation effort comparable to that of weakly supervised approaches, yet provide a stronger supervisory signal. To demonstrate the effectiveness of timestamp supervision, we propose an approach to train a segmentation model using only timestamp annotations. Our approach uses the model output and the annotated timestamps to generate frame-wise labels by detecting the action changes. We further introduce a confidence loss that forces the predicted probabilities to monotonically decrease as the distance to the timestamps increases. This ensures that all frames of an action, and not only the most distinctive ones, are learned during training. The evaluation on four datasets shows that models trained with timestamp annotations achieve performance comparable to fully supervised approaches.
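The two mechanisms described above can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal PyTorch illustration that assumes a per-video interface, hypothetical function names (generate_frame_labels, confidence_loss), and a fixed window around each timestamp in place of the estimated action boundaries.

```python
import torch

def generate_frame_labels(probs, timestamps, labels):
    """Turn timestamp annotations into frame-wise pseudo-labels by placing one
    change point between each pair of consecutive timestamps. The change point
    is chosen to maximize the model's own log-probabilities (a simplification
    of the paper's action-change detection, not the authors' code).

    probs:      (T, C) frame-wise class probabilities for one video
    timestamps: sorted annotated frame indices, one per action instance
    labels:     class index annotated at each timestamp
    """
    T = probs.shape[0]
    log_p = torch.log(probs + 1e-8)
    frame_labels = torch.full((T,), labels[0], dtype=torch.long)

    for i in range(len(timestamps) - 1):
        t_a, t_b = timestamps[i], timestamps[i + 1]
        c_a, c_b = labels[i], labels[i + 1]
        # evaluate every candidate boundary b between the two timestamps
        best_b, best_score = t_a + 1, -float("inf")
        for b in range(t_a + 1, t_b + 1):
            score = log_p[t_a:b, c_a].sum() + log_p[b:t_b + 1, c_b].sum()
            if score > best_score:
                best_b, best_score = b, score.item()
        frame_labels[t_a:best_b] = c_a
        frame_labels[best_b:] = c_b   # tail is overwritten by later segments

    return frame_labels

def confidence_loss(probs, timestamp, label, window=16):
    """Hinge-style monotonicity penalty: the probability of the annotated class
    should not increase as frames get farther from the timestamp. The fixed
    window size is an assumption made here for illustration.
    """
    p = probs[:, label]            # (T,) probability of the annotated class
    T = p.shape[0]
    loss = p.new_zeros(())

    # frames to the right of the timestamp: each frame's probability
    # should not exceed that of the frame closer to the timestamp
    right = p[timestamp:min(T, timestamp + window + 1)]
    if right.numel() > 1:
        loss = loss + torch.clamp(right[1:] - right[:-1], min=0).sum()

    # frames to the left of the timestamp (flip so we walk outward)
    left = p[max(0, timestamp - window):timestamp + 1].flip(0)
    if left.numel() > 1:
        loss = loss + torch.clamp(left[1:] - left[:-1], min=0).sum()

    return loss
```

In a full training loop, the pseudo-labels produced by the first function would feed a standard frame-wise cross-entropy loss, with the monotonicity penalty added as a weighted extra term; the exact weighting and boundary estimation follow the paper, not this sketch.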


Datasets

GTEA · BEOID

Results from the Paper

Task                                   Dataset  Model      Metric Name  Metric Value  Global Rank
Weakly Supervised Action Localization  BEOID    Li et al.  mAP@0.1:0.7  34.4          # 4
Weakly Supervised Action Localization  BEOID    Li et al.  mAP@0.5      20.3          # 4
Weakly Supervised Action Localization  GTEA     Li et al.  mAP@0.1:0.7  36.4          # 4
Weakly Supervised Action Localization  GTEA     Li et al.  mAP@0.5      28.8          # 4

Methods


No methods listed for this paper.