Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation

Despite recent progress in fully-supervised action segmentation techniques, performance is still not fully satisfactory. One main challenge is spatio-temporal variation (e.g., different people may perform the same activity in various ways). We therefore exploit unlabeled videos to address this problem, reformulating the action segmentation task as a cross-domain problem whose domain discrepancy is caused by spatio-temporal variations. To reduce this discrepancy, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), which contains two self-supervised auxiliary tasks (binary and sequential domain prediction) that jointly align cross-domain feature spaces embedded with local and global temporal dynamics, achieving better performance than other Domain Adaptation (DA) approaches. On three challenging benchmark datasets (GTEA, 50Salads, and Breakfast), SSTDA outperforms the current state-of-the-art method by large margins (e.g., for the F1@25 score, from 59.6% to 69.1% on Breakfast, from 73.4% to 81.5% on 50Salads, and from 83.6% to 89.1% on GTEA), and requires only 65% of the labeled training data for comparable performance, demonstrating the usefulness of adapting to unlabeled target videos across variations. The source code is available at
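The two auxiliary tasks can be illustrated with a minimal pure-Python sketch. This is an assumption-laden simplification, not the paper's implementation: it uses a hypothetical logistic discriminator for both losses, the "sequential" task is reduced to per-position domain prediction on a shuffled mix of pooled segments (the paper predicts the whole domain permutation), and the adversarial part (reversing gradients into the feature extractor) is omitted.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def binary_domain_loss(frames, domains, w, b):
    # Local task: a logistic discriminator guesses, per frame-level
    # feature, whether it came from source (0) or target (1) video.
    eps = 1e-9
    total = 0.0
    for x, y in zip(frames, domains):
        p = sigmoid(dot(w, x) + b)
        total += -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total / len(frames)

def pool(segment):
    # Average-pool frame features into one segment-level feature,
    # capturing longer-range (global) temporal context.
    d = len(segment[0])
    return [sum(f[i] for f in segment) / len(segment) for i in range(d)]

def sequential_domain_loss(src_segments, tgt_segments, w, b, rng):
    # Global task (simplified): shuffle pooled segments from both
    # domains, then predict the domain at each position of the
    # shuffled sequence.  The full method classifies the entire
    # domain permutation; per-position prediction stands in here.
    pooled = [(pool(s), 0) for s in src_segments] + \
             [(pool(s), 1) for s in tgt_segments]
    rng.shuffle(pooled)
    feats = [f for f, _ in pooled]
    doms = [d for _, d in pooled]
    return binary_domain_loss(feats, doms, w, b)

# Toy data: 2-D frame features with a small mean shift between domains,
# standing in for the spatio-temporal variation across people/videos.
rng = random.Random(0)
src = [[rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)] for _ in range(8)]
tgt = [[rng.gauss(0.5, 1.0), rng.gauss(0.5, 1.0)] for _ in range(8)]

w, b = [0.1, -0.2], 0.0
l_local = binary_domain_loss(src + tgt, [0] * 8 + [1] * 8, w, b)
l_global = sequential_domain_loss([src[:4], src[4:]],
                                  [tgt[:4], tgt[4:]], w, b, rng)
print(l_local, l_global)
```

In the actual adversarial setup these discriminator losses would be minimized by the domain classifiers while the segmentation backbone maximizes them (e.g., through a gradient reversal layer), so the learned frame- and video-level features become domain-invariant.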

CVPR 2020

Results from the Paper

Task: Action Segmentation · Model: SSTDA

| Dataset   | F1@10 | F1@25 | F1@50 | Edit | Acc  |
|-----------|-------|-------|-------|------|------|
| 50 Salads | 83.0  | 81.5  | 73.8  | 75.8 | 83.2 |
| Breakfast | 75.0  | 69.1  | 55.2  | 73.7 | 70.2 |
| GTEA      | 90.0  | 89.1  | 78.0  | 86.2 | 79.8 |