Action Segmentation with Mixed Temporal Domain Adaptation

15 Apr 2021 · Min-Hung Chen, Baopu Li, Yingze Bao, Ghassan AlRegib

Recent progress in action segmentation has come largely from densely annotated data for fully supervised learning. Since manual frame-level action annotation is time-consuming and challenging, we propose to exploit auxiliary unlabeled videos, which are much easier to obtain, by casting the problem as a domain adaptation (DA) problem. Although various DA techniques have been proposed in recent years, most of them align only spatial features and ignore the temporal dimension. Therefore, we propose Mixed Temporal Domain Adaptation (MTDA) to jointly align frame- and video-level embedded feature spaces across domains, and further integrate a domain attention mechanism that focuses alignment on the frame-level features with higher domain discrepancy, leading to more effective domain adaptation. Finally, we evaluate our proposed methods on three challenging datasets (GTEA, 50Salads, and Breakfast), and show that MTDA outperforms the current state-of-the-art methods on all three datasets by large margins (e.g., a 6.4% gain on F1@50 and a 6.8% gain on the edit score for GTEA).
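To illustrate the domain attention idea sketched in the abstract, the snippet below shows one plausible residual-attention pooling step in NumPy: frames whose binary domain classifier is confident (low prediction entropy) are treated as having higher domain discrepancy and are weighted up before aggregating into a video-level feature. This is a minimal sketch in the spirit of the paper's description (and the residual attention used in the authors' related work); the function names, weighting form w = 1 - H(p), and toy inputs are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def domain_entropy(p, eps=1e-8):
    """Entropy of a binary domain prediction p in [0, 1] (natural log)."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def domain_attention_pool(frame_feats, domain_probs):
    """Aggregate frame-level features into one video-level feature.

    Frames whose domain classifier is confident (low entropy) are assumed
    to carry higher domain discrepancy, so they receive larger residual
    attention weights: w = 1 - H(p), attended = (1 + w) * feature.
    """
    w = 1.0 - domain_entropy(domain_probs)       # (T,) attention weights
    attended = (1.0 + w)[:, None] * frame_feats  # (T, D) residual re-weighting
    return attended.mean(axis=0)                 # (D,) video-level feature

# Toy example: 4 frames with 3-dim features (hypothetical values).
feats = np.ones((4, 3))
probs = np.array([0.5, 0.9, 0.1, 0.5])  # 0.5 = domain classifier maximally confused
video_feat = domain_attention_pool(feats, probs)
```

Here the confidently classified frames (p = 0.9 and p = 0.1) contribute more to the pooled feature than the ambiguous ones (p = 0.5), which is the intended "focus on frames with higher domain discrepancy" behavior.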


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Segmentation | 50 Salads | DA | F1@10% | 82.0 | #19 |
| Action Segmentation | 50 Salads | DA | F1@25% | 80.1 | #19 |
| Action Segmentation | 50 Salads | DA | F1@50% | 72.5 | #19 |
| Action Segmentation | 50 Salads | DA | Edit | 75.2 | #18 |
| Action Segmentation | 50 Salads | DA | Acc | 83.2 | #18 |
| Action Segmentation | Breakfast | DA | F1@10% | 74.2 | #16 |
| Action Segmentation | Breakfast | DA | F1@25% | 68.6 | #18 |
| Action Segmentation | Breakfast | DA | F1@50% | 56.5 | #13 |
| Action Segmentation | Breakfast | DA | Edit | 73.6 | #15 |
| Action Segmentation | Breakfast | DA | Acc | 71.0 | #11 |
| Action Segmentation | GTEA | DA | F1@10% | 90.5 | #11 |
| Action Segmentation | GTEA | DA | F1@25% | 88.4 | #14 |
| Action Segmentation | GTEA | DA | F1@50% | 76.2 | #16 |
| Action Segmentation | GTEA | DA | Edit | 85.8 | #13 |
| Action Segmentation | GTEA | DA | Acc | 80.0 | #11 |
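The "Edit" metric in the table is the segmental edit score commonly used in action segmentation benchmarks: a normalized Levenshtein distance between the predicted and ground-truth sequences of segment labels (frame durations are ignored). A minimal sketch, with helper names of our own choosing:

```python
def segments(frame_labels):
    """Collapse per-frame labels into the ordered sequence of segment labels."""
    segs = []
    for lab in frame_labels:
        if not segs or segs[-1] != lab:
            segs.append(lab)
    return segs

def edit_score(pred_frames, gt_frames):
    """Segmental edit score: 100 * (1 - normalized Levenshtein distance)
    between predicted and ground-truth segment label sequences."""
    p, g = segments(pred_frames), segments(gt_frames)
    m, n = len(p), len(g)
    # Standard dynamic-programming Levenshtein distance over segments.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return 100.0 * (1.0 - d[m][n] / max(m, n, 1))

# A perfect prediction scores 100; over-segmentation lowers the score.
print(edit_score(list("AABBC"), list("AABBC")))  # 100.0
print(edit_score(list("ABABC"), list("AABBC")))  # 60.0
```

Because durations are discarded before comparison, the edit score specifically penalizes over-segmentation errors, which is why it is reported alongside frame accuracy and the segmental F1@k scores.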

