Temporal Attentive Alignment for Large-Scale Video Domain Adaptation

Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9% accuracy gain over "Source only" from 73.9% to 81.8% on "HMDB --> UCF", and 10.3% gain on "Kinetics --> Gameplay"). The code and data are released at http://github.com/cmhungsteve/TA3N.
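The core idea of attending to temporal dynamics via domain discrepancy can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: it combines a gradient-reversal layer (the standard adversarial-DA trick) with a per-frame domain discriminator, then weights each frame by how domain-discriminative it is (low domain-prediction entropy means large discrepancy, so the frame gets more attention during alignment). All module and variable names here are assumptions for exposition.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity on the forward pass,
    negated (scaled) gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


class TemporalAttentiveAlignment(nn.Module):
    """Hypothetical sketch of temporal attention driven by domain discrepancy:
    a per-frame domain classifier (trained adversarially through gradient
    reversal) scores each frame; frames whose domain is easy to predict
    (low entropy, i.e. high discrepancy) are up-weighted before pooling."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.domain_clf = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, frames, lambd=1.0):
        # frames: (batch, time, feat_dim) frame-level features
        rev = GradReverse.apply(frames, lambd)
        dom_logits = self.domain_clf(rev)                  # (B, T, 2)
        p = torch.softmax(dom_logits, dim=-1)
        entropy = -(p * torch.log(p + 1e-8)).sum(dim=-1)   # (B, T)
        # Normalize entropy to [0, 1]; attention = 1 + (1 - normalized entropy),
        # so domain-discriminative frames contribute more to the video feature.
        attn = 1.0 + (1.0 - entropy / torch.log(torch.tensor(2.0)))
        video_feat = (attn.unsqueeze(-1) * frames).mean(dim=1)  # (B, feat_dim)
        return video_feat, dom_logits
```

During training, `dom_logits` would feed a domain-classification loss whose gradient, reversed by `GradReverse`, pushes the feature extractor toward domain-invariant frame features, while the attention keeps alignment focused on the frames that matter most.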

Published at ICCV 2019.
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Domain Adaptation | EPIC-KITCHENS-100 | TA3N | Average Accuracy | 39.9 | #3 |
| Domain Adaptation | HMDB --> UCF (full) | TA3N | Accuracy | 81.79 | #3 |
| Unsupervised Domain Adaptation | Jester (Gesture Recognition) | TA3N | Accuracy | 55.5 | #3 |
| Unsupervised Domain Adaptation | UCF-HMDB | TA3N | Accuracy | 81.38 | #4 |
| Domain Adaptation | UCF --> HMDB (full) | TA3N | Accuracy | 78.33 | #4 |

Results from Other Papers


| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Unsupervised Domain Adaptation | HMDB-UCF | TA3N | Accuracy | 90.54 | #4 |
