STARS: Self-supervised Tuning for 3D Action Recognition in Skeleton Sequences

15 Jul 2024  ·  Soroush Mehraban, Mohammad Javad Rajabi, Babak Taati

Self-supervised pretraining methods based on masked prediction demonstrate remarkable within-dataset performance in skeleton-based action recognition. However, we show that, unlike contrastive learning approaches, they do not produce well-separated clusters. These methods also struggle to generalize in few-shot settings. To address these issues, we propose Self-supervised Tuning for 3D Action Recognition in Skeleton sequences (STARS). Specifically, STARS first trains an encoder-decoder architecture on a masked prediction task. It then employs nearest-neighbor contrastive learning to partially tune the weights of the encoder, enhancing the formation of semantic clusters for different actions. By tuning the encoder for only a few epochs, and without using hand-crafted data augmentations, STARS achieves state-of-the-art self-supervised results on several benchmarks, including NTU-60, NTU-120, and PKU-MMD. In addition, STARS significantly outperforms masked prediction models in few-shot settings, where the model has not seen the actions during pretraining. Project page: https://soroushmehraban.github.io/stars/
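The second-stage tuning objective described in the abstract is nearest-neighbor contrastive learning, where each anchor embedding is swapped for its nearest neighbor in a support queue of past embeddings before computing an InfoNCE loss against the positive views. A minimal NumPy sketch of that idea (the function name, queue layout, and temperature value are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def nn_contrastive_loss(z, z_pos, queue, temperature=0.1):
    """Sketch of a nearest-neighbor contrastive (NNCLR-style) loss.

    z       : (B, D) anchor embeddings from the encoder
    z_pos   : (B, D) positive-view embeddings
    queue   : (Q, D) support queue of past embeddings
    """
    # L2-normalize so dot products are cosine similarities
    def norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    z, z_pos, queue = norm(z), norm(z_pos), norm(queue)

    # Replace each anchor with its nearest neighbor from the queue
    nn = queue[(z @ queue.T).argmax(axis=1)]      # (B, D)

    # InfoNCE over the batch: diagonal entries are the positives
    logits = nn @ z_pos.T / temperature           # (B, B)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Using the queue's nearest neighbor as the positive, rather than the anchor itself, pulls semantically similar actions together without relying on hand-crafted augmentations, which matches the abstract's claim that STARS avoids them.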

Benchmark results

Task                                                Dataset        Model  Metric            Value  Rank
Self-supervised Skeleton-based Action Recognition   NTU RGB+D      STARS  Accuracy (XSub)   87.1   #1
                                                                          Accuracy (XView)  90.9   #1
Few-Shot Skeleton-Based Action Recognition          NTU RGB+D 120  STARS  Acc (1-shot)      63.5   #1
                                                                          Acc (2-shot)      62.2   #1
                                                                          Acc (5-shot)      65.7   #1
Self-supervised Skeleton-based Action Recognition   NTU RGB+D 120  STARS  Accuracy (XSub)   79.9   #2
                                                                          Accuracy (XSet)   80.8   #1
