Language-based Action Concept Spaces Improve Video Self-Supervised Learning

NeurIPS 2023 · Kanchana Ranasinghe, Michael Ryoo

Recent contrastive language-image pre-training has led to highly transferable and robust image representations. However, adapting these models to video domains with minimal supervision remains an open problem. We explore a simple step in that direction, using language-tied self-supervised learning to adapt an image CLIP model to the video domain. A backbone modified for temporal modeling is trained under self-distillation settings with training objectives operating in an action concept space. This space is constructed from feature vectors of various action concepts extracted from a language encoder using relevant textual prompts. We introduce two training objectives, concept distillation and concept alignment, that retain the generality of the original representations while enforcing relations between actions and their attributes. Our approach improves zero-shot and linear probing performance on three action recognition benchmarks.
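A minimal sketch of the core idea, not the authors' implementation: concept vectors are obtained from a frozen CLIP text encoder applied to action prompts, and a self-distillation loss is computed over softmaxed video-to-concept similarities. The prompt list, temperature values, and function names below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP (https://github.com/openai/CLIP)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/16", device=device)

# 1) Build the action concept space from textual prompts (illustrative prompts).
action_prompts = ["a video of a person cooking", "a video of a person swimming"]
with torch.no_grad():
    tokens = clip.tokenize(action_prompts).to(device)
    concepts = model.encode_text(tokens)              # (K, D) concept features
    concepts = F.normalize(concepts.float(), dim=-1)  # unit-norm concept vectors

# 2) A concept-space self-distillation loss (hypothetical formulation):
#    cross-entropy between teacher and student distributions over concepts.
def concept_distillation_loss(student_feats, teacher_feats, tau_s=0.1, tau_t=0.05):
    """student_feats, teacher_feats: (B, D) video embeddings from the student
    and the (gradient-free, e.g. EMA) teacher branches of self-distillation."""
    s = F.normalize(student_feats, dim=-1) @ concepts.T  # (B, K) similarities
    t = F.normalize(teacher_feats, dim=-1) @ concepts.T
    log_p_s = F.log_softmax(s / tau_s, dim=-1)
    p_t = F.softmax(t / tau_t, dim=-1).detach()
    return -(p_t * log_p_s).sum(dim=-1).mean()
```

The sketch only shows projecting video features onto language-derived concept vectors and distilling in that space; the paper's exact objectives (concept distillation and concept alignment) and their weighting follow the original formulation.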

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Self-Supervised Action Recognition (Linear) | HMDB51 | LSS | Top-1 Accuracy | 69.4 | #1 |
| Self-Supervised Action Recognition (Linear) | Kinetics-400 | LSS | Top-1 Accuracy | 67.3 | #3 |
| Self-Supervised Action Recognition (Linear) | UCF101 | LSS | Top-1 Accuracy | 91.1 | #1 |
