Video-Language Pretraining (VLP), which aims to learn transferable representations to advance a wide range of video-text downstream tasks, has recently received increasing attention. The best-performing works rely on large-scale, 3rd-person video-text datasets, such as HowTo100M. In this work, we exploit the recently released Ego4D dataset to pioneer Egocentric VLP along three directions. (i) We create EgoClip, a 1st-person video-text pretraining dataset comprising 3.8M clip-text pairs carefully selected from Ego4D, covering a large variety of human daily activities. (ii) We propose a novel pretraining objective, dubbed EgoNCE, which adapts video-text contrastive learning to the egocentric domain by mining egocentric-aware positive and negative samples. (iii) We introduce EgoMCQ, a development benchmark closely aligned with EgoClip that supports effective validation and fast exploration of our design decisions for EgoClip and EgoNCE. Furthermore, we demonstrate strong performance on five egocentric downstream tasks across three datasets: video-text retrieval on EPIC-KITCHENS-100; action recognition on Charades-Ego; and natural language query, moment query, and object state change classification on the Ego4D challenge benchmarks. The dataset and code are available at https://github.com/showlab/EgoVLP.
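The abstract only names EgoNCE, so below is a minimal PyTorch sketch of what such an egocentric-aware contrastive objective could look like for the video-to-text direction: positives are mined within the batch (e.g. narrations judged to describe the same action as the query clip), and the denominator can include extra scene-aware negatives appended to the candidate set. Function names, tensor shapes, and the temperature value are illustrative assumptions, not the released EgoVLP implementation.

```python
import torch
import torch.nn.functional as F


def egonce_v2t(video_emb: torch.Tensor,
               text_emb: torch.Tensor,
               pos_mask: torch.Tensor,
               temperature: float = 0.05) -> torch.Tensor:
    """Sketch of an EgoNCE-style video-to-text contrastive loss (illustrative).

    video_emb: (N, D) clip embeddings.
    text_emb:  (M, D) narration embeddings; M >= N if scene-aware negatives
               from the same videos are appended as extra candidates.
    pos_mask:  (N, M) boolean; pos_mask[i, k] is True when narration k is
               mined as a positive for clip i (the paired narration is always
               included, so every row has at least one positive).
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    sim = v @ t.T / temperature                 # (N, M) similarity logits
    exp_sim = sim.exp()
    pos = (exp_sim * pos_mask.float()).sum(dim=1)   # mined positives only
    denom = exp_sim.sum(dim=1)                      # all candidates, incl. appended negatives
    return -(pos / denom).log().mean()
```

A symmetric text-to-video term would typically be added and the two averaged; see the released code at the repository above for the exact formulation used in the paper.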

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | Charades-Ego | EgoVLP | mAP | 32.1 | #4 |
| Object State Change Classification | Ego4D | EgoVLP | Acc | 73.9 | #2 |
| Natural Language Queries | Ego4D | EgoVLP | R@1 IoU=0.3 | 10.46 | #5 |
| Natural Language Queries | Ego4D | EgoVLP | R@5 IoU=0.3 | 16.76 | #4 |
| Natural Language Queries | Ego4D | EgoVLP | R@1 IoU=0.5 | 6.24 | #5 |
| Natural Language Queries | Ego4D | EgoVLP | R@5 IoU=0.5 | 11.29 | #4 |
| Natural Language Queries | Ego4D | EgoVLP | R@1 Mean(0.3 and 0.5) | 8.35 | #4 |
| Moment Queries | Ego4D | EgoVLP | Avg mAP (0.1-0.5) | 11.39 | #5 |
| Question Answering | EgoTaskQA | EgoVLP | Direct | 42.51 | #2 |
| Multi-Instance Retrieval | EPIC-KITCHENS-100 | EgoVLP | mAP (V2T) | 49.9 | #5 |
| Multi-Instance Retrieval | EPIC-KITCHENS-100 | EgoVLP | mAP (T2V) | 40.5 | #5 |
| Multi-Instance Retrieval | EPIC-KITCHENS-100 | EgoVLP | mAP (Avg) | 45 | #8 |
| Multi-Instance Retrieval | EPIC-KITCHENS-100 | EgoVLP | nDCG (V2T) | 60.9 | #5 |
| Multi-Instance Retrieval | EPIC-KITCHENS-100 | EgoVLP | nDCG (T2V) | 57.9 | #4 |
| Multi-Instance Retrieval | EPIC-KITCHENS-100 | EgoVLP | nDCG (Avg) | 59.4 | #6 |
| Video Summarization | Query-Focused Video Summarization Dataset | EgoVLP | F1 (avg) | 49.72 | #2 |