Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos

Action recognition models have shown a promising capability to classify human actions in short video clips. In real-world scenarios, multiple correlated human actions commonly occur in particular orders, forming semantically meaningful human activities. Conventional action recognition approaches focus on analyzing single actions and fail to fully reason about the contextual relations between adjacent actions, which provide potential temporal logic for understanding long videos. In this paper, we propose a prompt-based framework, Bridge-Prompt (Br-Prompt), to model the semantics across adjacent actions, so that it simultaneously exploits both out-of-context and contextual information from a series of ordinal actions in instructional videos. More specifically, we reformulate the individual action labels as integrated text prompts for supervision, which bridge the gap between individual action semantics. The generated text prompts are paired with corresponding video clips, and together co-train the text encoder and the video encoder via a contrastive approach. The learned vision encoder has a stronger capability for ordinal-action-related downstream tasks, e.g., action segmentation and human activity recognition. We evaluate the performance of our approach on several video datasets: Georgia Tech Egocentric Activities (GTEA), 50Salads, and the Breakfast dataset. Br-Prompt achieves state-of-the-art results on multiple benchmarks. Code is available at
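The abstract describes co-training a text encoder and a video encoder on (video clip, text prompt) pairs via a contrastive objective. A minimal sketch of such a CLIP-style symmetric contrastive loss is shown below; it assumes the clip and prompt embeddings have already been computed by the two encoders, and all function names and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # project embeddings onto the unit sphere so dot products are cosine similarities
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """CLIP-style symmetric cross-entropy over a batch of
    (video clip, text prompt) embedding pairs; pair i is the
    positive for row/column i of the similarity matrix.
    NOTE: a hedged sketch, not the paper's exact objective."""
    v = l2_normalize(video_emb)
    t = l2_normalize(text_emb)
    logits = v @ t.T / temperature            # (B, B) similarity matrix
    labels = np.arange(len(logits))           # matched pairs lie on the diagonal

    def xent(lg):
        # numerically stable cross-entropy with the diagonal as targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the video->text and text->video directions
    return 0.5 * (xent(logits) + xent(logits.T))

# toy batch: 4 clip/prompt embedding pairs of dimension 8
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
loss = symmetric_contrastive_loss(v, v)  # identical pairs -> near-zero loss
```

In practice both encoders would be trained jointly by backpropagating through this loss (e.g., in PyTorch), pulling each clip's embedding toward its paired prompt and away from the other prompts in the batch.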

CVPR 2022

Results from the Paper

Ranked #4 on Action Segmentation on GTEA (using extra training data)

Task                 Dataset    Model               Metric   Metric Value  Global Rank
Action Segmentation  50 Salads  Br-Prompt+ASFormer  F1@10%   89.2          # 5
                                                    F1@25%   87.8          # 5
                                                    F1@50%   81.3          # 7
                                                    Edit     83.8          # 6
                                                    Acc      88.1          # 5
Action Segmentation  GTEA       Br-Prompt+ASFormer  F1@10%   94.1          # 3
                                                    F1@25%   92.0          # 3
                                                    F1@50%   83.0          # 4
                                                    Edit     91.6          # 4
                                                    Acc      81.2          # 6

