8 papers with code • 2 benchmarks • 3 datasets
In this work, we propose SoccerNet-v2, a novel large-scale corpus of manual annotations for the SoccerNet video dataset, along with open challenges to encourage more research in soccer understanding and broadcast production.
A total of 6,637 temporal annotations are automatically parsed from online match reports at a one-minute resolution for three main classes of events (Goal, Yellow/Red Card, and Substitution).
Feature Combination Meets Attention: Baidu Soccer Embeddings and Transformer based Temporal Detection
With rapidly evolving internet technologies and emerging tools, sports-related videos generated online are increasing at an unprecedented pace.
To address this need, we propose the new problem of action spotting in video, which we define as finding a specific action in a video while observing a small portion of that video.
We benchmark our loss on a large dataset of soccer videos, SoccerNet, and achieve an improvement of 12.8% over the baseline.
In this paper, we focus our analysis on action spotting in soccer broadcasts, which consists of temporally localizing the main actions in a soccer game.
Action spotting in soccer videos is the task of identifying the specific time when a certain key action of the game occurs.
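The task defined above is typically evaluated by checking whether each predicted timestamp falls within a tolerance window of a ground-truth event time. A minimal sketch of that matching rule, with illustrative function and variable names (not taken from any SoccerNet codebase) and an assumed default tolerance of 5 seconds:

```python
# Hypothetical sketch of tolerance-based spotting evaluation: a ground-truth
# event counts as spotted if some prediction lies within +/- delta seconds
# of it. Names and the delta value are illustrative assumptions.

def spotting_accuracy(predictions, ground_truth, delta=5.0):
    """Fraction of ground-truth event times (in seconds) matched by a
    prediction within +/- delta seconds; each prediction is used once."""
    matched = 0
    remaining = list(predictions)
    for gt in ground_truth:
        # Find the closest unused prediction within the tolerance window.
        best = None
        for p in remaining:
            if abs(p - gt) <= delta and (best is None or abs(p - gt) < abs(best - gt)):
                best = p
        if best is not None:
            matched += 1
            remaining.remove(best)
    return matched / len(ground_truth) if ground_truth else 0.0

# Example: two of three ground-truth events are spotted within 5 seconds.
print(spotting_accuracy([12.0, 44.0, 300.0], [10.0, 45.0, 120.0]))
```

Benchmarks such as SoccerNet report this kind of metric averaged over a range of tolerances, so a single fixed delta is a simplification.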