14 papers with code • 2 benchmarks • 3 datasets
In this work, we propose SoccerNet-v2, a novel large-scale corpus of manual annotations for the SoccerNet video dataset, along with open challenges to encourage more research in soccer understanding and broadcast production.
A total of 6,637 temporal annotations are automatically parsed from online match reports at a one-minute resolution for three main classes of events (Goal, Yellow/Red Card, and Substitution).
Feature Combination Meets Attention: Baidu Soccer Embeddings and Transformer based Temporal Detection
With rapidly evolving internet technologies and emerging tools, sports-related videos generated online are increasing at an unprecedentedly fast pace.
We introduce the task of spotting temporally precise, fine-grained events in video (detecting the precise moment in time events occur).
To address this need, we propose the new problem of action spotting in video, which we define as finding a specific action in a video while observing a small portion of that video.
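Evaluating an action-spotting method typically means checking whether each predicted timestamp falls within a tolerance window of a ground-truth event. A minimal sketch of such tolerance-based matching is below; the function name, the greedy matching strategy, and the fixed 5-second tolerance are illustrative assumptions (the SoccerNet benchmarks actually report an average-mAP over a range of tolerances).

```python
def spot_matches(predictions, ground_truth, tolerance=5.0):
    """Greedily match predicted spot times (seconds) to ground-truth times.

    A prediction counts as correct if it lies within +/- tolerance seconds
    of an unmatched ground-truth event. Illustrative sketch only.
    """
    matched = 0
    used = set()  # indices of ground-truth events already matched
    for p in sorted(predictions):
        for i, g in enumerate(ground_truth):
            if i not in used and abs(p - g) <= tolerance:
                used.add(i)
                matched += 1
                break
    return matched


# Example: one prediction lands within 5 s of a goal, the other misses.
print(spot_matches([10.0, 62.0], [11.0, 300.0]))  # → 1
```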
We benchmark our loss on a large dataset of soccer videos, SoccerNet, and achieve an improvement of 12.8% over the baseline.
In this paper, we focus our analysis on action spotting in soccer broadcasts, which consists of temporally localizing the main actions in a soccer game.
We present a model for temporally precise action spotting in videos, which uses a dense set of detection anchors, predicting a detection confidence and corresponding fine-grained temporal displacement for each anchor.
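The anchor-based scheme described above can be sketched as follows: each anchor at time t predicts a confidence score and a fine-grained displacement d, so the refined detection time is t + d; confident detections are then reduced with 1-D non-maximum suppression. This is a minimal illustrative decoding step under assumed names and thresholds, not the paper's exact implementation.

```python
import numpy as np

def decode_anchors(confidences, displacements, anchor_times,
                   threshold=0.5, nms_window=2.0):
    """Decode a dense 1-D anchor grid into spot times (sketch).

    confidences, displacements, anchor_times: equal-length arrays, one
    entry per anchor. Each anchor refines its time by its predicted
    displacement; detections closer than nms_window seconds to an
    already-kept, more confident detection are suppressed.
    """
    times = np.asarray(anchor_times) + np.asarray(displacements)
    order = np.argsort(-np.asarray(confidences))  # most confident first
    kept = []
    for i in order:
        if confidences[i] < threshold:
            break  # remaining anchors are below threshold
        if all(abs(times[i] - t) > nms_window for t in kept):
            kept.append(float(times[i]))
    return sorted(kept)


# Example: four anchors, one second apart; two survive thresholding + NMS.
print(decode_anchors([0.9, 0.8, 0.4, 0.7],
                     [0.1, -0.2, 0.3, 0.0],
                     [0.0, 1.0, 2.0, 3.0]))  # → [0.1, 3.0]
```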
Our submission was based on a recently proposed method which focuses on increasing temporal precision via a densely sampled set of detection anchors.