Spatio-Temporal Action Localization
13 papers with code • 1 benchmark • 6 datasets
Latest papers
VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
Finally, we successfully train a video ViT model with a billion parameters, which achieves a new state-of-the-art performance on the datasets of Kinetics (90.0% on K400 and 89.9% on K600) and Something-Something (68.7% on V1 and 77.0% on V2).
Unmasked Teacher: Towards Training-Efficient Video Foundation Models
Previous VFMs rely on Image Foundation Models (IFMs), which face challenges in transferring to the video domain.
InternVideo: General Video Foundation Models via Generative and Discriminative Learning
Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications.
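The "learnable coordination" of two complementary representations can be pictured as a gated blend: a small learned function scores, per clip, how much to trust each branch. The sketch below is a minimal NumPy illustration under assumed names and dimensions (`fuse`, `feat_mvm`, `feat_clip`, `w`, `b` are all hypothetical), not InternVideo's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(feat_mvm, feat_clip, w, b):
    """Blend two per-clip feature vectors with a learned gate.

    feat_mvm:  (N, D) features from a masked-video-modeling branch
    feat_clip: (N, D) features from a video-language contrastive branch
    w, b:      gate parameters, shape (2*D, 1) and scalar (illustrative)

    alpha in (0, 1) decides, per clip, how much weight each branch gets.
    """
    gate_in = np.concatenate([feat_mvm, feat_clip], axis=-1)  # (N, 2*D)
    alpha = sigmoid(gate_in @ w + b)                          # (N, 1), broadcasts over D
    return alpha * feat_mvm + (1 - alpha) * feat_clip
```

In practice such a gate would be trained jointly with the downstream task, so the blend itself adapts to each application rather than being a fixed average.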
E^2TAD: An Energy-Efficient Tracking-based Action Detector
Video action detection (spatio-temporal action localization) is typically the starting point for human-centric intelligent video analysis.
Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision
Modern self-supervised learning algorithms typically enforce persistency of instance representations across views.
KORSAL: Key-point Detection based Online Real-Time Spatio-Temporal Action Localization
Despite the simplicity of our approach, our lightweight end-to-end architecture achieves state-of-the-art frame-mAP of 74.7% on the challenging UCF101-24 dataset, demonstrating a performance gain of 6.4% over the previous best online methods.
ST-HOI: A Spatial-Temporal Baseline for Human-Object Interaction Detection in Videos
Detecting human-object interactions (HOI) is an important step toward comprehensive visual understanding by machines.
1st place solution for AVA-Kinetics Crossover in ActivityNet Challenge 2020
This technical report introduces our winning solution to the spatio-temporal action localization track, AVA-Kinetics Crossover, in ActivityNet Challenge 2020.
Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization
We propose to explicitly model the Actor-Context-Actor Relation, which is the relation between two actors based on their interactions with the context.
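The idea of a relation between two actors mediated by the context can be sketched in two steps: each actor first attends over spatial context features, then pairwise relations are read off the context-conditioned actor features rather than raw appearance alone. Below is a minimal NumPy sketch under assumed shapes and names (`actor_context_actor`, `actors`, `context` are illustrative); it is not the paper's network.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def actor_context_actor(actors, context):
    """Two-stage relation sketch.

    actors:  (N, D) actor box features
    context: (M, D) spatial context features (e.g., feature-map cells)

    Step 1: each actor attends over the context (actor-context relation).
    Step 2: pairwise actor relations are computed from the
            context-conditioned features (actor-context-actor relation).
    """
    scale = np.sqrt(actors.shape[1])
    attn = softmax(actors @ context.T / scale)   # (N, M) attention over context
    actor_ctx = actors + attn @ context          # context-conditioned actor features
    relations = actor_ctx @ actor_ctx.T          # (N, N) pairwise relation scores
    return actor_ctx, relations
```

The point of the second stage is that two actors can be related through a shared object or scene region even when their own appearances say little about the interaction.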
Video action detection by learning graph-based spatio-temporal interactions
Action Detection is a complex task that aims to detect and classify human actions in video clips.