Action Localization

166 papers with code • 0 benchmarks • 4 datasets

Action Localization is the task of finding the spatial and temporal extent of an action in a video. An action localization model identifies the frames in which an action starts and ends and returns the spatial coordinates (e.g., a bounding box) of the actor in each frame. These coordinates change over time as the person or object performing the action moves.
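
In practice the output of such a model can be summarized as a labelled temporal segment plus a per-frame bounding-box track. The sketch below shows one way to represent this in Python; the `ActionInstance` class and its field names are illustrative, not taken from any particular library.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# (x1, y1, x2, y2) box in pixel coordinates
Box = Tuple[float, float, float, float]

@dataclass
class ActionInstance:
    """One localized action: what it is, when it happens, and where the actor is."""
    label: str                      # e.g. "stand up"
    score: float                    # detection confidence
    start_frame: int                # temporal extent
    end_frame: int
    boxes_by_frame: Dict[int, Box] = field(default_factory=dict)  # spatial extent per frame

    def box_at(self, frame: int) -> Box:
        # The box moves with the actor, so it is looked up per frame.
        return self.boxes_by_frame[frame]

# A toy detection: a person standing up between frames 120 and 150,
# with the bounding box drifting as they move.
instance = ActionInstance(
    label="stand up",
    score=0.87,
    start_frame=120,
    end_frame=150,
    boxes_by_frame={120: (40.0, 60.0, 120.0, 300.0),
                    150: (55.0, 30.0, 135.0, 310.0)},
)
print(instance.box_at(150))
```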

Most implemented papers

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

tensorflow/models CVPR 2018

The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently.
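
AVA ships its labels as CSV rows that pair a person box at a keyframe timestamp with one action ID, so a person performing several simultaneous actions appears in several rows. The loader below is a minimal sketch assuming a row layout of `video_id, timestamp, x1, y1, x2, y2, action_id, person_id`; check the official release before relying on it.

```python
import csv
from collections import defaultdict

def load_ava_csv(path):
    """Group AVA-style rows so the multiple action labels attached to the
    same person at the same keyframe end up together."""
    labels = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            video_id, timestamp = row[0], float(row[1])
            box = tuple(float(v) for v in row[2:6])      # normalized [0, 1] coordinates
            action_id, person_id = int(row[6]), int(row[7])
            labels[(video_id, timestamp, person_id)].append((box, action_id))
    return labels
```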

You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization

wei-tim/YOWO 15 Nov 2019

YOWO is a single-stage architecture with two branches to extract temporal and spatial information concurrently and predict bounding boxes and action probabilities directly from video clips in one evaluation.
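
A two-branch design of this kind can be sketched in PyTorch as follows. The backbones are toy stand-ins (YOWO itself uses a 3D-ResNeXt clip branch and a Darknet-style key-frame branch with an attention-based fusion module), and the feature sizes and head layout are illustrative only.

```python
import torch
import torch.nn as nn

class TwoBranchDetector(nn.Module):
    """Toy sketch of a YOWO-style single-stage spatiotemporal detector."""
    def __init__(self, num_actions=80, anchors=5):
        super().__init__()
        # 3D branch: temporal context from the whole clip.
        self.branch_3d = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 28, 28)),   # collapse the temporal axis
        )
        # 2D branch: spatial detail from the key frame.
        self.branch_2d = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((28, 28)),
        )
        # Fused head predicts, per anchor: 4 box offsets + 1 objectness + action scores.
        self.head = nn.Conv2d(128, anchors * (5 + num_actions), kernel_size=1)

    def forward(self, clip):                      # clip: (B, 3, T, H, W)
        key_frame = clip[:, :, -1]                # (B, 3, H, W)
        f3d = self.branch_3d(clip).squeeze(2)     # (B, 64, 28, 28)
        f2d = self.branch_2d(key_frame)           # (B, 64, 28, 28)
        fused = torch.cat([f3d, f2d], dim=1)      # channel-wise fusion
        return self.head(fused)                   # one evaluation per clip

out = TwoBranchDetector()(torch.randn(2, 3, 16, 224, 224))
print(out.shape)  # (2, anchors * (5 + 80), 28, 28)
```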

ACGNet: Action Complement Graph Network for Weakly-supervised Temporal Action Localization

xjtupanda/ACGNet 21 Dec 2021

Weakly-supervised temporal action localization (WTAL) in untrimmed videos has emerged as a practical but challenging task since only video-level labels are available.
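
A common WTAL baseline (not specific to ACGNet) trains a per-segment classifier with nothing but a video-level multi-label loss, for example by pooling the top-scoring segments per class. The sketch below shows that generic setup under assumed tensor shapes.

```python
import torch
import torch.nn.functional as F

def video_level_loss(segment_logits, video_labels, k=8):
    """segment_logits: (B, T, C) per-segment class scores;
    video_labels: (B, C) multi-hot video-level labels."""
    # Pool the k highest-scoring segments per class into a video-level score,
    # so supervision needs no temporal annotations at all.
    topk = segment_logits.topk(k, dim=1).values.mean(dim=1)   # (B, C)
    return F.binary_cross_entropy_with_logits(topk, video_labels)

loss = video_level_loss(torch.randn(4, 100, 20), torch.randint(0, 2, (4, 20)).float())
print(loss.item())
```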

HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips

antoine77340/MIL-NCE_HowTo100M ICCV 2019

In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations.
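
The accompanying MIL-NCE objective handles many candidate narrations per clip; the sketch below is a simplified symmetric InfoNCE-style text-video contrastive loss that only captures the basic idea of pulling matched clip/narration embeddings together.

```python
import torch
import torch.nn.functional as F

def contrastive_text_video_loss(video_emb, text_emb, temperature=0.07):
    """video_emb, text_emb: (B, D) embeddings of clips and their narrations.
    Matched (clip, narration) pairs sit on the diagonal of the similarity matrix."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                     # (B, B) similarities
    targets = torch.arange(len(v))
    # Pull matched pairs together, push mismatched clip/narration pairs apart.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = contrastive_text_video_loss(torch.randn(8, 512), torch.randn(8, 512))
```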

Recognition of Instrument-Tissue Interactions in Endoscopic Videos via Action Triplets

camma-public/tripnet 10 Jul 2020

Recognition of surgical activity is an essential component to develop context-aware decision support for the operating room.
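
The action triplets in the title describe each activity as an ⟨instrument, verb, target⟩ tuple. The snippet below only illustrates that representation; the field values are examples, not the dataset's official label set.

```python
from typing import NamedTuple

class ActionTriplet(NamedTuple):
    """<instrument, verb, target> description of one surgical action."""
    instrument: str
    verb: str
    target: str

# Example triplet of the kind such models predict per video frame.
t = ActionTriplet(instrument="grasper", verb="retract", target="gallbladder")
```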

Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-supervised Object and Action Localization

zhengshou/AutoLoc ICCV 2017

We propose `Hide-and-Seek', a weakly-supervised framework that aims to improve object localization in images and action localization in videos.
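
The core trick is to randomly hide patches of the training images so the network cannot rely only on the most discriminative region. The function below is a minimal sketch; the paper replaces hidden patches with the training-set mean pixel, for which `images.mean()` is a rough stand-in.

```python
import torch

def hide_patches(images, grid=4, hide_prob=0.5):
    """Randomly blank out grid cells of each training image (Hide-and-Seek-style).
    images: (B, C, H, W); H and W are assumed divisible by `grid`."""
    B, C, H, W = images.shape
    ph, pw = H // grid, W // grid
    out = images.clone()
    mask = torch.rand(B, grid, grid) < hide_prob          # which cells to hide
    for b in range(B):
        for i in range(grid):
            for j in range(grid):
                if mask[b, i, j]:
                    # Replace the hidden patch with an approximate mean value.
                    out[b, :, i*ph:(i+1)*ph, j*pw:(j+1)*pw] = images.mean()
    return out

hidden = hide_patches(torch.rand(2, 3, 224, 224))
```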

Weakly Supervised Action Localization by Sparse Temporal Pooling Network

demianzhang/weakly-action-localization CVPR 2018

We propose a weakly supervised temporal action localization algorithm on untrimmed videos using convolutional neural networks.
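
A typical realization is attention-weighted temporal pooling: per-segment features are pooled into a video-level representation with learned attention and trained only on video-level labels. The sketch below follows that pattern with illustrative layer sizes and omits the sparsity regularizer on the attention weights that gives the method its name.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Sketch of attention-weighted temporal pooling for weakly supervised
    temporal action localization (shapes and layer sizes are illustrative)."""
    def __init__(self, feat_dim=1024, num_classes=20):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                       nn.Linear(256, 1), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, segment_feats):                  # (B, T, D) per-segment features
        attn = self.attention(segment_feats)           # (B, T, 1) temporal attention
        video_feat = (attn * segment_feats).sum(1) / attn.sum(1).clamp(min=1e-6)
        video_logits = self.classifier(video_feat)     # trained with video-level labels
        # At test time, attention weights and per-segment class scores are
        # thresholded to recover temporal action proposals.
        return video_logits, attn.squeeze(-1)

logits, attn = AttentionPooling()(torch.randn(2, 50, 1024))
```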

Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization

Siyu-C/ACAR-Net CVPR 2021

We propose to explicitly model the Actor-Context-Actor Relation, which is the relation between two actors based on their interactions with the context.
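
A rough way to picture this is: fuse each actor with the scene context first, then let actors attend to one another through those context-conditioned features. The module below is a simplified stand-in with assumed shapes; ACAR-Net itself reasons over the spatial context feature map with a high-order relation operator.

```python
import torch
import torch.nn as nn

class ActorContextActorRelation(nn.Module):
    """Toy sketch of higher-order actor-context-actor reasoning."""
    def __init__(self, dim=256):
        super().__init__()
        self.actor_context = nn.Linear(2 * dim, dim)       # first-order: actor + context
        self.actor_actor = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, actor_feats, context_feat):
        # actor_feats: (B, N, D) pooled per-actor features (e.g. RoI-aligned)
        # context_feat: (B, D) pooled spatio-temporal scene feature
        B, N, D = actor_feats.shape
        ctx = context_feat.unsqueeze(1).expand(B, N, D)
        first_order = torch.relu(self.actor_context(torch.cat([actor_feats, ctx], -1)))
        # Second-order: relations between actors, mediated by their context relations.
        second_order, _ = self.actor_actor(first_order, first_order, first_order)
        return second_order                                # (B, N, D) relation features

out = ActorContextActorRelation()(torch.randn(2, 3, 256), torch.randn(2, 256))
```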

1st place solution for AVA-Kinetics Crossover in ActivityNet Challenge 2020

Siyu-C/ACAR-Net 16 Jun 2020

This technical report introduces our winning solution to the spatio-temporal action localization track, AVA-Kinetics Crossover, in ActivityNet Challenge 2020.