Action Anticipation

24 papers with code • 7 benchmarks • 8 datasets

Action anticipation aims to predict an upcoming action before it is (fully) observed, typically from partially observed or preceding video — in contrast to action recognition, which assumes the complete sequence is available.

Most implemented papers

Rescaling Egocentric Vision

epic-kitchens/epic-kitchens-100-annotations 23 Jun 2020

This paper introduces a pipeline to extend EPIC-KITCHENS, the largest dataset in egocentric vision.

Scaling Egocentric Vision: The EPIC-KITCHENS Dataset

epic-kitchens/epic-kitchens-55-annotations ECCV 2018

First-person vision is gaining interest as it offers a unique viewpoint on people's interaction with objects, their attention, and even intention.

What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention

fpv-iplab/rulstm ICCV 2019

Our method is ranked first in the public leaderboard of the EPIC-Kitchens egocentric action anticipation challenge 2019.

HalluciNet-ing Spatiotemporal Representations Using a 2D-CNN

ParitoshParmar/HalluciNet 10 Dec 2019

The hallucination task is treated as an auxiliary task, which can be combined with any other action-related task in a multitask learning setting.
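
The auxiliary setup described above can be sketched as a weighted sum of a primary action loss and a hallucination term that pushes a 2D-CNN's features toward those of a spatiotemporal (3D-CNN) teacher. This is an illustrative stdlib-only sketch, not the paper's implementation; the feature vectors and the weight are placeholders.

```python
def hallucination_multitask_loss(action_loss, feat_2d, feat_3d, weight=1.0):
    """Multitask objective: primary action-related loss plus an auxiliary
    'hallucination' term matching 2D-CNN features to 3D-CNN target features.
    (Illustrative sketch; not the HalluciNet implementation.)"""
    # mean squared error between hallucinated and target features
    mse = sum((a - b) ** 2 for a, b in zip(feat_2d, feat_3d)) / len(feat_2d)
    return action_loss + weight * mse
```

In practice the 3D-CNN is only needed at training time; at inference the cheaper 2D-CNN runs alone, which is the point of the hallucination objective.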

Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video

antoninofurnari/rulstm 4 May 2020

The experiments show that the proposed architecture is state-of-the-art for egocentric video, achieving top performance in the 2019 EPIC-Kitchens egocentric action anticipation challenge.
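
The rolling-unrolling idea separates summarizing the observed past ("rolling") from stepping into the unobserved future without new input ("unrolling"). The control flow can be sketched as below; `roll_step`, `unroll_step`, and `classify` are placeholder callables standing in for the paper's LSTMs and classifier, not its actual components.

```python
def rolling_unrolling(observed, n_unroll, roll_step, unroll_step, classify):
    """Two-phase anticipation sketch: encode observed features into a state
    (rolling), then iterate the state forward with no input (unrolling),
    emitting a prediction at each anticipation step."""
    state = None
    for feat in observed:           # rolling phase: encode the observed past
        state = roll_step(state, feat)
    predictions = []
    for _ in range(n_unroll):       # unrolling phase: step into the future
        state = unroll_step(state)
        predictions.append(classify(state))
    return predictions
```

Emitting a prediction at every unrolling step is what lets the same model anticipate at multiple anticipation times (e.g. 1 s, 0.5 s before the action).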

Temporal Aggregate Representations for Long-Range Video Understanding

dibschat/tempAgg ECCV 2020

Future prediction, especially in long-range videos, requires reasoning from current and past observations.

Encouraging LSTMs to Anticipate Actions Very Early

mangalutsav/Multi-Stage-LSTM-for-Action-Anticipation ICCV 2017

In contrast to the widely studied problem of recognizing an action given a complete sequence, action anticipation aims to identify the action from only partially available videos.
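
One common way to encourage early anticipation from partial videos is to score a prediction at every observed prefix and weight the per-step losses so that late mistakes cost more. The weighting below is an illustrative linear ramp, not the paper's exact objective.

```python
def early_anticipation_loss(per_step_losses):
    """Combine per-timestep losses with weights that grow over time,
    pressuring the model to commit to the correct action early.
    (Illustrative linear weighting; not the paper's exact loss.)"""
    n = len(per_step_losses)
    return sum((t + 1) / n * loss for t, loss in enumerate(per_step_losses))
```

A model trained this way is rewarded for being right from the first few frames rather than only at the end of the clip.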

RED: Reinforced Encoder-Decoder Networks for Action Anticipation

rajskar/CS763Project 16 Jul 2017

RED takes multiple history representations as input and learns to anticipate a sequence of future representations.
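
The encoder-decoder pattern described above can be sketched as follows: encode the history representations, then autoregressively decode future representations, feeding each prediction back as the next decoder input and classifying each one. `encode`, `decode_step`, and `classify` are placeholder callables, not RED's actual networks, and the reinforcement-learning reward used in the paper is omitted.

```python
def red_style_anticipate(history, encode, decode_step, classify, horizon):
    """Encoder-decoder anticipation sketch: summarize history, then roll
    the decoder forward, predicting one future representation per step and
    anticipating an action label from each. (Sketch only; RED additionally
    trains the decoder with a reinforcement reward.)"""
    state = encode(history)        # summarize the history representations
    rep = history[-1]              # seed the decoder with the last observation
    future_reps, future_labels = [], []
    for _ in range(horizon):
        state, rep = decode_step(state, rep)  # predict next representation
        future_reps.append(rep)
        future_labels.append(classify(rep))   # anticipate the action from it
    return future_reps, future_labels
```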

Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Actions in First Person Video

2020aptx4869lm/Forecasting-Human-Object-Interaction-in-FPV ECCV 2020

We adopt intentional hand movement as a representation of the future and propose a novel deep network that jointly models and predicts egocentric hand motion, interaction hotspots, and future actions.
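
Structurally, joint modeling of this kind amounts to a shared encoding feeding several prediction heads. The sketch below shows only that wiring; `backbone` and the three heads are placeholder callables, not the paper's architecture.

```python
def joint_forecast(video_feats, backbone, motion_head, hotspot_head, action_head):
    """Multi-head forecasting sketch: one shared representation, three
    coupled outputs (hand motion, interaction hotspots, future action)."""
    z = backbone(video_feats)   # shared encoding of the observed video
    return {
        "hand_motion": motion_head(z),
        "hotspots": hotspot_head(z),
        "action": action_head(z),
    }
```

Sharing the backbone is what lets the three tasks inform one another during training, which is the intuition behind treating hand motion as a signal for the future action.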

Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs

aras62/SF-GRU 13 May 2020

We propose a solution to the problem of pedestrian action anticipation at the point of crossing.