28 papers with code • 0 benchmarks • 9 datasets
Detecting activities in extended videos.
This thesis explores different approaches using Convolutional and Recurrent Neural Networks to classify and temporally localize activities in videos; an implementation that achieves this is also proposed.
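As a rough illustration of such a CNN-plus-RNN pipeline, the PyTorch sketch below feeds per-frame CNN features to an LSTM and emits a class score for every frame, which is what yields temporal localization. The module name, feature dimension, and class count are placeholders, not details taken from the thesis.

```python
import torch
import torch.nn as nn

class CNNRNNLocalizer(nn.Module):
    # Hypothetical module, not the thesis implementation.
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=20):
        super().__init__()
        # Stand-in frame encoder; in practice a pretrained CNN backbone
        # would produce the per-frame features.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):              # frames: (batch, time, feat_dim)
        h = self.encoder(frames)            # per-frame embeddings
        h, _ = self.rnn(h)                  # temporal context across frames
        return self.head(h)                 # per-frame class logits

logits = CNNRNNLocalizer()(torch.randn(2, 64, 512))   # -> (2, 64, 20)
```

Thresholding or decoding these per-frame scores gives the start and end of each activity, which is the usual route from frame-level classification to temporal localization.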
We also release the UCLA Protest Image Dataset, our novel dataset of 40,764 images (11,659 protest images and hard negatives) with various annotations of visual attributes and sentiments.
In this paper, we introduce the concept of learning latent super-events from activity videos, and present how it benefits activity detection in continuous videos.
Ranked #3 on Action Detection on Multi-THUMOS
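The sketch below conveys the super-event intuition in a deliberately simplified form: a learned soft attention over time pools the whole video into one context vector, which is concatenated to every frame's feature before classification. The paper itself learns parameterized temporal structure filters; this dense-attention variant and all names in it are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class SuperEventDetector(nn.Module):
    # Simplified stand-in for the paper's temporal structure filters.
    def __init__(self, feat_dim=512, num_classes=20):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)       # per-frame attention score
        self.head = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, frames):                   # frames: (batch, time, feat_dim)
        w = torch.softmax(self.attn(frames), dim=1)           # soft attention over time
        super_event = (w * frames).sum(dim=1, keepdim=True)   # (batch, 1, feat_dim)
        context = super_event.expand_as(frames)               # same context at every frame
        return self.head(torch.cat([frames, context], dim=-1))  # per-frame logits

logits = SuperEventDetector()(torch.randn(2, 64, 512))   # -> (2, 64, 20)
```

The design point is that each frame's decision is conditioned on a video-level summary, so local detections can exploit the global structure of the continuous video.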
We introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture longer-term temporal information in continuous activity videos.
Ranked #2 on Action Detection on Multi-THUMOS
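A rough reading of the TGM idea is sketched below: each temporal convolution kernel is composed from a handful of Gaussians with learnable centers and widths, so a long temporal receptive field costs only a few parameters per kernel instead of one weight per time step. This is an illustrative reconstruction under that reading, not the paper's reference implementation, and every name and hyperparameter here is assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TGMLayer(nn.Module):
    # Illustrative Temporal-Gaussian-Mixture-style layer, assumed details.
    def __init__(self, kernel_size=31, num_gaussians=4, out_channels=8):
        super().__init__()
        self.kernel_size = kernel_size
        self.center = nn.Parameter(torch.rand(num_gaussians))    # positions in [0, 1]
        self.width = nn.Parameter(torch.ones(num_gaussians))     # temporal widths
        self.mix = nn.Parameter(torch.randn(out_channels, num_gaussians))

    def kernels(self):
        t = torch.linspace(0, 1, self.kernel_size)               # (K,)
        g = torch.exp(-(t[None] - self.center[:, None]) ** 2
                      / (2 * self.width[:, None] ** 2))          # (M, K) Gaussians
        g = g / g.sum(dim=1, keepdim=True)                       # each sums to 1
        return torch.softmax(self.mix, dim=1) @ g                # (out_channels, K)

    def forward(self, x):                    # x: (batch, 1, time) per-frame scores
        k = self.kernels().unsqueeze(1)      # (out_channels, 1, K) conv filters
        return F.conv1d(x, k, padding=self.kernel_size // 2)

y = TGMLayer()(torch.randn(2, 1, 100))       # -> (2, 8, 100)
```

Because only the Gaussian centers, widths, and mixing weights are learned, widening `kernel_size` to cover many seconds of video adds no parameters, which is the efficiency argument behind capturing longer-term temporal information this way.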