VideoLSTM Convolves, Attends and Flows for Action Recognition

6 Jul 2016 · Zhenyang Li, Efstratios Gavves, Mihir Jain, Cees G. M. Snoek

We present a new architecture for end-to-end sequence learning of actions in video, which we call VideoLSTM. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium...
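The abstract's core idea of adapting the recurrence to the video medium rests on the convolutional LSTM: the fully connected gate transitions of a standard LSTM are replaced by convolutions, so the hidden state and memory remain spatial maps. A minimal single-channel sketch in NumPy (the helper names `conv2d`, `convlstm_step`, and the kernel dictionaries `W`/`U` are illustrative, not the paper's code):

```python
import numpy as np

def conv2d(x, k):
    # 'Same' 2-D correlation of a single-channel map x with kernel k, stride 1.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, W, U):
    # One convolutional LSTM step: every gate is a convolution over the
    # input map x and the previous hidden map h, so h and c keep the
    # spatial layout of the frame instead of being flattened vectors.
    i = sigmoid(conv2d(x, W['i']) + conv2d(h, U['i']))  # input gate
    f = sigmoid(conv2d(x, W['f']) + conv2d(h, U['f']))  # forget gate
    o = sigmoid(conv2d(x, W['o']) + conv2d(h, U['o']))  # output gate
    g = np.tanh(conv2d(x, W['g']) + conv2d(h, U['g']))  # candidate memory
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Toy usage: run a short sequence of 5x5 "frames" through the cell.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 3)) * 0.1 for k in 'ifog'}
U = {k: rng.standard_normal((3, 3)) * 0.1 for k in 'ifog'}
h = np.zeros((5, 5))
c = np.zeros((5, 5))
for _ in range(4):
    frame = rng.standard_normal((5, 5))
    h, c = convlstm_step(frame, h, c, W, U)
```

The point of the sketch is that `h` stays a 5x5 map after every step, which is what lets spatial attention over the hidden state (the "attends" part of the title) be defined at all.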

