Search Results for author: Juhana Kangaspunta

Found 5 papers, 1 paper with code

AssembleNet++: Assembling Modality Representations via Attention Connections (Supplementary Material)

no code implementations • ECCV 2020 • Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network.

Activity Recognition

Adaptive Intermediate Representations for Video Understanding

no code implementations • 14 Apr 2021 • Juhana Kangaspunta, AJ Piergiovanni, Rico Jonschkowski, Michael Ryoo, Anelia Angelova

A common strategy for video understanding is to incorporate spatial and motion information by fusing features derived from RGB frames and optical flow.

Action Classification • Optical Flow Estimation • +3
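
As a point of reference for the fusion strategy the abstract describes, here is a minimal PyTorch sketch of the standard two-stream setup: one convolutional stem per modality, followed by late fusion of pooled features. The layer sizes and concatenation-based fusion are illustrative assumptions, not the paper's adaptive method, which instead learns the intermediate motion representation end to end.

```python
# Minimal two-stream fusion sketch (illustrative baseline; not the paper's method).
# Assumes precomputed optical flow supplied as a 2-channel input.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # One small conv stem per modality (channel sizes are arbitrary).
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.flow_stream = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Late fusion: concatenate pooled features, then classify.
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        r = self.rgb_stream(rgb).flatten(1)    # (B, 32)
        f = self.flow_stream(flow).flatten(1)  # (B, 32)
        return self.classifier(torch.cat([r, f], dim=1))

model = TwoStreamFusion()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 2, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```
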

AssembleNet++: Assembling Modality Representations via Attention Connections

1 code implementation • 18 Aug 2020 • Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network.

Action Classification • Activity Recognition
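
The released code is the authoritative reference; as a rough illustration of the attention connections the abstract mentions, the sketch below gates one modality stream's features with channel-wise weights computed from a peer stream. Module names, shapes, and the sigmoid gating are assumptions for illustration, not the AssembleNet++ implementation.

```python
# Sketch of a "peer attention" style connection between two feature streams
# (one possible reading of the abstract, not the released code).
import torch
import torch.nn as nn

class PeerAttentionConnection(nn.Module):
    """One stream (e.g., semantic object features) gates another
    (e.g., raw appearance/motion features) with channel-wise attention."""
    def __init__(self, peer_channels: int, target_channels: int):
        super().__init__()
        # Map the peer's pooled features to per-channel weights in [0, 1].
        self.attend = nn.Sequential(
            nn.Linear(peer_channels, target_channels),
            nn.Sigmoid(),
        )

    def forward(self, target: torch.Tensor, peer: torch.Tensor) -> torch.Tensor:
        # target: (B, C_t, T, H, W) video features; peer: (B, C_p, T, H, W)
        pooled = peer.mean(dim=(2, 3, 4))  # global average pool -> (B, C_p)
        weights = self.attend(pooled)      # (B, C_t)
        return target * weights[:, :, None, None, None]

conn = PeerAttentionConnection(peer_channels=16, target_channels=32)
out = conn(torch.randn(2, 32, 4, 8, 8), torch.randn(2, 16, 4, 8, 8))
print(out.shape)  # torch.Size([2, 32, 4, 8, 8])
```
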

SPIN: A High Speed, High Resolution Vision Dataset for Tracking and Action Recognition in Ping Pong

no code implementations • 13 Dec 2019 • Steven Schwarcz, Peng Xu, David D'Ambrosio, Juhana Kangaspunta, Anelia Angelova, Huong Phan, Navdeep Jaitly

The corpus consists of ping pong play with three main annotation streams that can be used to learn tracking and action recognition models: tracking of the ping pong ball, poses of humans in the videos, and the spin of the ball being hit by humans.

Action Recognition • Pose Estimation • +1
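
As a hypothetical illustration of the three annotation streams the abstract lists, a per-clip record might be structured along these lines. Every field name here is an assumption made for illustration, not the dataset's actual release format.

```python
# Hypothetical schema for SPIN's three annotation streams:
# ball tracking, human pose, and spin labels. Field names are assumptions;
# consult the dataset release for the real format.
from dataclasses import dataclass, field

@dataclass
class BallTrack:
    frame: int
    x: float  # ball center, pixels
    y: float

@dataclass
class HumanPose:
    frame: int
    keypoints: list  # [(x, y, visibility), ...] per joint

@dataclass
class SpinLabel:
    hit_frame: int
    spin: str  # e.g., "topspin" or "underspin"

@dataclass
class SpinClip:
    video_id: str
    ball_tracks: list = field(default_factory=list)
    poses: list = field(default_factory=list)
    spins: list = field(default_factory=list)

clip = SpinClip(video_id="rally_0001")
clip.ball_tracks.append(BallTrack(frame=0, x=412.0, y=133.5))
clip.spins.append(SpinLabel(hit_frame=12, spin="topspin"))
print(clip)
```
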
