Search Results for author: AJ Piergiovanni

Found 27 papers, 11 papers with code

AssembleNet++: Assembling Modality Representations via Attention Connections (Supplementary Material)

no code implementations • ECCV 2020 • Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network.

Activity Recognition

4D-Net for Learned Multi-Modal Alignment

no code implementations • 2 Sep 2021 • AJ Piergiovanni, Vincent Casser, Michael S. Ryoo, Anelia Angelova

We present 4D-Net, a 3D object detection approach which utilizes 3D point cloud and RGB sensing information, both over time.

3D Object Detection

Unsupervised Discovery of Actions in Instructional Videos

no code implementations • 28 Jun 2021 • AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo, Irfan Essa

In this paper we address the problem of automatically discovering atomic actions in an unsupervised manner from instructional videos.

TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?

no code implementations • 21 Jun 2021 • Michael S. Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, Anelia Angelova

In this paper, we introduce a novel visual representation learning approach which relies on a handful of adaptively learned tokens, and which is applicable to both image and video understanding tasks.

Action Classification · Image Classification +3
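
To make the "handful of adaptively learned tokens" idea concrete, here is a minimal TokenLearner-style sketch in PyTorch: each token is produced by a learned spatial attention map over the input feature grid. The module name, hyper-parameters, and 1×1-conv attention head are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch: reduce an (B, C, H, W) feature map to a few learned tokens
# via per-token spatial attention, in the spirit of TokenLearner.
import torch
import torch.nn as nn


class TokenLearnerSketch(nn.Module):  # hypothetical name
    def __init__(self, channels: int, num_tokens: int = 8):
        super().__init__()
        # One attention map per token, computed from the input features.
        self.attention = nn.Conv2d(channels, num_tokens, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax over spatial positions so each token is a convex
        # combination of the input feature vectors.
        attn = self.attention(x).flatten(2).softmax(dim=-1)  # (B, N, H*W)
        feats = x.flatten(2)                                  # (B, C, H*W)
        return torch.einsum("bnp,bcp->bnc", attn, feats)      # (B, N, C)


if __name__ == "__main__":
    layer = TokenLearnerSketch(channels=64, num_tokens=8)
    print(layer(torch.randn(2, 64, 14, 14)).shape)  # torch.Size([2, 8, 64])
```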

Unsupervised Action Segmentation for Instructional Videos

no code implementations • 7 Jun 2021 • AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo, Irfan Essa

In this paper we address the problem of automatically discovering atomic actions in an unsupervised manner from instructional videos, which are rarely annotated with atomic actions.

Action Segmentation

Adaptive Intermediate Representations for Video Understanding

no code implementations • 14 Apr 2021 • Juhana Kangaspunta, AJ Piergiovanni, Rico Jonschkowski, Michael Ryoo, Anelia Angelova

A common strategy for video understanding is to incorporate spatial and motion information by fusing features derived from RGB frames and optical flow.

Action Classification · Optical Flow Estimation +2

AssembleNet++: Assembling Modality Representations via Attention Connections

1 code implementation • 18 Aug 2020 • Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network.

Action Classification · Activity Recognition
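
As a rough illustration of attention connections between modality streams, here is a hedged sketch loosely in the spirit of AssembleNet++'s peer attention: one stream's pooled features produce a channel-wise gate applied to another stream. The module name and the global-average-pool-plus-linear gating are simplifying assumptions, not the paper's exact connectivity-learning mechanism.

```python
# Hedged sketch: channel attention from a "peer" modality stream gating a
# target stream's features, e.g. object-semantics features gating RGB features.
import torch
import torch.nn as nn


class PeerAttentionSketch(nn.Module):  # hypothetical name
    def __init__(self, peer_channels: int, target_channels: int):
        super().__init__()
        # Channel gate computed from the peer stream's pooled features.
        self.fc = nn.Linear(peer_channels, target_channels)

    def forward(self, target: torch.Tensor, peer: torch.Tensor) -> torch.Tensor:
        # target: (B, C_t, T, H, W); peer: (B, C_p, T, H, W).
        ctx = peer.mean(dim=(2, 3, 4))                # global average pool
        gate = torch.sigmoid(self.fc(ctx))            # (B, C_t), in (0, 1)
        return target * gate[:, :, None, None, None]  # channel-wise gating
```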

AttentionNAS: Spatiotemporal Attention Cell Search for Video Classification

no code implementations • ECCV 2020 • Xiaofang Wang, Xuehan Xiong, Maxim Neumann, AJ Piergiovanni, Michael S. Ryoo, Anelia Angelova, Kris M. Kitani, Wei Hua

The discovered attention cells can be seamlessly inserted into existing backbone networks, e.g., I3D or S3D, and improve video classification accuracy by more than 2% on both the Kinetics-600 and MiT datasets.

Classification · General Classification +1

AViD Dataset: Anonymized Videos from Diverse Countries

1 code implementation • NeurIPS 2020 • AJ Piergiovanni, Michael S. Ryoo

We confirm that most of the existing video datasets are statistically biased to only capture action videos from a limited number of countries.

Action Classification · Action Detection +1

Tiny Video Networks

3 code implementations • 15 Oct 2019 • AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo

Video understanding is a challenging problem with great impact on the abilities of autonomous agents working in the real world.

Video Understanding

Model-based Behavioral Cloning with Future Image Similarity Learning

1 code implementation • 8 Oct 2019 • Alan Wu, AJ Piergiovanni, Michael S. Ryoo

We present a visual imitation learning framework that enables learning of robot action policies solely based on expert samples without any robot trials.

Imitation Learning

Unseen Action Recognition with Unpaired Adversarial Multimodal Learning

no code implementations • ICLR 2019 • AJ Piergiovanni, Michael S. Ryoo

In this paper, we present a method to learn a joint multimodal representation space that allows for the recognition of unseen activities in videos.

Action Recognition · General Classification

Differentiable Grammars for Videos

no code implementations • 1 Feb 2019 • AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo

This paper proposes a novel algorithm which learns a formal regular grammar from real-world continuous data, such as videos.

Representation Flow for Action Recognition

4 code implementations • CVPR 2019 • AJ Piergiovanni, Michael S. Ryoo

Our representation flow layer is a fully-differentiable layer designed to capture the 'flow' of any representation channel within a convolutional neural network for action recognition.

Action Classification · Action Recognition +4
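
The core idea, a fully-differentiable flow layer, can be sketched as a few unrolled, learnable flow-estimation iterations applied directly to feature channels. The paper unrolls TV-L1 optical flow updates; the sketch below is a greatly simplified stand-in (gradient-descent updates on a brightness-constancy residual with an average-pooling smoothness term), so all names and parameters are illustrative assumptions.

```python
# Hedged sketch: unrolled, differentiable flow iterations over feature maps,
# in the spirit of a representation-flow layer (NOT the paper's TV-L1 scheme).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FlowLayerSketch(nn.Module):  # hypothetical name
    def __init__(self, n_iter: int = 5):
        super().__init__()
        self.n_iter = n_iter
        # Learnable step size and smoothness weight for the unrolled updates.
        self.step = nn.Parameter(torch.tensor(0.25))
        self.smooth = nn.Parameter(torch.tensor(0.1))
        # Fixed central-difference kernel for spatial gradients.
        self.register_buffer("kx", torch.tensor([[[[-0.5, 0.0, 0.5]]]]))

    def grads(self, x: torch.Tensor):
        c = x.shape[1]
        gx = F.conv2d(x, self.kx.expand(c, 1, 1, 3).contiguous(),
                      padding=(0, 1), groups=c)
        gy = F.conv2d(x, self.kx.transpose(2, 3).expand(c, 1, 3, 1).contiguous(),
                      padding=(1, 0), groups=c)
        return gx, gy

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        # f1, f2: consecutive feature maps, each (B, C, H, W).
        ix, iy = self.grads(f2)
        it = f2 - f1
        u = torch.zeros_like(f1)
        v = torch.zeros_like(f1)
        for _ in range(self.n_iter):
            # Brightness-constancy residual and gradient-descent update.
            rho = ix * u + iy * v + it
            u = u - self.step * rho * ix
            v = v - self.step * rho * iy
            # Crude smoothness term: pull the flow toward its local average.
            u = u + self.smooth * (F.avg_pool2d(u, 3, 1, 1) - u)
            v = v + self.smooth * (F.avg_pool2d(v, 3, 1, 1) - v)
        return torch.cat([u, v], dim=1)  # flow-like features, (B, 2C, H, W)
```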

Learning Multimodal Representations for Unseen Activities

1 code implementation • 21 Jun 2018 • AJ Piergiovanni, Michael S. Ryoo

We present a method to learn a joint multimodal representation space that enables recognition of unseen activities in videos.

General Classification · Temporal Action Localization

Learning Real-World Robot Policies by Dreaming

no code implementations • 20 May 2018 • AJ Piergiovanni, Alan Wu, Michael S. Ryoo

Learning to control robots directly from images is a primary challenge in robotics.

Fine-grained Activity Recognition in Baseball Videos

3 code implementations • 9 Apr 2018 • AJ Piergiovanni, Michael S. Ryoo

In this paper, we introduce a challenging new dataset, MLB-YouTube, designed for fine-grained activity detection.

Action Detection · Activity Detection +3

Temporal Gaussian Mixture Layer for Videos

1 code implementation • ICLR 2019 • AJ Piergiovanni, Michael S. Ryoo

We introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture longer-term temporal information in continuous activity videos.

Action Detection · Activity Detection
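
The TGM idea, temporal convolution kernels parameterised as mixtures of Gaussians with learnable centres, widths, and soft-attention mixing weights, can be sketched as below. Shapes, defaults, and the single-channel input are illustrative assumptions for a minimal example.

```python
# Hedged sketch of a Temporal Gaussian Mixture (TGM)-style layer: build 1D
# temporal kernels from learnable Gaussians, then apply them with conv1d.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TGMLayerSketch(nn.Module):  # hypothetical name
    def __init__(self, n_gaussians: int = 4, n_filters: int = 8, length: int = 15):
        super().__init__()
        self.length = length
        # Per-Gaussian centre (mapped into [-1, 1]) and log-width.
        self.center = nn.Parameter(torch.randn(n_gaussians))
        self.log_sigma = nn.Parameter(torch.zeros(n_gaussians))
        # Soft-attention weights mixing Gaussians into the output filters.
        self.mix = nn.Parameter(torch.randn(n_filters, n_gaussians))

    def kernels(self) -> torch.Tensor:
        t = torch.linspace(-1.0, 1.0, self.length, device=self.center.device)
        mu = torch.tanh(self.center)[:, None]        # (G, 1)
        sigma = torch.exp(self.log_sigma)[:, None]   # (G, 1)
        g = torch.exp(-0.5 * ((t[None, :] - mu) / sigma) ** 2)
        g = g / g.sum(dim=-1, keepdim=True)          # normalise each Gaussian
        w = torch.softmax(self.mix, dim=-1)          # convex mixture weights
        return w @ g                                 # (n_filters, length)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1, T) per-frame features treated as one temporal channel.
        k = self.kernels()[:, None, :]               # (n_filters, 1, length)
        return F.conv1d(x, k, padding=self.length // 2)  # (B, n_filters, T)
```

Because each kernel is defined by only a few Gaussian parameters rather than `length` free weights, long temporal extents can be covered cheaply, which is the point of the layer.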

Learning Latent Super-Events to Detect Multiple Activities in Videos

2 code implementations • CVPR 2018 • AJ Piergiovanni, Michael S. Ryoo

In this paper, we introduce the concept of learning latent super-events from activity videos, and present how it benefits activity detection in continuous videos.

Action Detection · Activity Detection

Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters

1 code implementation • 26 May 2016 • AJ Piergiovanni, Chenyou Fan, Michael S. Ryoo

In this paper, we introduce the concept of temporal attention filters and describe how they can be used for human activity recognition from videos.

Action Classification · Action Recognition In Videos +1
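
A temporal attention filter can be sketched as a bank of Gaussian filters with a learnable centre, stride, and width that pool a variable-length sequence of per-frame features into a fixed-size summary. The parameterisation below (tanh-squashed centre, log-stride, log-width) is an assumption for illustration, not the paper's exact formulation.

```python
# Hedged sketch: a Gaussian temporal attention filter bank that maps
# (B, T, D) frame features to a fixed (B, n_points, D) representation.
import torch
import torch.nn as nn


class TemporalAttentionFilterSketch(nn.Module):  # hypothetical name
    def __init__(self, n_points: int = 8):
        super().__init__()
        self.n_points = n_points
        # Learnable centre, log-stride and log-width of the filter bank,
        # defined relative to the video length at forward time.
        self.center = nn.Parameter(torch.tensor(0.0))
        self.log_stride = nn.Parameter(torch.tensor(0.0))
        self.log_sigma = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        idx = torch.arange(self.n_points, device=x.device, dtype=x.dtype)
        # Place n_points Gaussian centres around `center`, spaced by the stride.
        centers = (torch.tanh(self.center) + 1.0) * (t - 1) / 2 \
            + (idx - (self.n_points - 1) / 2) * torch.exp(self.log_stride)
        sigma = torch.exp(self.log_sigma)
        frames = torch.arange(t, device=x.device, dtype=x.dtype)
        # (n_points, T) Gaussian weights, normalised over time.
        w = torch.exp(-0.5 * ((frames[None, :] - centers[:, None]) / sigma) ** 2)
        w = w / (w.sum(dim=-1, keepdim=True) + 1e-8)
        return torch.einsum("nt,btd->bnd", w, x)


if __name__ == "__main__":
    filt = TemporalAttentionFilterSketch(n_points=8)
    print(filt(torch.randn(2, 40, 128)).shape)  # torch.Size([2, 8, 128])
```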
