Search Results for author: Michael S. Ryoo

Found 43 papers, 14 papers with code

AssembleNet++: Assembling Modality Representations via Attention Connections (Supplementary Material)

no code implementations ECCV 2020 Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network.

Activity Recognition

4D-Net for Learned Multi-Modal Alignment

no code implementations 2 Sep 2021 AJ Piergiovanni, Vincent Casser, Michael S. Ryoo, Anelia Angelova

We present 4D-Net, a 3D object detection approach, which utilizes 3D Point Cloud and RGB sensing information, both in time.

3D Object Detection

Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning

no code implementations 2 Aug 2021 Jinghuan Shang, Michael S. Ryoo

Third-person imitation learning (TPIL) is the concept of learning action policies by observing other agents in a third-person view (TPV), similar to what humans do.

Imitation Learning · Representation Learning

Unsupervised Discovery of Actions in Instructional Videos

no code implementations 28 Jun 2021 AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo, Irfan Essa

In this paper, we address the problem of automatically discovering atomic actions in an unsupervised manner from instructional videos.

TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?

no code implementations 21 Jun 2021 Michael S. Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, Anelia Angelova

In this paper, we introduce a novel visual representation learning approach that relies on a handful of adaptively learned tokens and is applicable to both image and video understanding tasks.

Action Classification · Image Classification +3
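As a rough illustration of the entry above (not the paper's implementation; the projection weights below are a random stand-in for the learned attention parameters, and NumPy is assumed), each token can be produced by a spatial attention map that softly pools the feature grid:

```python
import numpy as np

def token_learner(feature_map, num_tokens=8, rng=None):
    """Hypothetical sketch of TokenLearner-style adaptive tokenization.

    feature_map: (H, W, C) array. For each of `num_tokens` tokens, a
    spatial attention map over the H*W positions decides where to pool,
    yielding (num_tokens, C) token vectors.
    """
    H, W, C = feature_map.shape
    rng = np.random.default_rng(0) if rng is None else rng
    flat = feature_map.reshape(H * W, C)          # (HW, C)
    # In the paper these attention maps are learned; random projection
    # weights stand in here purely for illustration.
    w = rng.standard_normal((C, num_tokens))      # (C, S)
    logits = flat @ w                             # (HW, S)
    attn = np.exp(logits - logits.max(axis=0))
    attn = attn / attn.sum(axis=0)                # softmax over space
    tokens = attn.T @ flat                        # (S, C)
    return tokens
```

A downstream transformer would then attend over the handful of tokens instead of the full H*W grid, which is where the efficiency gain comes from.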

Unsupervised Action Segmentation for Instructional Videos

no code implementations 7 Jun 2021 AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo, Irfan Essa

In this paper, we address the problem of automatically discovering atomic actions in an unsupervised manner from instructional videos, which are rarely annotated with atomic actions.

Action Segmentation

Visionary: Vision architecture discovery for robot learning

no code implementations 26 Mar 2021 Iretiayo Akinola, Anelia Angelova, Yao Lu, Yevgen Chebotar, Dmitry Kalashnikov, Jacob Varley, Julian Ibarz, Michael S. Ryoo

We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low dimension action inputs and high dimensional visual inputs.

Neural Architecture Search

Coarse-Fine Networks for Temporal Activity Detection in Videos

1 code implementation CVPR 2021 Kumara Kahatapitiya, Michael S. Ryoo

In this paper, we introduce Coarse-Fine Networks, a two-stream architecture which benefits from different abstractions of temporal resolution to learn better video representations for long-term motion.

Action Detection · Activity Detection
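A minimal sketch of the two-stream idea above (hypothetical shapes and fusion rule; the paper's actual streams use learned temporal resampling): a fine stream keeps full temporal resolution while a coarse stream summarizes a subsampled view, and the summary is broadcast back into the fine stream.

```python
import numpy as np

def coarse_fine_fuse(clip, stride=4):
    """Sketch of a Coarse-Fine-style two-stream split.

    clip: (T, C) per-frame features. The fine stream keeps all T steps;
    the coarse stream averages a temporally subsampled view into one
    long-term summary, which is fused back by broadcasting.
    """
    fine = clip                              # (T, C) full temporal resolution
    coarse = clip[::stride].mean(axis=0)     # (C,) long-term summary
    return fine + coarse[None, :]            # (T, C) fused features
```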

Reducing Inference Latency with Concurrent Architectures for Image Recognition

no code implementations 13 Nov 2020 Ramyad Hadidi, Jiashen Cao, Michael S. Ryoo, Hyesoon Kim

Satisfying the high computation demand of modern deep learning architectures while achieving low inference latency is challenging.

Neural Architecture Search

AssembleNet++: Assembling Modality Representations via Attention Connections

1 code implementation 18 Aug 2020 Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network.

Action Classification · Activity Recognition

AttentionNAS: Spatiotemporal Attention Cell Search for Video Classification

no code implementations ECCV 2020 Xiaofang Wang, Xuehan Xiong, Maxim Neumann, AJ Piergiovanni, Michael S. Ryoo, Anelia Angelova, Kris M. Kitani, Wei Hua

The discovered attention cells can be seamlessly inserted into existing backbone networks, e.g., I3D or S3D, and improve video classification accuracy by more than 2% on both Kinetics-600 and MiT datasets.

Classification · General Classification +1

AViD Dataset: Anonymized Videos from Diverse Countries

1 code implementation NeurIPS 2020 AJ Piergiovanni, Michael S. Ryoo

We confirm that most of the existing video datasets are statistically biased to only capture action videos from a limited number of countries.

Action Classification · Action Detection +1

LCP: A Low-Communication Parallelization Method for Fast Neural Network Inference in Image Recognition

no code implementations 13 Mar 2020 Ramyad Hadidi, Bahar Asgari, Jiashen Cao, Younmin Bae, Da Eun Shim, Hyojong Kim, Sung-Kyu Lim, Michael S. Ryoo, Hyesoon Kim

To benefit from available compute resources with low communication overhead, we propose the first DNN parallelization method for reducing the communication overhead in a distributed system.

Quantization

Password-conditioned Anonymization and Deanonymization with Face Identity Transformers

1 code implementation 26 Nov 2019 Xiuye Gu, Weixin Luo, Michael S. Ryoo, Yong Jae Lee

Cameras are prevalent in our daily lives, and enable many useful systems built upon computer vision technologies such as smart cameras and home robots for service applications.

Tiny Video Networks

3 code implementations 15 Oct 2019 AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo

Video understanding is a challenging problem with great impact on the abilities of autonomous agents working in the real world.

Video Understanding

Model-based Behavioral Cloning with Future Image Similarity Learning

1 code implementation 8 Oct 2019 Alan Wu, AJ Piergiovanni, Michael S. Ryoo

We present a visual imitation learning framework that enables learning of robot action policies solely based on expert samples without any robot trials.

Imitation Learning

Unseen Action Recognition with Unpaired Adversarial Multimodal Learning

no code implementations ICLR 2019 AJ Piergiovanni, Michael S. Ryoo

In this paper, we present a method to learn a joint multimodal representation space that allows for the recognition of unseen activities in videos.

Action Recognition · General Classification

Differentiable Grammars for Videos

no code implementations 1 Feb 2019 AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo

This paper proposes a novel algorithm which learns a formal regular grammar from real-world continuous data, such as videos.

Representation Flow for Action Recognition

4 code implementations CVPR 2019 AJ Piergiovanni, Michael S. Ryoo

Our representation flow layer is a fully-differentiable layer designed to capture the 'flow' of any representation channel within a convolutional neural network for action recognition.

Action Classification · Action Recognition +4
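A hedged sketch of what such a layer computes, shown as one unrolled update of a TV-L1-style flow iteration (variable names and constants are illustrative, not the paper's; NumPy assumed). Repeating the same update for a fixed number of steps keeps everything differentiable end to end:

```python
import numpy as np

def flow_layer_step(frame1, frame2, flow_u, flow_v, lam=0.1, tau=0.25):
    """One iterative update of a flow estimate between two single-channel
    feature maps of shape (H, W), sketching the idea behind a
    representation-flow layer."""
    # spatial gradients of the second frame
    gx = np.gradient(frame2, axis=1)
    gy = np.gradient(frame2, axis=0)
    # residual of the linearized brightness-constancy constraint
    rho = frame2 - frame1 + gx * flow_u + gy * flow_v
    # gradient-descent step on the data term (lam avoids division by zero)
    denom = gx ** 2 + gy ** 2 + lam
    flow_u = flow_u - tau * rho * gx / denom
    flow_v = flow_v - tau * rho * gy / denom
    return flow_u, flow_v
```

In the paper this update operates on learned representation channels rather than raw pixels, so the "flow" captures feature motion rather than optical flow proper.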

Learning Multimodal Representations for Unseen Activities

1 code implementation 21 Jun 2018 AJ Piergiovanni, Michael S. Ryoo

We present a method to learn a joint multimodal representation space that enables recognition of unseen activities in videos.

General Classification · Temporal Action Localization

Learning Real-World Robot Policies by Dreaming

no code implementations 20 May 2018 AJ Piergiovanni, Alan Wu, Michael S. Ryoo

Learning to control robots directly based on images is a primary challenge in robotics.

Fine-grained Activity Recognition in Baseball Videos

3 code implementations 9 Apr 2018 AJ Piergiovanni, Michael S. Ryoo

In this paper, we introduce a challenging new dataset, MLB-YouTube, designed for fine-grained activity detection.

Action Detection · Activity Detection +3

Learning to Anonymize Faces for Privacy Preserving Action Detection

1 code implementation ECCV 2018 Zhongzheng Ren, Yong Jae Lee, Michael S. Ryoo

The end result is a video anonymizer that performs pixel-level modifications to anonymize each person's face, with minimal effect on action detection performance.

Action Detection

Joint Person Segmentation and Identification in Synchronized First- and Third-person Videos

no code implementations ECCV 2018 Mingze Xu, Chenyou Fan, Yuchen Wang, Michael S. Ryoo, David J. Crandall

In this paper, we wish to solve two specific problems: (1) given two or more synchronized third-person videos of a scene, produce a pixel-level segmentation of each visible person and identify corresponding people across different views (i.e., determine who in camera A corresponds with whom in camera B), and (2) given one or more synchronized third-person videos as well as a first-person video taken by a mobile or wearable camera, segment and identify the camera wearer in the third-person videos.

Temporal Gaussian Mixture Layer for Videos

1 code implementation ICLR 2019 AJ Piergiovanni, Michael S. Ryoo

We introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture longer-term temporal information in continuous activity videos.

Action Detection · Activity Detection
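The temporal kernel construction described above can be sketched as follows (a minimal illustration with made-up parameter values, not the paper's code; NumPy assumed): each kernel is a soft mixture of Gaussians over time, so only the Gaussian centers, widths, and mixing weights need to be learned.

```python
import numpy as np

def tgm_kernel(length, centers, widths, mix):
    """Build a Temporal-Gaussian-Mixture-style 1D kernel (sketch).

    centers, widths: (M,) Gaussian parameters over `length` time steps.
    mix: (M,) unnormalized mixing weights. Returns a (length,) kernel
    that sums to 1 and can be used for 1D temporal convolution.
    """
    t = np.arange(length)[None, :]             # (1, L) time axis
    c = np.asarray(centers)[:, None]           # (M, 1)
    s = np.asarray(widths)[:, None]
    g = np.exp(-0.5 * ((t - c) / s) ** 2)      # (M, L) Gaussians
    g = g / g.sum(axis=1, keepdims=True)       # each Gaussian sums to 1
    w = np.exp(mix) / np.exp(mix).sum()        # softmax over mixtures
    return (np.asarray(w)[:, None] * g).sum(axis=0)
```

Because a long kernel is parameterized by only a few scalars, the layer can cover much longer temporal extents than a dense temporal convolution with the same parameter count.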

Learning Latent Super-Events to Detect Multiple Activities in Videos

2 code implementations CVPR 2018 AJ Piergiovanni, Michael S. Ryoo

In this paper, we introduce the concept of learning latent super-events from activity videos, and present how it benefits activity detection in continuous videos.

Action Detection · Activity Detection

Extreme Low Resolution Activity Recognition with Multi-Siamese Embedding Learning

no code implementations 3 Aug 2017 Michael S. Ryoo, Kiyoon Kim, Hyun Jong Yang

This paper presents an approach for recognizing human activities from extreme low resolution (e.g., 16x12) videos.

Activity Recognition

Forecasting Hands and Objects in Future Frames

no code implementations 20 May 2017 Chenyou Fan, Jangwon Lee, Michael S. Ryoo

The key idea is that (1) an intermediate representation of a convolutional object recognition model abstracts scene information in its frame and that (2) we can predict (i.e., regress) such representations corresponding to the future frames based on that of the current frame.

Object Detection · Object Recognition

Identifying First-person Camera Wearers in Third-person Videos

no code implementations CVPR 2017 Chenyou Fan, Jang-Won Lee, Mingze Xu, Krishna Kumar Singh, Yong Jae Lee, David J. Crandall, Michael S. Ryoo

We consider scenarios in which we wish to perform joint scene understanding, object tracking, activity recognition, and other tasks in environments in which multiple people are wearing body-worn cameras while a third-person static camera also captures the scene.

Activity Recognition · Object Tracking +1

Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression

no code implementations 3 Mar 2017 Jang-Won Lee, Michael S. Ryoo

We design a new approach that allows robot learning of new activities from unlabeled human example videos.

Object Detection

Learning Social Affordance Grammar from Videos: Transferring Human Interactions to Human-Robot Interactions

no code implementations 1 Mar 2017 Tianmin Shu, Xiaofeng Gao, Michael S. Ryoo, Song-Chun Zhu

In this paper, we present a general framework for learning social affordance grammar as a spatiotemporal AND-OR graph (ST-AOG) from RGB-D videos of human interactions, and transfer the grammar to humanoids to enable a real-time motion inference for human-robot interaction (HRI).

Human-Robot Interaction

Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters

1 code implementation 26 May 2016 AJ Piergiovanni, Chenyou Fan, Michael S. Ryoo

In this paper, we newly introduce the concept of temporal attention filters, and describe how they can be used for human activity recognition from videos.

Action Classification · Action Recognition In Videos +1

Privacy-Preserving Human Activity Recognition from Extreme Low Resolution

no code implementations 12 Apr 2016 Michael S. Ryoo, Brandon Rothrock, Charles Fleming, Hyun Jong Yang

We introduce the paradigm of inverse super resolution (ISR), the concept of learning the optimal set of image transformations to generate multiple low-resolution (LR) training videos from a single video.

Activity Recognition · Super-Resolution
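The inverse super resolution idea above can be sketched minimally (the transform set is learned in the paper; the fixed pixel shifts here are a hypothetical stand-in, and NumPy is assumed): one high-resolution frame yields several distinct low-resolution training views.

```python
import numpy as np

def inverse_super_resolution(frame, out_size=(16, 12),
                             shifts=((0, 0), (1, 0), (0, 1), (1, 1))):
    """Sketch of inverse super resolution (ISR): generate several
    low-resolution views of one high-resolution frame by applying small
    transforms (here, pixel shifts) before average-pool downsampling.

    frame: (H, W) array; returns (len(shifts), h, w).
    """
    H, W = frame.shape
    h, w = out_size
    lows = []
    for dy, dx in shifts:
        shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        # crop to a multiple of the pooling factor, then average-pool
        pooled = shifted[:h * (H // h), :w * (W // w)]
        pooled = pooled.reshape(h, H // h, w, W // w).mean(axis=(1, 3))
        lows.append(pooled)
    return np.stack(lows)
```

Each shifted-then-downsampled view aliases differently, so the set of LR videos carries more training signal than a single naive downsample of the same source.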

Multi-Type Activity Recognition in Robot-Centric Scenarios

no code implementations 9 Jul 2015 Ilaria Gori, J. K. Aggarwal, Larry Matthies, Michael S. Ryoo

Activity recognition is very useful in scenarios where robots interact with, monitor or assist humans.

Activity Recognition
