Search Results for author: Yanyi Zhang

Found 7 papers, 0 papers with code

VidTr: Video Transformer Without Convolutions

no code implementations • ICCV 2021 • Yanyi Zhang, Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Biagio Brattoli, Hao Chen, Ivan Marsic, Joseph Tighe

We first introduce the vanilla video transformer and show that the transformer module is able to perform spatio-temporal modeling from raw pixels, but with heavy memory usage.

Action Classification • Action Recognition

Multi-Label Activity Recognition using Activity-specific Features and Activity Correlations

no code implementations • CVPR 2021 • Yanyi Zhang, Xinyu Li, Ivan Marsic

Multi-label activity recognition aims to recognize multiple activities that are performed simultaneously or sequentially in each video.

Activity Recognition • Video Classification

Progress Estimation and Phase Detection for Sequential Processes

no code implementations • 28 Feb 2017 • Xinyu Li, Yanyi Zhang, Jianyu Zhang, Yueyang Chen, Shuhong Chen, Yue Gu, Moliang Zhou, Richard A. Farneth, Ivan Marsic, Randall S. Burd

For the Olympic swimming dataset, our system achieved an accuracy of 88%, an F1-score of 0.58, a completeness estimation error of 6.3%, and a remaining-time estimation error of 2.9 minutes.

Activity Recognition • Multimodal Deep Learning

Online People Tracking and Identification with RFID and Kinect

no code implementations • 10 Feb 2017 • Xinyu Li, Yanyi Zhang, Ivan Marsic, Randall S. Burd

We introduce a novel, accurate and practical system for real-time people tracking and identification.

Concurrent Activity Recognition with Multimodal CNN-LSTM Structure

no code implementations • 6 Feb 2017 • Xinyu Li, Yanyi Zhang, Jianyu Zhang, Shuhong Chen, Ivan Marsic, Richard A. Farneth, Randall S. Burd

Our system is the first to address concurrent activity recognition with multisensory data using a single model, which is scalable, simple to train, and easy to deploy.

Concurrent Activity Recognition • Decision Making
