Multimodal Activity Recognition

12 papers with code • 10 benchmarks • 7 datasets

Multimodal activity recognition is the task of identifying human activities by combining complementary sensing modalities, such as RGB video, depth, skeleton, and wearable inertial sensors.

Latest papers with no code

MuMu: Cooperative Multitask Learning-based Guided Multimodal Fusion

no code yet • AAAI 2022

However, it is challenging to extract robust multimodal representations due to the heterogeneous characteristics of data from multimodal sensors and disparate human activities, especially in the presence of noisy and misaligned sensor data.
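Neither MuMu's cooperative multitask scheme nor its guided fusion is reproduced here; as a point of reference, a minimal feature-level fusion baseline for two sensor streams might look like the sketch below (all dimensions, modality choices, and class counts are illustrative assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn

class LateFusionHAR(nn.Module):
    """Minimal two-modality fusion baseline (not MuMu itself).

    Each modality gets its own encoder; the final hidden states are
    concatenated and classified jointly. Dimensions are assumptions."""
    def __init__(self, skel_dim=75, imu_dim=6, hidden=128, n_classes=12):
        super().__init__()
        self.skel_enc = nn.GRU(skel_dim, hidden, batch_first=True)
        self.imu_enc = nn.GRU(imu_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, skel, imu):
        # skel: (B, T, 75) joint coordinates; imu: (B, T, 6) accel + gyro
        _, h_s = self.skel_enc(skel)   # final hidden state per modality
        _, h_i = self.imu_enc(imu)
        fused = torch.cat([h_s[-1], h_i[-1]], dim=-1)
        return self.head(fused)

model = LateFusionHAR()
logits = model(torch.randn(4, 50, 75), torch.randn(4, 50, 6))  # (4, 12)
```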

Multi-GAT: A Graphical Attention-based Hierarchical Multimodal Representation Learning Approach for Human Activity Recognition

no code yet • IEEE Robotics and Automation Letters 2021

Finally, the experimental results with noisy sensor data indicate that Multi-GAT consistently outperforms all the evaluated baselines.
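Multi-GAT's hierarchical architecture is not spelled out in the snippet; the sketch below only illustrates the core ingredient the title names, a single graph-attention layer applied to a small, fully connected graph of modality embeddings (shapes and the modality set are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """Single-head graph attention over node (modality) embeddings.

    A generic GAT layer, not Multi-GAT's exact architecture; the
    fully connected modality graph is an assumption."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x):
        # x: (N, in_dim), one embedding per modality node
        h = self.W(x)                                  # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.repeat_interleave(n, 0),  # all node pairs
                           h.repeat(n, 1)], dim=-1)    # (N*N, 2*out_dim)
        e = F.leaky_relu(self.a(pairs)).view(n, n)     # attention logits
        alpha = F.softmax(e, dim=-1)                   # per-node weights
        return alpha @ h                               # attended embeddings

layer = SimpleGATLayer(128, 64)
out = layer(torch.randn(3, 128))  # e.g., RGB, skeleton, IMU nodes -> (3, 64)
```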

MMAct: A Large-Scale Dataset for Cross Modal Human Action Understanding

no code yet • ICCV 2019

Unlike vision modalities, body-worn sensors or passive sensing can avoid failures of action understanding caused by vision-related challenges, e.g., occlusion and appearance variation.

Activity recognition using ST-GCN with 3D motion data

no code yet • UbiComp/ISWC '19 Adjunct, 2019

A recognition model with a tree-structured graph was then created.
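ST-GCN aggregates joint features along the edges of a skeleton graph; with a tree-structured graph, the adjacency follows a kinematic parent list. A rough sketch, using a made-up 7-joint tree rather than the paper's actual motion-capture layout:

```python
import numpy as np

# Hypothetical 7-joint kinematic tree: index -> parent (-1 = root).
# A real skeleton (e.g., 25 NTU joints) would use its own parent list.
PARENTS = [-1, 0, 1, 1, 0, 0, 4]

def tree_adjacency(parents):
    """Symmetric adjacency (with self-loops) for a tree-structured graph,
    row-normalized (D^-1 A) as commonly used to mix ST-GCN joint features."""
    n = len(parents)
    A = np.eye(n)
    for child, parent in enumerate(parents):
        if parent >= 0:
            A[child, parent] = A[parent, child] = 1.0
    return A / A.sum(axis=1, keepdims=True)

A = tree_adjacency(PARENTS)
feats = np.random.randn(7, 3)   # (joints, xyz) for one frame
aggregated = A @ feats          # each joint mixes with its tree neighbors
```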

Nurse care activity recognition challenge: summary and results

no code yet • UbiComp/ISWC '19 Proceedings, 2019

To promote research in such scenarios, we organized the Open Lab Nursing Activity Recognition Challenge focusing on the recognition of complex activities related to the nursing domain.

Can a simple approach identify complex nurse care activity?

no code yet • UbiComp/ISWC '19 Proceedings, 2019

Over the last two decades, increasingly complex methods have been developed to identify human activities using various types of sensors, e.g., motion capture, accelerometer, and gyroscope data.
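A "simple approach" in this setting usually means hand-crafted statistics over sliding sensor windows fed to a classical classifier. A minimal sketch of such a baseline on accelerometer data (window size, features, and class count are illustrative, not taken from the challenge):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=128, step=64):
    """Mean/std/min/max per axis over sliding windows of a (T, 3) accel signal.
    Window and step sizes are illustrative, not from the challenge."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# Toy data standing in for real nurse-activity recordings.
rng = np.random.default_rng(0)
X = window_features(rng.standard_normal((1024, 3)))
y = rng.integers(0, 6, size=len(X))  # 6 hypothetical activity classes
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```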

Autonomous Human Activity Classification from Ego-vision Camera and Accelerometer Data

no code yet • 28 May 2019

For instance, sitting can be detected from IMU data, but IMU data alone cannot determine whether the subject sat on a chair or a sofa, or where the subject is.
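The complementarity the snippet describes can be made concrete with a toy decision-level fusion rule: the IMU supplies a coarse activity label, and the ego-vision stream supplies scene context that refines it. Both classifiers and the label sets below are hypothetical placeholders, not the paper's models:

```python
# Toy decision-level fusion: refine a coarse IMU activity label with
# scene context from an ego-vision model. Labels are made-up examples.

def fuse(imu_activity: str, scene_objects: set[str]) -> str:
    """Map a coarse IMU label plus detected objects to a finer activity."""
    if imu_activity == "sitting":
        if "chair" in scene_objects:
            return "sitting_on_chair"
        if "sofa" in scene_objects:
            return "sitting_on_sofa"
    return imu_activity  # fall back to the coarse label

print(fuse("sitting", {"sofa", "tv"}))  # -> sitting_on_sofa
print(fuse("walking", {"corridor"}))    # -> walking
```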

STAR-Net: Action Recognition using Spatio-Temporal Activation Reprojection

no code yet • 26 Feb 2019

As such, there has been recent interest in human action recognition using low-cost, readily available RGB cameras via deep convolutional neural networks.

Adaptive Feature Processing for Robust Human Activity Recognition on a Novel Multi-Modal Dataset

no code yet • 9 Jan 2019

In this paper, we present a novel, multi-modal sensor dataset that encompasses nine indoor activities, performed by 16 participants, and captured by four types of sensors that are commonly used in indoor applications and autonomous vehicles.

Action Machine: Rethinking Action Recognition in Trimmed Videos

no code yet • 14 Dec 2018

On NTU RGB-D, Action Machine achieves state-of-the-art performance, with top-1 accuracies of 97.2% and 94.3% on the cross-view and cross-subject benchmarks, respectively.