Multimodal Activity Recognition
12 papers with code • 10 benchmarks • 7 datasets
Latest papers with no code
MuMu: Cooperative Multitask Learning-based Guided Multimodal Fusion
However, it is challenging to extract robust multimodal representations due to the heterogeneous characteristics of data from multimodal sensors and disparate human activities, especially in the presence of noisy and misaligned sensor data.
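The snippet above concerns fusing heterogeneous sensor streams. As a minimal illustration (not MuMu's actual guided-fusion method), a weighted late-fusion step can down-weight modalities known to be noisy before combining their feature vectors; the embeddings and weights below are invented for the example:

```python
def fuse(features, weights):
    """Weighted average of equal-length per-modality feature vectors.

    A higher weight lets a more reliable modality dominate the
    fused representation; noisy modalities get smaller weights.
    """
    assert len(features) == len(weights) and features
    total = sum(weights)
    dim = len(features[0])
    fused = [0.0] * dim
    for vec, w in zip(features, weights):
        for i, v in enumerate(vec):
            fused[i] += w * v / total
    return fused

accel = [0.2, 0.8, 0.1]  # hypothetical accelerometer embedding
gyro = [0.4, 0.6, 0.3]   # hypothetical gyroscope embedding
print(fuse([accel, gyro], weights=[0.7, 0.3]))
```

In practice such weights would be learned rather than fixed, which is closer in spirit to the guided fusion the paper describes.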
Multi-GAT: A Graphical Attention-based Hierarchical Multimodal Representation Learning Approach for Human Activity Recognition
Finally, the experimental results with noisy sensor data indicate that Multi-GAT consistently outperforms all the evaluated baselines.
MMAct: A Large-Scale Dataset for Cross Modal Human Action Understanding
Unlike vision modalities, body-worn sensors or passive sensing can avoid the failures of action understanding caused by vision-related challenges, e.g., occlusion and appearance variation.

Activity recognition using ST-GCN with 3D motion data
A recognition model based on a tree-structured graph was then constructed.
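ST-GCN-style models operate on a normalized skeleton adjacency matrix. The sketch below builds one for a tiny hypothetical 5-joint tree (not the paper's exact graph), adding self-loops and applying the symmetric normalization D^(-1/2)(A + I)D^(-1/2) commonly used with graph convolutions:

```python
# Hypothetical 5-joint skeleton tree; edges are parent-child bone connections.
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
n = 5

# Build the binary adjacency matrix.
A = [[0.0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0
for i in range(n):
    A[i][i] = 1.0  # self-loops, i.e. A + I

# Symmetric degree normalization: D^(-1/2) (A + I) D^(-1/2).
deg = [sum(row) for row in A]
A_norm = [
    [A[i][j] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n)]
    for i in range(n)
]
print(A_norm[0][1])
```

Each spatial graph-convolution layer would then multiply joint features by `A_norm`, so information flows along the bone structure of the skeleton.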
Nurse care activity recognition challenge: summary and results
To promote research in such scenarios, we organized the Open Lab Nursing Activity Recognition Challenge focusing on the recognition of complex activities related to the nursing domain.
Can a simple approach identify complex nurse care activity?
For the last two decades, increasingly complex methods have been developed to identify human activities using various types of sensors, e.g., data from motion-capture systems, accelerometers, and gyroscopes.
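The "simple approach" the title alludes to is typically a sliding-window feature baseline over raw sensor streams. A minimal sketch for a single-axis accelerometer signal (window size, step, and the sample stream are illustrative, not from the challenge):

```python
def window_features(signal, size, step):
    """Mean and variance per sliding window over a 1-D sensor stream."""
    feats = []
    for start in range(0, len(signal) - size + 1, step):
        w = signal[start:start + size]
        mean = sum(w) / size
        var = sum((x - mean) ** 2 for x in w) / size
        feats.append((mean, var))
    return feats

# Hypothetical accelerometer magnitudes: rest, then movement, then rest.
stream = [0.0, 0.1, 0.0, 1.2, 1.1, 1.3, 0.1, 0.0]
print(window_features(stream, size=4, step=2))
```

These per-window statistics would then feed a conventional classifier, which is often competitive with far more elaborate pipelines on activity-recognition benchmarks.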
Autonomous Human Activity Classification from Ego-vision Camera and Accelerometer Data
For instance, the sitting activity can be detected by IMU data, but it cannot be determined whether the subject has sat on a chair or a sofa, or where the subject is.
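The complementarity described above can be sketched as a simple rule combining an IMU-derived posture with object labels from the ego-vision camera; the posture and object names here are invented for illustration, not the paper's actual pipeline:

```python
def refine_activity(imu_posture, seen_objects):
    """Disambiguate an IMU posture using vision-detected objects.

    The IMU alone can say "sitting"; the camera supplies the
    context (chair vs. sofa) the IMU cannot observe.
    """
    if imu_posture == "sitting":
        for obj in ("chair", "sofa"):  # checked in a fixed order
            if obj in seen_objects:
                return f"sitting on {obj}"
        return "sitting (unknown surface)"
    return imu_posture

print(refine_activity("sitting", {"sofa", "table"}))
```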
STAR-Net: Action Recognition using Spatio-Temporal Activation Reprojection
As such, there has been recent interest in human action recognition using low-cost, readily available RGB cameras via deep convolutional neural networks.
Adaptive Feature Processing for Robust Human Activity Recognition on a Novel Multi-Modal Dataset
In this paper, we present a novel, multi-modal sensor dataset that encompasses nine indoor activities, performed by 16 participants, and captured by four types of sensors that are commonly used in indoor applications and autonomous vehicles.
Action Machine: Rethinking Action Recognition in Trimmed Videos
On NTU RGB+D, Action Machine achieves state-of-the-art performance with top-1 accuracies of 97.2% and 94.3% on the cross-view and cross-subject protocols, respectively.