Multimodal Activity Recognition
12 papers with code • 10 benchmarks • 7 datasets
Multimodal activity recognition classifies human activities by fusing complementary sensing modalities, such as video, skeleton data, inertial measurements, and radio-frequency signals.
Most implemented papers
Fusion-GCN: Multimodal Action Recognition using Graph Convolutional Networks
In this paper, we present Fusion-GCN, an approach for multimodal action recognition using Graph Convolutional Networks (GCNs).
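The core idea, fusing additional sensor channels into a skeleton graph before graph convolution, can be sketched as follows. This is a minimal illustration assuming early fusion by concatenating an IMU feature vector onto every joint's features; all shapes, the toy graph, and the single-layer design are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_joints = 5                      # skeleton graph nodes
adj = np.eye(n_joints)            # adjacency with self-loops
for i, j in [(0, 1), (1, 2), (1, 3), (3, 4)]:  # toy bone connections
    adj[i, j] = adj[j, i] = 1.0

# Symmetric normalization: D^{-1/2} A D^{-1/2}
deg_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
adj_norm = adj * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]

skel = rng.normal(size=(n_joints, 3))   # per-joint 3D coordinates
imu = rng.normal(size=(6,))             # one accelerometer+gyroscope sample

# Early fusion (assumed scheme): broadcast the IMU vector to every joint
# and concatenate it with the joint coordinates -> (5, 9) node features
fused = np.concatenate([skel, np.tile(imu, (n_joints, 1))], axis=1)

# One graph-convolution layer: ReLU(A_norm @ X @ W)
w = rng.normal(size=(fused.shape[1], 16))
hidden = np.maximum(adj_norm @ fused @ w, 0.0)
print(hidden.shape)  # (5, 16)
```

A stack of such layers followed by pooling over joints would yield a per-sequence activity prediction; the fusion step is what makes the GCN multimodal.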
OPERAnet: A Multimodal Activity Recognition Dataset Acquired from Radio Frequency and Vision-based Sensors
This dataset can be exploited to advance WiFi- and vision-based HAR, for example using pattern recognition, skeletal representations, deep learning algorithms, or other novel approaches to accurately recognize human activities.