Activity Recognition
254 papers with code • 4 benchmarks • 29 datasets
Human Activity Recognition is the problem of identifying events performed by humans from video input. It is typically formulated as a multiclass (or, in the simplest case, binary) classification problem that outputs activity class labels. Activity Recognition is an important problem with many societal applications, including smart surveillance, video search/retrieval, intelligent robots, and other monitoring systems.
Source: Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters
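The multiclass formulation above can be illustrated with a toy sketch. This is not any listed paper's method: the synthetic feature vectors, the class names, and the nearest-centroid rule are all hypothetical stand-ins for real video features and a learned classifier.

```python
import numpy as np

# Toy multiclass activity recognition (hypothetical setup):
# each clip is summarized as a feature vector; labels are activity classes.
rng = np.random.default_rng(0)
classes = ["walking", "running", "sitting"]

# Synthetic per-class feature clusters standing in for real video features.
centers = {"walking": 0.0, "running": 3.0, "sitting": -3.0}
X_train = np.vstack([rng.normal(centers[c], 0.5, size=(20, 8)) for c in classes])
y_train = np.repeat(classes, 20)

# Nearest-centroid classifier: predict the class whose mean feature is closest.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A held-out clip drawn from the "running" cluster.
test_clip = rng.normal(centers["running"], 0.5, size=8)
print(predict(test_clip))  # → running
```

In practice the feature extractor is a deep network (e.g., a 3D CNN or transformer over frames) and the classifier is a learned linear head, but the input/output contract is the same: one feature vector in, one activity label out.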
Latest papers with no code
HARMamba: Efficient Wearable Sensor Human Activity Recognition Based on Bidirectional Selective SSM
Wearable sensor-based human activity recognition (HAR) is a critical research domain in activity perception.
Emotion Recognition from the perspective of Activity Recognition
In this paper, we approach emotion recognition from the perspective of action recognition, exploring how deep learning architectures designed for action recognition can be applied to continuous affect recognition.
CODA: A COst-efficient Test-time Domain Adaptation Mechanism for HAR
In recent years, emerging research on mobile sensing has led to novel scenarios that enhance daily life for humans, but dynamic usage conditions often result in performance degradation when systems are deployed in real-world settings.
Spatio-Temporal Proximity-Aware Dual-Path Model for Panoramic Activity Recognition
Panoramic Activity Recognition (PAR) seeks to identify diverse human activities across different scales, from individual actions to social group and global activities in crowded panoramic scenes.
A Survey of IMU Based Cross-Modal Transfer Learning in Human Activity Recognition
We also distinguish and expound on many related but inconsistently used terms in the literature, such as transfer learning, domain adaptation, representation learning, sensor fusion, and multimodal learning, and describe how cross-modal learning fits with all these concepts.
Generalized Relevance Learning Grassmann Quantization
The proposed model returns a set of prototype subspaces and a relevance vector.
P2LHAP: Wearable sensor-based human activity recognition, segmentation and forecast through Patch-to-Label Seq2Seq Transformer
Traditional deep learning methods struggle to simultaneously segment, recognize, and forecast human activities from sensor data.
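The patch-based framing can be sketched with a minimal example: split a sensor stream into fixed-size patches and predict one label per patch, so segmentation falls out of the per-patch labels. This is an assumption-laden toy, not P2LHAP itself; the variance-threshold rule stands in for the Seq2Seq Transformer's per-patch predictions.

```python
import numpy as np

# Split a 1-D sensor stream into non-overlapping fixed-size patches.
def to_patches(stream, patch_len):
    n = len(stream) // patch_len
    return stream[: n * patch_len].reshape(n, patch_len)

# Synthetic stream: a low-variance "still" segment followed by a
# high-variance "active" segment (hypothetical accelerometer magnitudes).
rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(0, 2.0, 200)])

patches = to_patches(stream, 50)
# Toy per-patch rule standing in for a learned model's per-patch labels.
labels = ["active" if p.std() > 0.5 else "still" for p in patches]
print(labels)  # 4 "still" patches, then 4 "active" patches
```

Because each patch gets its own label, activity boundaries are recovered at patch resolution instead of requiring a separate segmentation stage over whole windows.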
Knowledge Transfer across Multiple Principal Component Analysis Studies
In the first step, we integrate the shared subspace information across multiple studies with a proposed method called the Grassmannian barycenter, instead of directly performing PCA on the pooled dataset.
Deep Generative Domain Adaptation with Temporal Relation Knowledge for Cross-User Activity Recognition
To bridge this gap, our study introduces a Conditional Variational Autoencoder with Universal Sequence Mapping (CVAE-USM) approach, which addresses the unique challenges of time-series domain adaptation in HAR by relaxing the i.i.d. assumption.
Cross-user activity recognition using deep domain adaptation with temporal relation information
To address this challenge, we introduce the Deep Temporal State Domain Adaptation (DTSDA) model, an innovative approach tailored for time series domain adaptation in cross-user HAR.