Skeleton-Based Action Recognition

174 papers with code • 34 benchmarks • 29 datasets

Skeleton-based Action Recognition is a computer vision task that involves recognizing human actions from a sequence of 3D skeletal joint data captured from sensors such as Microsoft Kinect, Intel RealSense, and wearable devices. The goal of skeleton-based action recognition is to develop algorithms that can understand and classify human actions from skeleton data, which can be used in various applications such as human-computer interaction, sports analysis, and surveillance.
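
To make the raw input concrete, here is a minimal sketch of how a skeleton clip is commonly represented and fed to a classifier; the clip shape, joint count, and module choices are illustrative assumptions rather than the convention of any specific dataset or method.

```python
# Minimal sketch of skeleton-sequence input and a toy classifier.
# Shapes and names are illustrative assumptions, not tied to any benchmark.
import torch
import torch.nn as nn

T, V, C = 64, 25, 3             # frames, joints (Kinect v2 tracks 25), xyz coords
clip = torch.randn(1, C, T, V)  # one clip, channels-first: (batch, C, T, V)

class TinySkeletonClassifier(nn.Module):
    """Toy baseline: a temporal convolution per joint, then a linear head."""
    def __init__(self, num_classes=60):
        super().__init__()
        self.temporal = nn.Conv2d(C, 64, kernel_size=(9, 1), padding=(4, 0))
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                 # x: (batch, C, T, V)
        h = torch.relu(self.temporal(x))  # (batch, 64, T, V)
        h = h.mean(dim=(2, 3))            # pool over time and joints
        return self.head(h)               # (batch, num_classes) action logits

print(TinySkeletonClassifier()(clip).shape)  # torch.Size([1, 60])
```

Practical methods replace the plain temporal convolution with graph convolutions or transformers over the joint graph, as the papers listed below illustrate.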

(Image credit: View Adaptive Neural Networks for High Performance Skeleton-based Human Action Recognition)

DeGCN: Deformable Graph Convolutional Networks for Skeleton-Based Action Recognition

WoominM/DeGCN_pytorch • IEEE Transactions on Image Processing 2024 • 25 Mar 2024

Graph convolutional networks (GCNs) have recently been studied to exploit the graph topology of the human body for skeleton-based action recognition.

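Like most GCN-based methods in this list, the DeGCN entry above builds on a basic graph convolution over the body's joint adjacency. Below is a minimal sketch of that core operation with a toy five-joint skeleton; it does not reproduce DeGCN's deformable sampling.

```python
# One graph-convolution step over a toy skeleton graph.
# The 5-joint chain adjacency and feature sizes are illustrative assumptions.
import torch

V, C_in, C_out = 5, 3, 16
A = torch.tensor([[1, 1, 0, 0, 0],           # symmetric adjacency with self-loops
                  [1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1]], dtype=torch.float32)
D_inv_sqrt = torch.diag(A.sum(dim=1).pow(-0.5))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt          # degree-normalized adjacency

X = torch.randn(V, C_in)                     # per-joint features (e.g. xyz)
W = torch.randn(C_in, C_out)                 # learnable projection

H = torch.relu(A_hat @ X @ W)                # aggregate neighbors, then project
print(H.shape)                               # torch.Size([5, 16])
```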

GCN-DevLSTM: Path Development for Skeleton-Based Action Recognition

deepintostreams/gcn-devlstm • 22 Mar 2024

Skeleton-based action recognition (SAR) in videos is an important but challenging task in computer vision.


Skeleton-Based Human Action Recognition with Noisy Labels

xuyizdby/noiseerasar • 15 Mar 2024

In this study, we bridge this gap by implementing a framework that augments well-established skeleton-based human action recognition methods with label-denoising strategies from various research areas to serve as the initial benchmark.


SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition

KAIST-VICLab/SkateFormer • 14 Mar 2024

We categorize the key skeletal-temporal relations for action recognition into a total of four distinct types.


AutoGCN -- Towards Generic Human Activity Recognition with Neural Architecture Search

deepinmotion/autogcn • 2 Feb 2024

This paper introduces AutoGCN, a generic Neural Architecture Search (NAS) algorithm for Human Activity Recognition (HAR) using Graph Convolution Networks (GCNs).


Skeleton2vec: A Self-supervised Learning Framework with Contextualized Target Representations for Skeleton Sequence

ruizhuo-xu/skeleton2vec • 1 Jan 2024

In this paper, we show that using high-level contextualized features as prediction targets can achieve superior performance.

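A rough sketch of the masked-prediction idea behind the Skeleton2vec entry above: a full-sequence "teacher" supplies contextualized targets that a student must regress from a masked sequence. The encoders, masking scheme, and loss below are simplified placeholders, not the paper's actual design.

```python
# Rough sketch of masked prediction with contextualized targets.
# Encoders, masking, and loss are placeholders, not Skeleton2vec's design.
import torch
import torch.nn as nn

T, V, C, D = 64, 25, 3, 128                  # frames, joints, coords, feature dim

student = nn.GRU(V * C, D, batch_first=True)  # toy temporal encoders; in
teacher = nn.GRU(V * C, D, batch_first=True)  # data2vec-style setups the teacher
for p in teacher.parameters():                # is an EMA copy of the student
    p.requires_grad_(False)

seq = torch.randn(1, T, V * C)               # one flattened skeleton sequence
mask = torch.rand(1, T) < 0.5                # mask roughly half of the frames

masked_seq = seq.clone()
masked_seq[mask] = 0.0                       # hide masked frames from the student

with torch.no_grad():
    targets, _ = teacher(seq)                # contextualized targets from full input
preds, _ = student(masked_seq)               # student sees only visible frames

loss = ((preds - targets)[mask] ** 2).mean() # regress targets at masked positions
loss.backward()
```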

Spatial-Temporal Decoupling Contrastive Learning for Skeleton-based Human Action Recognition

libertyzsj/std-cl • 23 Dec 2023

Furthermore, to explicitly exploit the latent data distributions, we apply contrastive learning to the attentive features, modeling cross-sequence semantic relations by pulling together the features from positive pairs and pushing apart those from negative pairs.

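The pull-together/push-apart objective described in the entry above is commonly instantiated as an InfoNCE-style loss over sequence embeddings; the sketch below is a generic version under that assumption, not the paper's exact formulation.

```python
# Minimal InfoNCE-style contrastive loss over sequence embeddings: each
# embedding in `z1` is pulled toward its augmented counterpart in `z2` and
# pushed away from every other sequence in the batch. Generic sketch only.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    z1 = F.normalize(z1, dim=1)              # (batch, dim) embeddings, view 1
    z2 = F.normalize(z2, dim=1)              # (batch, dim) embeddings, view 2
    logits = z1 @ z2.t() / temperature       # scaled cosine similarities
    labels = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(contrastive_loss(z1, z2).item())
```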

Navigating Open Set Scenarios for Skeleton-based Action Recognition

kpeng9510/os-sar • 11 Dec 2023

In real-world scenarios, human actions often fall outside the distribution of training data, making it crucial for models to recognize known actions and reject unknown ones.

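A deliberately simple way to approximate the known-vs-unknown decision described above is to threshold the classifier's confidence at inference time; the sketch below is this generic baseline, not the approach proposed in OS-SAR.

```python
# Toy baseline for open-set inference: accept the predicted class only when
# the softmax confidence clears a threshold, otherwise reject as "unknown".
# Generic illustration, not the method from the paper above.
import torch

def predict_open_set(logits, threshold=0.7):
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = -1              # -1 marks "unknown action"
    return pred

logits = torch.randn(4, 60)                  # 4 clips, 60 known classes
print(predict_open_set(logits))
```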

STEP CATFormer: Spatial-Temporal Effective Body-Part Cross Attention Transformer for Skeleton-based Action Recognition

maclong01/STEP-CATFormer • 6 Dec 2023

We view the key to skeleton-based action recognition as the skeleton evolving across frames, so we focus on how graph convolutional networks learn different topologies and effectively aggregate joint features at both global and local temporal scales.


Hulk: A Universal Knowledge Translator for Human-Centric Tasks

opengvlab/humanbench • 4 Dec 2023

Human-centric perception tasks, e.g., pedestrian detection, skeleton-based action recognition, and pose estimation, have wide industrial applications, such as the metaverse and sports analysis.
