GaitMixer: Skeleton-based Gait Representation Learning via Wide-spectrum Multi-axial Mixer

27 Oct 2022  ·  Ekkasit Pinyoanuntapong, Ayman Ali, Pu Wang, Minwoo Lee, Chen Chen ·

Most existing gait recognition methods are appearance-based, relying on silhouettes extracted from video of human walking activities. The less-investigated skeleton-based gait recognition methods directly learn the gait dynamics from 2D/3D human skeleton sequences, which are theoretically more robust in the presence of appearance changes caused by clothes, hairstyles, and carried objects. However, the performance of skeleton-based solutions still lags far behind that of appearance-based ones. This paper aims to close this performance gap by proposing a novel network model, GaitMixer, to learn a more discriminative gait representation from skeleton sequence data. In particular, GaitMixer follows a heterogeneous multi-axial mixer architecture, which applies a spatial self-attention mixer followed by a temporal large-kernel convolution mixer to learn rich multi-frequency signals in the gait feature maps. Experiments on the widely used gait database, CASIA-B, demonstrate that GaitMixer outperforms the previous SOTA skeleton-based methods by a large margin while achieving competitive performance compared with representative appearance-based solutions. Code will be available at https://github.com/exitudio/gaitmixer
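To illustrate the heterogeneous multi-axial mixer idea described above, below is a minimal PyTorch sketch of one GaitMixer-style block: a self-attention mixer over the joint (spatial) axis followed by a large-kernel depthwise convolution mixer over the frame (temporal) axis. This is not the authors' implementation (see the repository linked above); the embedding size, number of heads, kernel size, and class names are illustrative assumptions.

```python
# Hedged sketch of a GaitMixer-style block, not the official code.
import torch
import torch.nn as nn


class SpatialAttentionMixer(nn.Module):
    """Mixes information across skeleton joints within each frame via self-attention."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (batch * frames, joints, dim) -> attend over the joint axis
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out  # residual connection


class TemporalConvMixer(nn.Module):
    """Mixes information across frames with a large-kernel depthwise 1D convolution."""

    def __init__(self, dim, kernel_size=31):  # kernel size is an assumption
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, x):
        # x: (batch * joints, frames, dim) -> convolve over the frame axis
        h = self.norm(x).transpose(1, 2)   # (N, dim, frames)
        h = self.conv(h).transpose(1, 2)   # back to (N, frames, dim)
        return x + h  # residual connection


class GaitMixerBlock(nn.Module):
    """One heterogeneous multi-axial mixer block: spatial mixer, then temporal mixer."""

    def __init__(self, dim, num_heads=4, kernel_size=31):
        super().__init__()
        self.spatial = SpatialAttentionMixer(dim, num_heads)
        self.temporal = TemporalConvMixer(dim, kernel_size)

    def forward(self, x):
        # x: (batch, frames, joints, dim)
        b, t, j, d = x.shape
        x = self.spatial(x.reshape(b * t, j, d)).reshape(b, t, j, d)
        x = x.permute(0, 2, 1, 3).reshape(b * j, t, d)
        x = self.temporal(x).reshape(b, j, t, d).permute(0, 2, 1, 3)
        return x


if __name__ == "__main__":
    # Toy input: batch of 2 sequences, 60 frames, 17 joints, 64-dim joint embeddings.
    block = GaitMixerBlock(dim=64)
    x = torch.randn(2, 60, 17, 64)
    print(block(x).shape)  # torch.Size([2, 60, 17, 64])
```

Stacking several such blocks and pooling over frames and joints would yield a sequence-level gait embedding for metric-learning-based recognition; those details are omitted here.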


Datasets

CASIA-B
Task: Multiview Gait Recognition · Dataset: CASIA-B · Accuracy (%) with global rank in parentheses

Model        Cross-View (Avg)   NM#5-6      BG#1-2      CL#1-2
GaitMixer    88.3 (#6)          94.9 (#7)   85.6 (#7)   84.5 (#5)
GaitFormer   83.4 (#8)          91.5 (#8)   81.4 (#8)   77.2 (#7)