Spatio-temporal Gait Feature with Global Distance Alignment

7 Mar 2022 · Yifan Chen, Yang Zhao, Xuelong Li

Gait recognition is an important recognition technology because gait is hard to camouflage and subjects can be identified without their cooperation. However, many existing methods fail to preserve both temporal information and fine-grained spatial information, which reduces the discrimination of the extracted features. The problem becomes more serious when subjects with similar walking postures must be distinguished. In this paper, we enhance the discrimination of spatio-temporal gait features from two aspects: effective extraction of spatio-temporal features and reasonable refinement of the extracted features. The proposed method consists of Spatio-temporal Feature Extraction (SFE) and Global Distance Alignment (GDA). SFE uses Temporal Feature Fusion (TFF) and Fine-grained Feature Extraction (FFE) to extract spatio-temporal features from raw silhouettes. GDA uses a large amount of unlabeled gait data collected in real life as a benchmark to refine the extracted spatio-temporal features so that they exhibit low inter-class similarity and high intra-class similarity, which enhances their discrimination. Extensive experiments on mini-OUMVLP and CASIA-B show that our method outperforms several state-of-the-art methods.
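
The abstract only sketches the pipeline at a high level. As a rough, non-authoritative illustration, the snippet below shows one way the two stages could fit together in a PyTorch-style setup. The module structure, the max-pooling stand-in for TFF, the horizontal-strip pooling stand-in for FFE, and the similarity-based alignment loss standing in for GDA are all assumptions made for illustration, not the paper's actual architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatioTemporalFeatureExtractor(nn.Module):
    """Hypothetical stand-in for SFE: a per-frame CNN, a temporal fusion
    step (TFF placeholder), and part-level pooling (FFE placeholder)."""

    def __init__(self, feat_dim=128, num_parts=8):
        super().__init__()
        self.frame_encoder = nn.Sequential(          # per-frame 2D CNN on silhouettes
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.num_parts = num_parts
        self.head = nn.Linear(feat_dim, feat_dim)    # per-part projection

    def forward(self, silhouettes):                  # (B, T, 1, H, W)
        b, t, c, h, w = silhouettes.shape
        x = self.frame_encoder(silhouettes.view(b * t, c, h, w))
        x = x.view(b, t, -1, h, w).max(dim=1).values          # TFF placeholder: max over time
        strips = x.chunk(self.num_parts, dim=2)                # FFE placeholder: horizontal strips
        parts = torch.stack([s.mean(dim=(2, 3)) for s in strips], dim=1)  # (B, P, D)
        return self.head(parts).flatten(1)                     # (B, P * D) gait embedding


def global_distance_alignment(features, anchor_bank, margin=0.3):
    """Hypothetical GDA-style objective: pull each embedding toward its most
    similar unlabeled anchor and away from the bank on average, encouraging
    high intra-class and low inter-class similarity."""
    sims = F.normalize(features, dim=1) @ F.normalize(anchor_bank, dim=1).T  # (B, K)
    pos = sims.max(dim=1).values      # closest unlabeled anchor per sample
    neg = sims.mean(dim=1)            # average similarity to the whole bank
    return F.relu(neg - pos + margin).mean()
```

In a real setup, such an alignment term would typically be combined with a standard supervised recognition loss rather than used on its own; the unlabeled anchor bank here simply stands in for the real-life gait data the paper uses as a benchmark.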
