Gait Recognition in the Wild
8 papers with code • 1 benchmark • 1 dataset
Gait Recognition in the Wild refers to gait recognition methods evaluated under real-world conditions, i.e., in unconstrained environments.
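Regardless of the input modality (silhouettes, parsing maps, point clouds), these methods are typically evaluated the same way: each gait sequence is mapped to a fixed-length embedding, and a probe sequence is identified by nearest-neighbor matching against an enrolled gallery. The sketch below illustrates that rank-1 matching step with toy hand-made embeddings; the function name and dimensions are illustrative, not from any of the papers listed here.

```python
import numpy as np

def rank1_identify(probe, gallery, gallery_ids):
    """Return the identity whose gallery embedding is closest to the probe.

    probe:       (d,) embedding of the query gait sequence
    gallery:     (n, d) embeddings of enrolled sequences
    gallery_ids: list of n identity labels
    """
    # Cosine similarity: L2-normalize, then take dot products.
    p = probe / np.linalg.norm(probe)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery_ids[int(np.argmax(g @ p))]

# Toy example with hand-made 3-D "embeddings".
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
ids = ["A", "B", "C"]
print(rank1_identify(np.array([0.1, 0.9, 0.0]), gallery, ids))  # -> B
```

Reported rank-1 accuracy on these benchmarks is simply the fraction of probes for which this nearest gallery identity is correct.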
Most implemented papers
Gait Recognition in the Wild with Dense 3D Representations and A Benchmark
Based on Gait3D, we comprehensively compare our method with existing gait recognition approaches, demonstrating the superior performance of our framework and the potential of 3D representations for gait recognition in the wild.
Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline
To the best of our knowledge, this is the first large-scale dataset for gait recognition in the wild.
Gait Recognition in the Wild with Multi-hop Temporal Switch
Current methods that obtain state-of-the-art performance on in-the-lab benchmarks achieve much lower accuracy on the recently proposed in-the-wild datasets, because they can hardly model the varied temporal dynamics of gait sequences in unconstrained scenes.
LidarGait: Benchmarking 3D Gait Recognition with Point Clouds
Video-based gait recognition has achieved impressive results in constrained scenarios.
Hierarchical Spatio-Temporal Representation Learning for Gait Recognition
While current methods focus on exploiting body part-based representations, they often neglect the hierarchical dependencies between local motion patterns.
Parsing is All You Need for Accurate Gait Recognition in the Wild
Furthermore, due to the lack of suitable datasets, we build the first parsing-based dataset for gait recognition in the wild, named Gait3D-Parsing, by extending the large-scale and challenging Gait3D dataset.
GLGait: A Global-Local Temporal Receptive Field Network for Gait Recognition in the Wild
Recently, some convolutional neural network (ConvNet)-based methods have been proposed to address gait recognition in the wild.
It Takes Two: Accurate Gait Recognition in the Wild via Cross-granularity Alignment
In particular, the GCM aims to enhance the quality of parsing features by leveraging global features from silhouettes, while the PCM aligns the dynamics of human parts between silhouette and parsing features using the high information entropy in parsing sequences.
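The enhancement idea behind the GCM can be sketched as feature modulation: a gate derived from the global silhouette feature rescales each part-level parsing feature. The snippet below is a minimal illustrative sketch of that pattern only; the function name, the sigmoid gate, and the projection matrix `W` are assumptions for illustration, not the paper's actual GCM architecture.

```python
import numpy as np

def enhance_parsing_features(parsing_parts, silhouette_global, W):
    """Hypothetical sketch: modulate per-part parsing features with a
    gate computed from the global silhouette feature.

    parsing_parts:     (p, d) one feature vector per body part
    silhouette_global: (d,)   global feature pooled from silhouettes
    W:                 (d, d) learned projection (here just given)
    """
    # Sigmoid gate in (0, 1), derived from the global silhouette cue.
    gate = 1.0 / (1.0 + np.exp(-(W @ silhouette_global)))
    # Broadcast the (d,) gate over all p parts.
    return parsing_parts * gate

# Toy usage: 4 parts, 3-D features; a zero global feature gives a 0.5 gate.
parts = np.ones((4, 3))
out = enhance_parsing_features(parts, np.zeros(3), np.eye(3))
print(out.shape)  # -> (4, 3)
```

The gating keeps the parsing features' shape unchanged, so the enhanced features can feed the same downstream recognition head.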