Robust Cross-View Gait Recognition with Evidence: A Discriminant Gait GAN (DiGGAN) Approach

26 Nov 2018  ·  BingZhang Hu, Yu Guan, Yan Gao, Yang Long, Nicholas Lane, Thomas Ploetz

Gait as a biometric trait has attracted much attention over the last few decades in security and privacy applications such as identity recognition and authentication. Because it can be captured at a distance, gait can be collected non-intrusively through CCTV cameras and used to identify individuals. However, building robust automated gait recognition systems remains difficult, since gait is affected by many covariate factors such as clothing, walking speed and camera view angle. Among these, large view-angle changes are deemed the most challenging factor, as they can alter the overall gait appearance substantially, and existing gait recognition methods are far from providing satisfactory performance under such view changes. Furthermore, very few works have considered evidence -- demonstrable information that reveals how reliable a decision is, which is an important requirement in machine learning-based recognition and authentication applications. To address these issues, we propose a Discriminant Gait Generative Adversarial Network (DiGGAN), which can effectively extract view-invariant features for cross-view gait recognition and, more importantly, transfer gait images to different views -- serving as evidence that shows how the decisions have been made. Quantitative experiments on the two most popular cross-view gait datasets, OU-MVLP and CASIA-B, show that the proposed DiGGAN outperforms state-of-the-art methods. Qualitative analysis further demonstrates DiGGAN's capability of providing evidence.
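The abstract does not give architectural details, but the general idea -- a conditional GAN that encodes a gait image into an identity feature and regenerates it at a requested view angle, so the synthesized images can serve as evidence -- can be sketched roughly as below. This is a minimal, illustrative PyTorch sketch only: the module names (Encoder, Generator, Discriminator), layer sizes, the 64x64 single-channel GEI-style inputs, the 14-view one-hot code and the loss weights are all assumptions for illustration, not the paper's actual DiGGAN design.

```python
# Illustrative sketch of a conditional GAN for cross-view gait transfer,
# in the spirit of DiGGAN as summarized in the abstract. All names, sizes
# and losses below are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

N_VIEWS = 14  # e.g. OU-MVLP covers 14 view angles (assumption for this sketch)

class Encoder(nn.Module):
    """Maps a 1x64x64 gait image to a (hopefully) view-invariant identity feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),    # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),   # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),  # 16 -> 8
            nn.Flatten(), nn.Linear(128 * 8 * 8, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes an identity feature plus a target-view code into a gait image."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim + N_VIEWS, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),    # 32 -> 64
        )
    def forward(self, feat, view_onehot):
        h = self.fc(torch.cat([feat, view_onehot], dim=1)).view(-1, 128, 8, 8)
        return self.net(h)

class Discriminator(nn.Module):
    """Real/fake critic conditioned on the target view code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + N_VIEWS, 32, 4, 2, 1), nn.LeakyReLU(0.2),  # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),           # 32 -> 16
            nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
        )
    def forward(self, x, view_onehot):
        # Broadcast the view code over spatial dims and append as extra channels.
        v = view_onehot[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, v], dim=1))

# One illustrative generator update (loss terms and weights are placeholders;
# DiGGAN additionally supervises the feature to be identity-discriminant,
# which is omitted here).
enc, gen, dis = Encoder(), Generator(), Discriminator()
x_src = torch.randn(8, 1, 64, 64)   # source-view gait images (dummy data)
x_tgt = torch.randn(8, 1, 64, 64)   # same subjects at the target view (dummy data)
view = torch.eye(N_VIEWS)[torch.randint(N_VIEWS, (8,))]
fake = gen(enc(x_src), view)
adv = nn.functional.binary_cross_entropy_with_logits(
    dis(fake, view), torch.ones(8, 1))   # fool the discriminator
rec = nn.functional.l1_loss(fake, x_tgt) # pixel-level view-transfer loss
loss_g = adv + 10.0 * rec
```

The transferred images at a requested view are what the abstract refers to as evidence: a human inspector can compare the generated target-view image with the gallery image to judge how the matching decision was reached.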


Datasets


OU-MVLP  ·  CASIA-B


Methods


GAN  ·  DiGGAN (Discriminant Gait Generative Adversarial Network)