A Joint Pixel and Feature Alignment Framework for Cross-dataset Palmprint Recognition

25 May 2020  ·  Huikai Shao, Dexing Zhong

Deep learning-based palmprint recognition algorithms have shown great potential. Most of them focus mainly on identifying samples from the same dataset. However, they may not be suitable for the more practical case in which the training and test images come from different datasets, e.g., collected by embedded terminals and by smartphones. Therefore, we propose a novel Joint Pixel and Feature Alignment (JPFA) framework for such cross-dataset palmprint recognition scenarios. A two-stage alignment is applied to obtain adaptive features in the source and target datasets. 1) A deep style transfer model is adopted to convert source images into fake images, reducing the dataset gap and performing data augmentation at the pixel level. 2) A new deep domain adaptation model is proposed to extract adaptive features by aligning the dataset-specific distributions of target-source and target-fake pairs at the feature level. Extensive experiments are conducted on several benchmarks, including constrained and unconstrained palmprint databases. The results demonstrate that our JPFA outperforms other models and achieves state-of-the-art performance. Compared with the baseline, the accuracy of cross-dataset identification is improved by up to 28.10% and the Equal Error Rate (EER) of cross-dataset verification is reduced by up to 4.69%. To make our results reproducible, the code is publicly available at http://gr.xjtu.edu.cn/web/bell/resource.

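To make the two-stage idea concrete, below is a minimal PyTorch sketch of the feature-level alignment step only. It is not the authors' implementation: the backbone, the number of identities, the trade-off weight, and the use of an RBF-kernel MMD loss as the distribution-alignment criterion are all illustrative assumptions, and the stage-1 style-transferred "fake" images are represented here by placeholder tensors.

```python
import torch
import torch.nn as nn

def mmd_loss(x, y, sigma=1.0):
    """RBF-kernel Maximum Mean Discrepancy between two feature batches
    (assumed alignment criterion; the paper may use a different loss)."""
    def rbf(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

class FeatureExtractor(nn.Module):
    """Toy CNN standing in for the shared palmprint feature backbone."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

# --- one illustrative training step ---
extractor = FeatureExtractor()
classifier = nn.Linear(128, 100)      # hypothetical number of palm identities
optimizer = torch.optim.Adam(
    list(extractor.parameters()) + list(classifier.parameters()), lr=1e-4)

source = torch.randn(8, 3, 128, 128)  # labelled source palmprints
fake   = torch.randn(8, 3, 128, 128)  # stage-1 style-transferred images (placeholder)
target = torch.randn(8, 3, 128, 128)  # unlabelled target palmprints
labels = torch.randint(0, 100, (8,))

f_src, f_fake, f_tgt = extractor(source), extractor(fake), extractor(target)
cls_loss = nn.functional.cross_entropy(classifier(f_src), labels)
# align target features with both the source and the fake (style-transferred) images
align_loss = mmd_loss(f_tgt, f_src) + mmd_loss(f_tgt, f_fake)
loss = cls_loss + 0.5 * align_loss    # 0.5 is an arbitrary trade-off weight

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch the stage-1 style transfer is treated as a black box that produces the `fake` batch; any image-to-image translation model could fill that role, and the alignment term then pulls the target features toward both the original source and its style-transferred counterpart, mirroring the target-source and target-fake pairing described in the abstract.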