Cross-Sensor Periocular Biometrics in a Global Pandemic: Comparative Benchmark and Novel Multialgorithmic Approach

The massive availability of cameras results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop when heterogeneous images are compared for person recognition. However, as biometric systems are deployed, it is common to replace damaged or obsolete hardware, or to exchange information between heterogeneous applications. Variations in spectral bands can also occur. For example, surveillance face images (typically acquired in the visible spectrum, VIS) may need to be compared against a legacy iris database (typically acquired in near-infrared, NIR). Here, we propose a multialgorithmic approach to cope with periocular images from different sensors. With face masks on the front line against COVID-19, periocular recognition is regaining popularity, since the eye region is the only part of the face that remains visible. We integrate different comparators with a fusion scheme based on linear logistic regression, in which fused scores are represented as log-likelihood ratios. This allows easy interpretation of scores and the use of Bayes thresholds for optimal decision-making, since scores from different comparators are mapped to the same probabilistic range. We evaluate our approach in the context of the Cross-Eyed Competition, whose aim was to compare recognition approaches when NIR and VIS periocular images are matched. Our approach achieves an EER of 0.2% and an FRR of just 0.47% at FAR = 0.01%, the best overall result of the competition. Experiments are also reported with a database of VIS images from different smartphones. We also discuss the impact of template size and computation time, with the most computationally demanding comparator playing an important role in the results. Lastly, the proposed method is shown to outperform other popular fusion approaches such as the average of scores, Support Vector Machines (SVM), and Random Forest.
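The fusion scheme described above can be illustrated with a minimal sketch. This is not the authors' code: the comparator scores below are synthetic, and a plain gradient-descent logistic regression stands in for whatever solver the paper uses. The idea it demonstrates is the one stated in the abstract: a linear combination of per-comparator scores is trained by logistic regression so that its log-odds output behaves as a log-likelihood ratio (LLR), and with equal class priors and costs the Bayes-optimal decision is simply to accept when the LLR is positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores from two hypothetical comparators: genuine
# comparisons tend to score higher than impostor comparisons.
n = 500
genuine = np.column_stack([rng.normal(2.0, 1.0, n), rng.normal(1.5, 1.2, n)])
impostor = np.column_stack([rng.normal(-1.0, 1.0, n), rng.normal(-0.5, 1.2, n)])
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = genuine, 0 = impostor

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (bias + weights)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

w = fit_logreg(X, y)

def fused_llr(scores, w):
    """Fused score: the linear log-odds output. With balanced training
    classes (as here) it is interpretable as a log-likelihood ratio."""
    return w[0] + scores @ w[1:]

# Bayes threshold at equal priors and costs: accept if LLR > 0.
llr_gen = fused_llr(genuine, w)
llr_imp = fused_llr(impostor, w)
```

Because every comparator's contribution is mapped into the same probabilistic (LLR) range, scores from heterogeneous comparators can be fused and thresholded on a common scale, which is the practical advantage the abstract claims over ad-hoc fusion rules such as the plain average of scores.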
