This comparison shows that FaceQgen does not surpass the best existing face quality assessment methods in predicting face recognition accuracy, but its results are good enough to demonstrate the potential of semi-supervised learning for quality estimation (in particular, data-driven learning based on a single high-quality image per subject). The model has room to improve with adequate refinement, and it holds a significant advantage over competing methods: it does not need quality labels for its development.
Cancelable biometrics refers to a group of techniques in which the biometric inputs are transformed intentionally using a key before processing or storage.
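The keyed transformation described above can be illustrated with a minimal BioHashing-style sketch: a key-seeded random projection followed by binarization. This is one common instance of a cancelable transform, not the specific scheme of any paper listed here; the function name and template sizes are illustrative assumptions.

```python
import numpy as np

def cancelable_template(features: np.ndarray, key: int) -> np.ndarray:
    """Illustrative keyed transform (BioHashing-style random projection).

    The key seeds a random projection of the biometric feature vector,
    which is then binarized into a template. If the template is
    compromised, re-enrolling with a new key yields a different,
    unlinkable template from the same biometric trait.
    """
    rng = np.random.default_rng(key)  # key-dependent randomness
    projection = rng.standard_normal((features.size, features.size))
    # Binarize the projected features to obtain the cancelable template.
    return (features @ projection > 0).astype(np.uint8)

features = np.random.default_rng(0).standard_normal(64)
t1 = cancelable_template(features, key=42)
t2 = cancelable_template(features, key=43)
# Same key reproduces the template; different keys give unlinkable ones.
```

Matching is then performed between protected templates (e.g., by Hamming distance), so the raw biometric features never need to be stored.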
This paper is the first to explore automatic detection of bias in deep convolutional neural networks simply by inspecting their weights.
This work presents a new deep learning approach for keystroke biometrics based on a novel Distance Metric Learning (DML) method.
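A common way to realize distance metric learning is a triplet loss, which pulls embeddings of the same user together and pushes different users apart by a margin. The sketch below is a generic, hedged illustration of this idea in NumPy, not the exact loss of the work above; the margin value and vector shapes are assumptions.

```python
import numpy as np

def triplet_loss(anchor: np.ndarray, positive: np.ndarray,
                 negative: np.ndarray, margin: float = 1.0) -> float:
    """Generic triplet loss for metric learning.

    anchor/positive come from the same user, negative from a different
    user. The loss is zero once the negative is at least `margin`
    farther from the anchor than the positive.
    """
    d_pos = np.linalg.norm(anchor - positive)  # same-user distance
    d_neg = np.linalg.norm(anchor - negative)  # different-user distance
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same user, close to anchor
n = np.array([2.0, 0.0])   # different user, far from anchor
loss = triplet_loss(a, p, n)
```

Training an embedding network with this objective yields a space where simple distance thresholds separate genuine users from impostors.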
This work explores facial expression bias as a security vulnerability of face recognition systems.
With the aim of studying how current multimodal AI algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, this demonstrator experiments on an automated recruitment testbed based on Curricula Vitae: FairCVtest.
We propose a discrimination-aware learning method to improve both accuracy and fairness of biased face recognition algorithms.
With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious automated recruitment testbed: FairCVtest.
We analyze how bias affects deep learning processes through a toy example using the MNIST database and a case study in gender detection from face images.
We show experimentally that the demographic groups highly represented in popular face databases have led popular pre-trained deep face models to exhibit strong algorithmic discrimination.