no code implementations • 5 Jun 2023 • Alejandro Peña, Aythami Morales, Julian Fierrez, Ignacio Serna, Javier Ortega-Garcia, Iñigo Puente, Jorge Cordova, Gonzalo Cordova
The analysis of public affairs documents is crucial for citizens as it promotes transparency, accountability, and informed decision-making.
no code implementations • 26 Apr 2023 • Daniel DeAlcala, Ignacio Serna, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia
This risk assessment should address, among other aspects, the detection and mitigation of bias in AI.
no code implementations • 17 Feb 2023 • Mahdi Ghafourian, Julian Fierrez, Ruben Vera-Rodriguez, Aythami Morales, Ignacio Serna
Cancelable biometrics is a family of techniques that intentionally transform the input biometric into an irreversible feature, using a transformation function and usually a key, in order to provide security and privacy in biometric recognition systems.
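A minimal sketch of the general idea behind cancelable biometrics (an illustrative keyed random projection, not the specific transforms studied in this paper; the feature size, output dimension, and binarization step are assumptions):

```python
import numpy as np

def cancelable_template(feature: np.ndarray, key: int, out_dim: int = 64) -> np.ndarray:
    """Transform a biometric feature vector into a revocable binary template.

    The user-specific key seeds a random projection; revoking the key and
    issuing a new one produces a fresh, unlinkable template from the same
    underlying biometric.
    """
    rng = np.random.default_rng(key)  # key-dependent projection
    projection = rng.standard_normal((out_dim, feature.shape[0]))
    projected = projection @ feature
    return (projected > 0).astype(np.uint8)  # binarization hinders inversion

# Same biometric feature, two different keys -> two unlinkable templates.
feature = np.random.default_rng(0).standard_normal(128)
t1 = cancelable_template(feature, key=12345)
t2 = cancelable_template(feature, key=67890)
print(f"Hamming distance between templates: {(t1 != t2).mean():.2f}")
```

Matching is then carried out in the transformed domain, so a compromised template can be revoked simply by changing the key.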
1 code implementation • 13 Feb 2023 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez, Alfonso Ortega, Ainhoa Herrarte, Manuel Alcantara, Javier Ortega-Garcia
With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious case study focused on automated recruitment: FairCVtest.
1 code implementation • 3 Jan 2022 • Javier Hernandez-Ortega, Julian Fierrez, Ignacio Serna, Aythami Morales
This comparison shows that, even though FaceQgen does not surpass the best existing face quality assessment methods in terms of face recognition accuracy prediction, its results are good enough to demonstrate the potential of semi-supervised learning approaches for quality estimation (in particular, data-driven learning based on a single high-quality image per subject). FaceQgen has the capacity to improve its performance in the future with adequate refinement of the model, and it holds a significant advantage over competing methods: it does not need quality labels for its development.
no code implementations • 25 Nov 2021 • Mahdi Ghafourian, Julian Fierrez, Ruben Vera-Rodriguez, Ignacio Serna, Aythami Morales
Cancelable biometrics refers to a group of techniques in which the biometric inputs are transformed intentionally using a key before processing or storage.
1 code implementation • 9 Sep 2021 • Ignacio Serna, Daniel DeAlcala, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia
This paper is the first to explore an automatic way to detect bias in deep convolutional neural networks by simply looking at their weights.
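A hypothetical illustration of the general direction (the paper's actual detector is not reproduced here): each convolutional layer's weights can be summarized by simple statistics, yielding a fixed-length descriptor per model on which a bias detector could be trained, without running the network on any data:

```python
import torch
import torch.nn as nn

def weight_statistics(model: nn.Module) -> torch.Tensor:
    """Summarize a CNN by simple per-layer weight statistics.

    A detector trained on such descriptors, extracted from models known
    to be biased or unbiased, could flag bias from the weights alone.
    """
    stats = []
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight.detach().flatten()
            stats.extend([w.mean(), w.std(), w.abs().max()])
    return torch.stack(stats)

# Toy CNN; real use would compare descriptors across many trained models.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(),
    nn.Conv2d(16, 32, 3), nn.ReLU(),
)
print(weight_statistics(model))  # fixed-length descriptor of the weights
```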
no code implementations • 2 Sep 2021 • Aythami Morales, Julian Fierrez, Alejandro Acien, Ruben Tolosana, Ignacio Serna
This work presents a new deep learning approach for keystroke biometrics based on a novel Distance Metric Learning (DML) method.
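A rough sketch of distance metric learning in this setting (the recurrent embedder, feature layout, and triplet loss below are generic assumptions, not the paper's exact design): an embedding network is trained so that keystroke sequences from the same user map close together and sequences from different users map far apart:

```python
import torch
import torch.nn as nn

class KeystrokeEmbedder(nn.Module):
    """Embed a keystroke sequence (e.g., hold times and inter-key
    latencies per keystroke) into a fixed-size vector."""
    def __init__(self, feat_dim: int = 4, emb_dim: int = 32):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(x)       # h: (num_layers, batch, hidden)
        return self.head(h[-1])  # (batch, emb_dim)

embedder = KeystrokeEmbedder()
triplet = nn.TripletMarginLoss(margin=1.0)

# Anchor/positive from the same user, negative from a different user.
anchor, positive, negative = (torch.randn(8, 50, 4) for _ in range(3))
loss = triplet(embedder(anchor), embedder(positive), embedder(negative))
loss.backward()
```

At verification time, a claimed identity is accepted when the embedding distance to the user's enrolled samples falls below a threshold.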
no code implementations • 17 Nov 2020 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez, Agata Lapedriza
This work explores facial expression bias as a security vulnerability of face recognition systems.
no code implementations • 12 Sep 2020 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez
With the aim of studying how current multimodal AI algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, this demonstrator runs experiments on an automated recruitment testbed based on Curriculum Vitae analysis: FairCVtest.
no code implementations • 22 Apr 2020 • Ignacio Serna, Aythami Morales, Julian Fierrez, Manuel Cebrian, Nick Obradovich, Iyad Rahwan
We propose a discrimination-aware learning method to improve both accuracy and fairness of biased face recognition algorithms.
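One common form of discrimination-aware learning (a generic sketch under an assumed loss design, not necessarily the method proposed in this paper) adds a penalty on the performance gap between demographic groups to the standard training objective:

```python
import torch
import torch.nn.functional as F

def discrimination_aware_loss(logits, labels, groups, lam=1.0):
    """Cross-entropy plus a penalty on the gap between per-group losses.

    `groups` holds a demographic group id per sample; `lam` trades
    average accuracy against fairness across groups. Illustrative only.
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = torch.stack(
        [per_sample[groups == g].mean() for g in groups.unique()]
    )
    gap = group_losses.max() - group_losses.min()
    return per_sample.mean() + lam * gap

# Toy batch: 16 samples, 2 classes, 3 demographic groups.
logits = torch.randn(16, 2, requires_grad=True)
labels = torch.randint(0, 2, (16,))
groups = torch.randint(0, 3, (16,))
discrimination_aware_loss(logits, labels, groups).backward()
```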
1 code implementation • 15 Apr 2020 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez
With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious automated recruitment testbed: FairCVtest.
no code implementations • 14 Apr 2020 • Ignacio Serna, Alejandro Peña, Aythami Morales, Julian Fierrez
We analyze how bias affects deep learning processes through a toy example using the MNIST database and a case study in gender detection from face images.
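A toy illustration of how such bias can be induced in training data (the sampling scheme and fractions here are assumptions, not the paper's protocol): undersample some classes in an MNIST-like training split and then compare the per-class test accuracy of the resulting model:

```python
import numpy as np

def biased_subset(labels: np.ndarray, minority: set, keep_frac: float = 0.1) -> np.ndarray:
    """Return indices of a training subset in which the `minority`
    classes are undersampled to `keep_frac` of their original count."""
    rng = np.random.default_rng(0)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if c in minority:
            idx = rng.choice(idx, size=max(1, int(keep_frac * len(idx))), replace=False)
        keep.extend(idx.tolist())
    return np.array(sorted(keep))

# Fake MNIST-like labels: digits 0 and 1 become underrepresented.
labels = np.random.default_rng(1).integers(0, 10, size=60_000)
subset = biased_subset(labels, minority={0, 1})
print(np.bincount(labels[subset]))  # class counts reveal the induced bias
```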
1 code implementation • 4 Dec 2019 • Ignacio Serna, Aythami Morales, Julian Fierrez, Manuel Cebrian, Nick Obradovich, Iyad Rahwan
We experimentally show that the high representation of certain demographic groups in popular face databases has led popular pre-trained deep face models to exhibit strong algorithmic discrimination.