no code implementations • 12 Jun 2023 • Alejandro Peña, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia, Marcos Grande, Iñigo Puente, Jorge Cordova, Gonzalo Cordova
Every day, thousands of digital documents are generated with useful information for companies, public organizations, and citizens.
no code implementations • 5 Jun 2023 • Alejandro Peña, Aythami Morales, Julian Fierrez, Ignacio Serna, Javier Ortega-Garcia, Iñigo Puente, Jorge Cordova, Gonzalo Cordova
The analysis of public affairs documents is crucial for citizens as it promotes transparency, accountability, and informed decision-making.
1 code implementation • 13 Feb 2023 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez, Alfonso Ortega, Ainhoa Herrarte, Manuel Alcantara, Javier Ortega-Garcia
With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious case study focused on automated recruitment: FairCVtest.
no code implementations • 17 Nov 2020 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez, Agata Lapedriza
This work explores facial expression bias as a security vulnerability of face recognition systems.
no code implementations • 18 Sep 2020 • Alejandro Peña, Julian Fierrez, Agata Lapedriza, Aythami Morales
We propose two face representations that are blind to the facial expressions associated with emotional responses.
no code implementations • 12 Sep 2020 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez
With the aim of studying how current multimodal AI algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, this demonstrator runs experiments on an automated recruitment testbed based on Curriculum Vitae: FairCVtest.
1 code implementation • 15 Apr 2020 • Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez
With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious automated recruitment testbed: FairCVtest.
no code implementations • 14 Apr 2020 • Ignacio Serna, Alejandro Peña, Aythami Morales, Julian Fierrez
We analyze how bias affects deep learning processes through a toy example using the MNIST database and a case study in gender detection from face images.
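The effect described above can be illustrated with a minimal, self-contained sketch. This is not the authors' actual MNIST experiment: it uses synthetic 2-D features, a hypothetical under-represented "group B", and a nearest-centroid classifier instead of a deep network, purely to show how training-set imbalance skews per-group accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Binary label per group: class 0 centered at `shift`,
    # class 1 centered at `shift + 2` (per dimension).
    X0 = rng.normal(shift, 1.0, size=(n, 2))
    X1 = rng.normal(shift + 2.0, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Group A is well represented; group B is scarce and distribution-shifted,
# a common source of dataset bias (both groups here are hypothetical).
Xa, ya = make_group(500, shift=0.0)
Xb, yb = make_group(25, shift=4.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

# Nearest-centroid classifier: the class centroids are dominated by group A.
c0 = X_train[y_train == 0].mean(axis=0)
c1 = X_train[y_train == 1].mean(axis=0)

def predict(X):
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

# Evaluate per group: the under-represented group suffers.
Xa_t, ya_t = make_group(200, shift=0.0)
Xb_t, yb_t = make_group(200, shift=4.0)
acc_a = (predict(Xa_t) == ya_t).mean()
acc_b = (predict(Xb_t) == yb_t).mean()
print(f"accuracy on group A: {acc_a:.2f}")
print(f"accuracy on group B: {acc_b:.2f}")
```

Because group A contributes 20x more training samples, the centroids sit near group A's class means, and group B's class-0 samples fall closer to the wrong centroid; its accuracy collapses toward chance while group A stays high.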