1 code implementation • 9 Sep 2021 • Ignacio Serna, Daniel DeAlcala, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia
This paper is the first to explore an automatic way to detect bias in deep convolutional neural networks by simply looking at their weights.
no code implementations • 16 Dec 2021 • Roberto Daza, Daniel DeAlcala, Aythami Morales, Ruben Tolosana, Ruth Cobos, Julian Fierrez
The experimental framework is carried out using a public multimodal database for eye blink detection and attention level estimation called mEBAL, which comprises data from 38 students and multiple acquisition sensors, in particular: i) an electroencephalogram (EEG) band, which provides time signals carrying the student's cognitive information, and ii) RGB and NIR cameras to capture the students' facial gestures.
no code implementations • 27 Jul 2022 • Daniel DeAlcala, Aythami Morales, Ruben Tolosana, Alejandro Acien, Julian Fierrez, Santiago Hernandez, Miguel A. Ferrer, Moises Diaz
Different bot detectors are considered based on several supervised classifiers (Support Vector Machine, Random Forest, Gaussian Naive Bayes and a Long Short-Term Memory network) and a learning framework including human and synthetic samples.
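The multi-classifier setup described above can be sketched in a few lines. The snippet below is illustrative only: the features, data, and hyperparameters are placeholders (the paper's actual features and LSTM branch are not shown), and it simply trains the three classical classifiers named in the abstract on labeled human vs. synthetic samples.

```python
# Hypothetical sketch of a human-vs-bot detection framework using the
# classical classifiers named in the abstract (SVM, Random Forest,
# Gaussian Naive Bayes). Features and data are synthetic placeholders,
# not the paper's real behavioral features.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy 4-dimensional features (e.g., timing statistics - illustrative).
human = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
bot = rng.normal(loc=1.5, scale=1.0, size=(200, 4))
X = np.vstack([human, bot])
y = np.array([0] * 200 + [1] * 200)  # 0 = human, 1 = synthetic/bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gaussian NB": GaussianNB(),
}
# Fit each detector and report held-out accuracy.
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
print(scores)
```

A recurrent model (the Long Short-Term Memory network mentioned in the abstract) would operate on raw time series rather than fixed-length feature vectors, so it is omitted here.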
no code implementations • 26 Apr 2023 • Daniel DeAlcala, Ignacio Serna, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia
This risk assessment should address, among others, the detection and mitigation of bias in AI.
no code implementations • 14 Feb 2024 • Daniel DeAlcala, Aythami Morales, Gonzalo Mancera, Julian Fierrez, Ruben Tolosana, Javier Ortega-Garcia
This paper introduces the Membership Inference Test (MINT), a novel approach that aims to empirically assess if specific data was used during the training of Artificial Intelligence (AI) models.
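To illustrate the general idea of empirically testing whether specific data was used in training, the sketch below shows a classic loss-threshold membership inference baseline. This is not the MINT method itself (whose details are not given here); it only demonstrates the underlying signal: training samples tend to incur lower loss under the trained model than unseen samples. All data and models are placeholders.

```python
# Illustrative membership-inference baseline (NOT the paper's MINT
# method): compare per-sample loss on training data vs. unseen data.
# A flexible model overfits its training set, so members show lower loss.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Placeholder data: members (used for training) and non-members.
X_in = rng.normal(size=(300, 5))
y_in = (X_in.sum(axis=1) > 0).astype(int)
X_out = rng.normal(size=(300, 5))
y_out = (X_out.sum(axis=1) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_in, y_in)

def per_sample_loss(model, X, y):
    """Negative log-likelihood of the true label for each sample."""
    p = model.predict_proba(X)
    return -np.log(np.clip(p[np.arange(len(y)), y], 1e-12, None))

loss_in = per_sample_loss(model, X_in, y_in)    # members
loss_out = per_sample_loss(model, X_out, y_out) # non-members
# Lower average loss on members is the membership signal a test
# like this would threshold on.
print(loss_in.mean(), loss_out.mean())
```

In practice such tests must be calibrated carefully (e.g., per-example thresholds), since loss alone conflates membership with example difficulty.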