no code implementations • 2 May 2023 • Natalia Díaz-Rodríguez, Javier Del Ser, Mark Coeckelbergh, Marcos López de Prado, Enrique Herrera-Viedma, Francisco Herrera
Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective.
1 code implementation • 6 Dec 2022 • Giorgio Angelotti, Natalia Díaz-Rodríguez
A quantitative assessment of the global importance of an agent in a team is as valuable as gold for strategists, decision-makers, and sports coaches.
1 code implementation • 20 May 2022 • Javier Del Ser, Alejandro Barredo-Arrieta, Natalia Díaz-Rodríguez, Francisco Herrera, Andreas Holzinger
To this end, we present a novel framework for the generation of counterfactual examples which formulates its goal as a multi-objective optimization problem balancing three different objectives: 1) plausibility, i.e., the likelihood of the counterfactual being possible as per the distribution of the input data; 2) intensity of the changes to the original input; and 3) adversarial power, namely, the variability of the model's output induced by the counterfactual.
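For intuition, a minimal sketch of how the three objectives could be scored for a candidate counterfactual; the `density`, `model`, and `x0` handles are illustrative assumptions (scikit-learn-style estimators), not the paper's actual implementation:

import numpy as np

def counterfactual_objectives(x_cf, x0, model, density):
    # Illustrative scoring of a candidate counterfactual x_cf against an original input x0.
    # `density` is assumed to expose score_samples() (e.g. a fitted KernelDensity) and
    # `model` a predict_proba() classifier -- hypothetical handles, not the paper's API.
    plausibility = density.score_samples(x_cf.reshape(1, -1))[0]      # log-likelihood under the data distribution
    intensity = np.linalg.norm(x_cf - x0)                             # magnitude of the change to the input
    adv_power = abs(model.predict_proba(x_cf.reshape(1, -1))[0, 1]
                    - model.predict_proba(x0.reshape(1, -1))[0, 1])   # output variability induced by x_cf
    # A multi-objective optimizer would maximize plausibility and adv_power while
    # minimizing intensity, or keep the three as a Pareto front.
    return plausibility, intensity, adv_power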
1 code implementation • 21 Feb 2022 • Fernando Amodeo, Fernando Caballero, Natalia Díaz-Rodríguez, Luis Merino
Scene graph generation from images is a task of great interest to applications such as robotics, because graphs are the main way to represent knowledge about the world and regulate human-robot interactions in tasks such as Visual Question Answering (VQA).
no code implementations • 13 Nov 2021 • Adrien Bennetot, Ivan Donadello, Ayoub El Qadi, Mauro Dragoni, Thomas Frossard, Benedikt Wagner, Anna Saranti, Silvia Tulli, Maria Trocan, Raja Chatila, Andreas Holzinger, Artur d'Avila Garcez, Natalia Díaz-Rodríguez
Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs).
BIG-bench Machine Learning
Explainable artificial intelligence
1 code implementation • 4 Oct 2021 • Alexandre Heuillet, Fabien Couthouis, Natalia Díaz-Rodríguez
This study proposes a novel approach to explain cooperative strategies in multiagent RL using Shapley values, a game theory concept used in XAI that successfully explains the rationale behind decisions taken by Machine Learning algorithms.
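As a rough illustration of how agent-level Shapley values can be estimated, the sketch below averages marginal contributions over random agent orderings; the `team_value(coalition)` evaluation hook (mean episodic return when only the agents in the coalition follow their learned policies, the rest acting as a fixed baseline) is an assumption for illustration, not the paper's code.

import random

def monte_carlo_shapley(agents, team_value, n_samples=1000):
    # Estimate each agent's Shapley value by averaging its marginal contribution
    # to the team's return over randomly sampled agent orderings.
    shapley = {a: 0.0 for a in agents}
    for _ in range(n_samples):
        order = random.sample(agents, len(agents))
        coalition, prev = [], team_value([])
        for agent in order:
            coalition.append(agent)
            value = team_value(coalition)
            shapley[agent] += (value - prev) / n_samples  # marginal contribution of `agent`
            prev = value
    return shapley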
no code implementations • 17 Sep 2021 • Zhaorun Chen, Liang Gong, Te Sun, Binhao Chen, Shenghan Xie, David Filliat, Natalia Díaz-Rodríguez
While the rapid progress of deep learning fuels end-to-end reinforcement learning (RL), its direct application, especially in high-dimensional spaces such as robotic scenarios, still suffers from low sample efficiency.
no code implementations • 29 Apr 2021 • Natalia Díaz-Rodríguez, Rūta Binkytė-Sadauskienė, Wafae Bakkali, Sannidhi Bookseller, Paola Tubaro, Andrius Bacevicius, Raja Chatila
The COVID-19 pandemic has spurred a large number of observational studies reporting linkages between the risk of developing severe COVID-19 or dying from it, and sex and gender.
2 code implementations • 24 Apr 2021 • Natalia Díaz-Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, Siham Tabik, David Filliat, Policarpo Cruz, Rosana Montes, Francisco Herrera
We tackle this problem by considering symbolic knowledge expressed in the form of a domain expert knowledge graph.
no code implementations • 10 Apr 2021 • Björn Lütjens, Brandon Leshchinskiy, Christian Requena-Mesa, Farrukh Chishtie, Natalia Díaz-Rodríguez, Océane Boulais, Aruna Sankaranarayanan, Margaux Masson-Forsythe, Aaron Piña, Yarin Gal, Chedy Raïssi, Alexander Lavin, Dava Newman
Our work aims to enable more visual communication of large-scale climate impacts via visualizing the output of coastal flood models as satellite imagery.
no code implementations • 2 Apr 2021 • Thomas Rojat, Raphaël Puget, David Filliat, Javier Del Ser, Rodolphe Gelin, Natalia Díaz-Rodríguez
Most state-of-the-art methods applied to time series are deep learning methods that are too complex to be interpreted.
no code implementations • 15 Aug 2020 • Alexandre Heuillet, Fabien Couthouis, Natalia Díaz-Rodríguez
A large body of explainable Artificial Intelligence (XAI) literature is emerging on feature relevance techniques to explain a deep neural network (DNN) output or on explaining models that ingest image source data.
Explainable Artificial Intelligence (XAI)
no code implementations • 25 May 2020 • Adrien Bennetot, Vicky Charisi, Natalia Díaz-Rodríguez
Transferring as fast as possible the functioning of our brain to artificial intelligence is an ambitious goal that would help advance the state of the art in AI and robotics.
no code implementations • 13 May 2020 • Stephane Doncieux, Nicolas Bredeche, Léni Le Goff, Benoît Girard, Alexandre Coninx, Olivier Sigaud, Mehdi Khamassi, Natalia Díaz-Rodríguez, David Filliat, Timothy Hospedales, A. Eiben, Richard Duro
Robots are still limited to controlled conditions that the robot designer knows in enough detail to endow the robot with the appropriate models or behaviors.
2 code implementations • 26 Mar 2020 • Pranav Agarwal, Alejandro Betancourt, Vana Panagiotou, Natalia Díaz-Rodríguez
In this paper, we attempt to show the biased nature of currently existing image captioning models and present a new image captioning dataset, Egoshots, consisting of 978 real-life images with no captions.
1 code implementation • 22 Oct 2019 • Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
In the last years, Artificial Intelligence (AI) has achieved a notable momentum that may deliver the best of expectations over many application sectors across the field.
Explainable Artificial Intelligence (XAI)
no code implementations • 19 Sep 2019 • Adrien Bennetot, Jean-Luc Laurent, Raja Chatila, Natalia Díaz-Rodríguez
Many high-performance models suffer from a lack of interpretability.
Explainable Artificial Intelligence (XAI)
no code implementations • 11 Jul 2019 • René Traoré, Hugo Caselles-Dupré, Timothée Lesort, Te Sun, Guanghang Cai, Natalia Díaz-Rodríguez, David Filliat
In multi-task reinforcement learning there are two main challenges: at training time, the ability to learn different policies with a single model; at test time, inferring which of those policies to apply without an external signal.
no code implementations • 29 Jun 2019 • Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat, Natalia Díaz-Rodríguez
An important challenge for machine learning is not necessarily finding solutions that work in the real world but rather finding stable algorithms that can learn in the real world.
no code implementations • 11 Jun 2019 • René Traoré, Hugo Caselles-Dupré, Timothée Lesort, Te Sun, Natalia Díaz-Rodríguez, David Filliat
We focus on the problem of teaching a robot to solve tasks presented sequentially, i.e., in a continual learning scenario.
5 code implementations • 24 Jan 2019 • Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat
Scaling end-to-end reinforcement learning to control real robots from vision presents a series of challenges, in particular in terms of sample efficiency.
no code implementations • 13 Nov 2018 • Vincenzo Lomonaco, Angelo Trotta, Marta Ziosi, Juan de Dios Yáñez Ávila, Natalia Díaz-Rodríguez
In recent years, a rising number of people have arrived in the European Union, traveling across the Mediterranean Sea or overland through Southeast Europe, in what was later named the European migrant crisis.
no code implementations • 31 Oct 2018 • Natalia Díaz-Rodríguez, Vincenzo Lomonaco, David Filliat, Davide Maltoni
Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively through time, enabling the incremental development of ever more complex knowledge and skills.
5 code implementations • 25 Sep 2018 • Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat
State representation learning aims at learning compact representations from raw observations in robotics and control applications.
1 code implementation • 12 Feb 2018 • Timothée Lesort, Natalia Díaz-Rodríguez, Jean-François Goudou, David Filliat
State representation learning (SRL) focuses on a particular kind of representation learning where learned features are in low dimension, evolve through time, and are influenced by actions of an agent.
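As a toy illustration of one common SRL objective, the sketch below encodes observations into a low-dimensional state and requires the next state to be predictable from the current state and action (a forward-model loss); dimensions and module names are illustrative assumptions, and this is only one of the objective families discussed, not the paper's specific implementation.

import torch
import torch.nn as nn

class ForwardModelSRL(nn.Module):
    # Toy SRL module: encode an observation o_t into a low-dimensional state s_t
    # and ask that s_{t+1} be predictable from (s_t, a_t). Dimensions are illustrative.
    def __init__(self, obs_dim=64, state_dim=4, action_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                     nn.Linear(32, state_dim))
        self.forward_model = nn.Linear(state_dim + action_dim, state_dim)

    def loss(self, obs_t, action_t, obs_next):
        s_t, s_next = self.encoder(obs_t), self.encoder(obs_next)
        s_pred = self.forward_model(torch.cat([s_t, action_t], dim=-1))
        return ((s_pred - s_next) ** 2).mean()  # forward-prediction (transition) loss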
1 code implementation • 30 Mar 2016 • Alejandro Betancourt, Natalia Díaz-Rodríguez, Emilia Barakova, Lucio Marcenaro, Matthias Rauterberg, Carlo Regazzoni
Wearable cameras stand out as one of the most promising devices for the upcoming years, and as a consequence, the demand for computer algorithms to automatically understand the videos recorded with them is increasing quickly.