no code implementations • 11 Mar 2024 • Balint Gyevnar, Stephanie Droop, Tadeg Quillien
A hallmark of a good XAI system is explanations that users can understand and act on.
no code implementations • 8 Feb 2024 • Anton Kuznietsov, Balint Gyevnar, Cheng Wang, Steven Peters, Stefano V. Albrecht
One way to mitigate this challenge is to utilize explainable AI (XAI) techniques.
1 code implementation • 21 Feb 2023 • Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
We present CEMA (Causal Explanations in Multi-Agent systems), a framework for generating causal natural language explanations of an agent's decisions in dynamic sequential multi-agent systems, with the goal of building more trustworthy autonomous agents.
no code implementations • 21 Feb 2023 • Balint Gyevnar, Nick Ferguson, Burkhard Schafer
To begin to bridge this gap, we review and clarify the terminology with which XAI and European regulation -- the AI Act and the related General Data Protection Regulation (GDPR) -- define basic notions of transparency.
3 code implementations • 2 Aug 2022 • Ibrahim H. Ahmed, Cillian Brewitt, Ignacio Carlucho, Filippos Christianos, Mhairi Dunion, Elliot Fosong, Samuel Garcin, Shangmin Guo, Balint Gyevnar, Trevor McInroe, Georgios Papoudakis, Arrasy Rahman, Lukas Schäfer, Massimiliano Tamborski, Giuseppe Vecchio, Cheng Wang, Stefano V. Albrecht
The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning.
2 code implementations • 10 Mar 2021 • Cillian Brewitt, Balint Gyevnar, Stefano V. Albrecht
As autonomous driving is safety-critical, it is important to have methods that are human-interpretable and whose safety can be formally verified.
Autonomous Driving • Robotics • Multiagent Systems