Search Results for author: Balint Gyevnar

Found 6 papers, 4 papers with code

People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior

1 code implementation · 11 Mar 2024 · Balint Gyevnar, Stephanie Droop, Tadeg Quillien, Shay B. Cohen, Neil R. Bramley, Christopher G. Lucas, Stefano V. Albrecht

Cognitive science can help us understand which explanations people might expect, and in which format they frame these explanations, whether causal, counterfactual, or teleological (i.e., purpose-oriented).

Attribute Autonomous Driving +2

Causal Explanations for Sequential Decision-Making in Multi-Agent Systems

1 code implementation · 21 Feb 2023 · Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht

We present CEMA: Causal Explanations in Multi-Agent systems; a framework for creating causal natural language explanations of an agent's decisions in dynamic sequential multi-agent systems to build more trustworthy autonomous agents.

Autonomous Driving counterfactual +2

Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?

no code implementations · 21 Feb 2023 · Balint Gyevnar, Nick Ferguson, Burkhard Schafer

To begin to bridge this gap, we survey and clarify the terminology of how XAI and European regulation (the AI Act and the related General Data Protection Regulation, GDPR) view basic definitions of transparency.

Explainable Artificial Intelligence (XAI)

GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving

2 code implementations · 10 Mar 2021 · Cillian Brewitt, Balint Gyevnar, Stefano V. Albrecht

As autonomous driving is safety-critical, it is important to have methods that are human-interpretable and whose safety can be formally verified.

Autonomous Driving Robotics Multiagent Systems
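To make the idea behind this entry concrete, here is a minimal sketch (not the GRIT implementation) of goal recognition framed as classification with a learned decision tree, as the paper's title suggests. The feature names, goal labels, and toy data are illustrative assumptions.

```python
# Hedged sketch: goal recognition as decision-tree classification.
# Features and labels below are hypothetical, not from the GRIT paper.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy trajectory features per vehicle: [speed, heading_to_exit, in_turn_lane]
X = [
    [8.0, 0.1, 1],   # slowing, aligned with exit, in turn lane
    [14.0, 0.9, 0],  # fast, heading straight
    [6.5, 0.2, 1],
    [15.0, 1.1, 0],
]
y = ["turn-right", "go-straight", "turn-right", "go-straight"]

# A shallow tree keeps the learned rules human-readable.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(clf.predict([[7.0, 0.15, 1]])[0])  # classifies the observed trajectory
print(export_text(clf))  # the tree itself can be inspected rule by rule
```

Because the learned model is a small set of explicit threshold rules, it can be inspected by a human and, in principle, checked by formal verification tools, which is the interpretability/verifiability point the abstract snippet makes.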
