Search Results for author: Tobias Huber

Found 9 papers, 5 papers with code

GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual Explanations

1 code implementation · 24 Feb 2023 · Tobias Huber, Maximilian Demmler, Silvan Mertes, Matthew L. Olson, Elisabeth André

However, research focusing on counterfactual explanations, specifically for RL agents with visual input, is scarce and does not go beyond identifying defective agents.

Tasks: counterfactual, Decision Making, +2

Integrating Policy Summaries with Reward Decomposition for Explaining Reinforcement Learning Agents

no code implementations · 21 Oct 2022 · Yael Septon, Tobias Huber, Elisabeth André, Ofra Amir

Methods that help users understand the behavior of such agents can roughly be divided into local explanations that analyze specific decisions of the agents and global explanations that convey the general strategy of the agents.

Tasks: Decision Making, reinforcement-learning, +1

Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems

no code implementations · 19 Jul 2022 · Silvan Mertes, Christina Karle, Tobias Huber, Katharina Weitz, Ruben Schlagowski, Elisabeth André

We evaluate our approach in an extensive user study, revealing that it is able to significantly contribute to the participants' understanding of an AI.

Tasks: counterfactual, Counterfactual Explanation, +2

Dynamic Difficulty Adjustment in Virtual Reality Exergames through Experience-driven Procedural Content Generation

no code implementations · 19 Aug 2021 · Tobias Huber, Silvan Mertes, Stanislava Rangelova, Simon Flutura, Elisabeth André

As a proof-of-concept, we implement an initial prototype in which the player must traverse a maze that includes several exercise rooms, whereby the generation of the maze is realized by a neural network.

Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents

1 code implementation · 18 Jan 2021 · Tobias Huber, Benedikt Limmer, Elisabeth André

One of the most prominent methods for explaining the behavior of Deep Reinforcement Learning (DRL) agents is the generation of saliency maps that show how much each pixel contributed to the agent's decision.

Tasks: Atari Games, Benchmarking, +2
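To illustrate the perturbation-based idea named in the title: occlude each image patch in turn and measure how much the policy's output changes. This is a minimal sketch, not the paper's implementation; the `policy` callable (frame in, vector of action scores out) is a hypothetical stand-in.

```python
import numpy as np

def perturbation_saliency(policy, frame, patch=8, fill=0.0):
    """Occlusion-style saliency: blank out each patch and record how
    strongly the policy output shifts. Larger shift => more salient."""
    base = policy(frame)
    h, w = frame.shape[:2]
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = frame.copy()
            perturbed[i:i + patch, j:j + patch] = fill  # occlude one patch
            saliency[i // patch, j // patch] = np.abs(policy(perturbed) - base).sum()
    return saliency
```

Real perturbation methods for Atari agents vary the perturbation (blur, noise, circular masks) rather than flat occlusion, but the measure-the-output-change loop is the common core.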

Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps

1 code implementation · 18 May 2020 · Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir

Specifically, we augment strategy summaries that extract important trajectories of states from simulations of the agent with saliency maps which show what information the agent attends to.

Tasks: Atari Games, Decision Making, +3
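A strategy summary of the kind described above picks out "important" states from simulated trajectories. A common importance criterion (an assumption here, not necessarily the paper's exact one) is the spread of a state's Q-values: states where action choice matters most.

```python
import numpy as np

def state_importance(q_values):
    # Spread of Q-values: how much the choice of action matters in this state.
    return np.max(q_values) - np.min(q_values)

def summarize(trajectory_q, k=3):
    """Return indices of the k most important states, given one
    Q-value vector per visited state."""
    scores = [state_importance(q) for q in trajectory_q]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```

The selected states would then be shown to the user together with the saliency map for each, combining the global summary with local explanations.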

Are Bitcoin Bubbles Predictable? Combining a Generalized Metcalfe's Law and the LPPLS Model

1 code implementation · 15 Mar 2018 · Spencer Wheatley, Didier Sornette, Tobias Huber, Max Reppen, Robert N. Gantner

We develop a strong diagnostic for bubbles and crashes in bitcoin, by analyzing the coincidence (and its absence) of fundamental and technical indicators.
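The generalized Metcalfe's law in the title relates network value to active users as value ≈ C·nᵝ (the classic law fixes β = 2). A minimal log-space fit, assuming hypothetical `users` and `market_cap` arrays; this sketches the fundamental-indicator side only, not the paper's full LPPLS analysis.

```python
import numpy as np

def fit_metcalfe(users, market_cap):
    """Fit value = C * users**beta by ordinary least squares in log space:
    log(value) = log(C) + beta * log(users)."""
    beta, log_c = np.polyfit(np.log(users), np.log(market_cap), 1)
    return np.exp(log_c), beta
```

The fitted curve serves as a fundamental value estimate; sustained deviations of price above it would then be examined with the technical (LPPLS) indicator.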
