Search Results for author: Hana Chockler

Found 25 papers, 4 papers with code

Causal Explanations for Image Classifiers

1 code implementation 13 Nov 2024 Hana Chockler, David A. Kelly, Daniel Kroening, Youcheng Sun

However, none of the existing tools uses a principled approach based on formal definitions of causes and explanations to extract those explanations.

Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning

no code implementations 12 Nov 2024 Stefan Pranger, Hana Chockler, Martin Tappler, Bettina Könighofer

These estimates provide lower and upper bounds on the expected outcomes of the policy execution across all modeled states in the state space.

Deep Reinforcement Learning, Reinforcement Learning (RL)

Real-Time Incremental Explanations for Object Detectors

no code implementations 21 Aug 2024 Santiago Calderón-Peña, Hana Chockler, David A. Kelly

Existing black box explainability tools for object detectors rely on multiple calls to the model, which prevents them from computing explanations in real time.

Object

It's a Feature, Not a Bug: Measuring Creative Fluidity in Image Generators

no code implementations 3 Jun 2024 Aditi Ramaswamy, Melane Navaratnarajah, Hana Chockler

With the rise of freely available image generators, AI-generated art has become the centre of a series of heated debates, one of which concerns the concept of human creativity.

Image Generation

Counterfactual Influence in Markov Decision Processes

no code implementations 13 Feb 2024 Milad Kazemi, Jessica Lally, Ekaterina Tishchenko, Hana Chockler, Nicola Paoletti

Our work addresses a fundamental problem in the context of counterfactual inference for Markov Decision Processes (MDPs).

counterfactual, Counterfactual Inference

Explaining Image Classifiers

no code implementations 24 Jan 2024 Hana Chockler, Joseph Y. Halpern

We focus on explaining image classifiers, taking the work of Mothilal et al. [2021] (MMTS) as our point of departure.

MRxaI: Black-Box Explainability for Image Classifiers in a Medical Setting

no code implementations 24 Nov 2023 Nathan Blake, Hana Chockler, David A. Kelly, Santiago Calderon Pena, Akchunya Chanchal

Existing tools for explaining the output of image classifiers can be divided into white-box tools, which rely on access to the model internals, and black-box tools, which are agnostic to the model.

You Only Explain Once

no code implementations 23 Nov 2023 David A. Kelly, Hana Chockler, Daniel Kroening, Nathan Blake, Aditi Ramaswamy, Melane Navaratnarajah, Aaditya Shivakumar

In this paper, we propose a new black-box explainability algorithm and tool, YO-ReX, for efficient explanation of the outputs of object detectors.

Clustered Policy Decision Ranking

no code implementations 21 Nov 2023 Mark Levin, Hana Chockler

Policies trained via reinforcement learning (RL) are often very complex even for simple tasks.

Fault localization, Reinforcement Learning (RL)

Multiple Different Black Box Explanations for Image Classifiers

no code implementations 25 Sep 2023 Hana Chockler, David A. Kelly, Daniel Kroening

Existing explanation tools for image classifiers usually give only a single explanation for an image's classification.

Causal Analysis of the TOPCAT Trial: Spironolactone for Preserved Cardiac Function Heart Failure

no code implementations 23 Nov 2022 Francesca E. D. Raimondi, Tadhg O'Keeffe, Hana Chockler, Andrew R. Lawrence, Tamara Stemberga, Andre Franca, Maksim Sipos, Javed Butler, Shlomo Ben-Haim

We describe the results of applying causal discovery methods to data from TOPCAT, a multi-site clinical trial on the Treatment of Preserved Cardiac Function Heart Failure with an Aldosterone Antagonist.

Causal Discovery

Equality of Effort via Algorithmic Recourse

no code implementations 21 Nov 2022 Francesca E. D. Raimondi, Andrew R. Lawrence, Hana Chockler

This paper proposes a method for measuring fairness through equality of effort by applying algorithmic recourse through minimal interventions.
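
As a concrete illustration of what "equality of effort" means operationally, the sketch below measures the minimal intervention needed to flip a model's decision and compares that effort across two groups. This is a toy example, not the paper's implementation: the decision rule, the feature grid, and the cost function are all invented for illustration.

```python
import itertools

# Hypothetical linear decision rule standing in for a trained model.
def approved(income: int, savings: int) -> bool:
    return 2 * income + savings >= 10

def minimal_effort(income: int, savings: int, max_delta: int = 10):
    """Smallest total feature increase that flips the decision to approval.

    Brute-force search over integer interventions; real recourse methods
    pose this as a constrained optimisation problem.
    """
    best = None
    for di, ds in itertools.product(range(max_delta + 1), repeat=2):
        if approved(income + di, savings + ds):
            cost = di + ds
            best = cost if best is None else min(best, cost)
    return best

# Two rejected applicants from different (hypothetical) groups.
effort_a = minimal_effort(income=2, savings=3)  # group A applicant
effort_b = minimal_effort(income=1, savings=2)  # group B applicant

# Equality of effort asks whether these minimal efforts agree across groups.
print(effort_a, effort_b)  # 2 3 -> unequal effort for the same outcome
```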

counterfactual Fairness

A Causal Analysis of Harm

no code implementations 11 Oct 2022 Sander Beckers, Hana Chockler, Joseph Y. Halpern

In this paper we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality (Halpern, 2016).

Philosophy

Quantifying Harm

no code implementations 29 Sep 2022 Sander Beckers, Hana Chockler, Joseph Y. Halpern

In a companion paper (Beckers et al. 2022), we defined a qualitative notion of harm: either harm is caused, or it is not.

Domain Knowledge in A*-Based Causal Discovery

no code implementations 17 Aug 2022 Steven Kleinegesse, Andrew R. Lawrence, Hana Chockler

Causal discovery has become a vital tool for scientists and practitioners wanting to discover causal relationships from observational data.

Causal Discovery

Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities

no code implementations 27 Jan 2022 Xin Du, Benedicte Legastelois, Bhargavi Ganesh, Ajitha Rajan, Hana Chockler, Vaishak Belle, Stuart Anderson, Subramanian Ramamoorthy

Robustness evaluations like our checklist will be crucial in future safety evaluations of visual perception modules and useful to a wide range of stakeholders, including designers, deployers, and regulators involved in the certification of these systems.

Autonomous Driving

Causal policy ranking

no code implementations 16 Nov 2021 Daniel McNamee, Hana Chockler

Policies trained via reinforcement learning (RL) are often very complex even for simple tasks.

counterfactual, Counterfactual Reasoning

Transfer learning with causal counterfactual reasoning in Decision Transformers

no code implementations 27 Oct 2021 Ayman Boustati, Hana Chockler, Daniel C. McNamee

In this study, we apply causal reasoning in the offline reinforcement learning setting to transfer a learned policy to new environments.

counterfactual, Counterfactual Reasoning

Explanations for Occluded Images

no code implementations ICCV 2021 Hana Chockler, Daniel Kroening, Youcheng Sun

Existing algorithms for explaining the output of image classifiers perform poorly on inputs where the object of interest is partially occluded.

Safety Synthesis Sans Specification

no code implementations 15 Nov 2020 Roderick Bloem, Hana Chockler, Masoud Ebrahimi, Dana Fisman, Heinz Riener

We define the problem of learning a transducer $S$ from a target language $U$ containing possibly conflicting transducers, using membership queries and conjecture queries.
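
To make the query model concrete, here is a minimal sketch of the two-query interface used in Angluin-style active learning, in Python. The Teacher class, its sample-based conjecture check, and the toy target are all invented for illustration; the paper's setting, in which the target language $U$ may contain conflicting transducers, is considerably richer.

```python
from typing import Callable, List, Optional

class Teacher:
    """Oracle answering the two query types of Angluin-style learning."""

    def __init__(self, target: Callable[[str], str], sample: List[str]):
        self.target = target
        self.sample = sample  # finite stand-in for a full equivalence check

    def membership(self, word: str) -> str:
        """Membership query: the target's output on a single input word."""
        return self.target(word)

    def conjecture(self, hypothesis: Callable[[str], str]) -> Optional[str]:
        """Conjecture query: accept the hypothesis or return a counterexample."""
        for word in self.sample:
            if hypothesis(word) != self.target(word):
                return word
        return None

# Toy target transduction: delete all vowels from the input word.
teacher = Teacher(
    target=lambda w: "".join(c for c in w if c not in "aeiou"),
    sample=["", "a", "ab", "abc", "hello"],
)

print(teacher.membership("hello"))      # "hll"
print(teacher.conjecture(lambda w: w))  # "a": the identity hypothesis is refuted
```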

Ranking Policy Decisions

2 code implementations NeurIPS 2021 Hadrien Pouget, Hana Chockler, Youcheng Sun, Daniel Kroening

Policies trained via Reinforcement Learning (RL) are often needlessly complex, making them difficult to analyse and interpret.

Atari Games, Reinforcement Learning (RL)

Combining Experts' Causal Judgments

no code implementations 20 May 2020 Dalal Alrajeh, Hana Chockler, Joseph Y. Halpern

We formally define the notion of an effective intervention, and then consider how experts' causal judgments can be combined in order to determine the most effective intervention.

Explaining Image Classifiers using Statistical Fault Localization

1 code implementation 6 Aug 2019 Youcheng Sun, Hana Chockler, Xiaowei Huang, Daniel Kroening

The black-box nature of deep neural networks (DNNs) makes it impossible to understand why a particular output is produced, creating demand for "Explainable AI".

Fault localization

Causality and Responsibility for Formal Verification and Beyond

1 code implementation 29 Aug 2016 Hana Chockler

The theory of actual causality defined by Halpern and Pearl, together with its quantitative measure, the degree of responsibility, has been shown to be extremely useful in various areas of computer science due to a good match between the results it produces and our intuition.

Legal Reasoning

The Computational Complexity of Structure-Based Causality

no code implementations 9 Dec 2014 Gadi Aleksandrowicz, Hana Chockler, Joseph Y. Halpern, Alexander Ivrii

Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether $X=x$ is a cause of $Y=y$ is NP-complete in binary models (where all variables can take on only two values) and $\Sigma_2^P$-complete in general models.
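
For intuition about the decision problem, the sketch below runs the naive "but-for" test on a tiny binary model: flip one variable, hold the rest of the context fixed, and see whether the outcome changes. The model and variable names are invented for illustration, and this covers only the simplest part of the Halpern-Pearl definition; the full definition also quantifies over contingencies, which is what pushes the complexity up to $\Sigma_2^P$ in general models.

```python
# Tiny binary structural model: Y = (A and B) or C.
def outcome(a: int, b: int, c: int) -> int:
    return int((a and b) or c)

def but_for_cause(var: str, context: dict) -> bool:
    """Does flipping `var`, with the rest of the context fixed, change Y?

    Naive but-for test only; the Halpern-Pearl condition AC2 additionally
    searches over contingencies (alternative settings of other variables).
    """
    actual = outcome(**context)
    flipped = dict(context, **{var: 1 - context[var]})
    return outcome(**flipped) != actual

context = {"a": 1, "b": 1, "c": 0}  # the actual world, where Y = 1
for var in context:
    print(var, but_for_cause(var, context))
# a True, b True, c False: flipping C alone cannot change Y in this context
```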
