1 code implementation • 13 Nov 2024 • Hana Chockler, David A. Kelly, Daniel Kroening, Youcheng Sun
However, none of the existing tools extracts explanations using a principled approach based on formal definitions of causes and explanations.
no code implementations • 12 Nov 2024 • Stefan Pranger, Hana Chockler, Martin Tappler, Bettina Könighofer
These estimates provide lower and upper bounds on the expected outcomes of executing the policy from every modeled state of the state space.
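One standard way to obtain such guaranteed per-state bounds is interval value iteration on the Markov chain induced by fixing the policy. The sketch below is only an illustration under toy assumptions (an invented transition matrix, per-state outcomes, and discount factor), not the construction used in the paper.

# Minimal sketch: per-state lower/upper bounds on the expected discounted outcome
# of a fixed policy, via interval value iteration on the policy-induced Markov chain.
# The chain P, outcomes r, and discount gamma are toy values for illustration only.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])      # transition matrix of the policy-induced chain
r = np.array([0.0, 1.0, 0.0])        # per-state outcome (e.g. expected reward)
gamma = 0.95

# Start from trivial bounds; every iteration keeps lo <= true value <= hi,
# and the gap shrinks geometrically because gamma < 1.
lo = np.full(len(r), r.min() / (1 - gamma))
hi = np.full(len(r), r.max() / (1 - gamma))
while np.max(hi - lo) > 1e-6:
    lo = r + gamma * P @ lo
    hi = r + gamma * P @ hi

print(lo, hi)  # lower and upper bounds for every modeled state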
no code implementations • 21 Aug 2024 • Santiago Calderón-Peña, Hana Chockler, David A. Kelly
Existing black box explainability tools for object detectors rely on multiple calls to the model, which prevents them from computing explanations in real time.
no code implementations • 3 Jun 2024 • Aditi Ramaswamy, Melane Navaratnarajah, Hana Chockler
With the rise of freely available image generators, AI-generated art has become the centre of a series of heated debates, one of which concerns the concept of human creativity.
no code implementations • 13 Feb 2024 • Milad Kazemi, Jessica Lally, Ekaterina Tishchenko, Hana Chockler, Nicola Paoletti
Our work addresses a fundamental problem in the context of counterfactual inference for Markov Decision Processes (MDPs).
no code implementations • 24 Jan 2024 • Hana Chockler, Joseph Y. Halpern
We focus on explaining image classifiers, taking the work of Mothilal et al. [2021] (MMTS) as our point of departure.
no code implementations • 24 Nov 2023 • Nathan Blake, Hana Chockler, David A. Kelly, Santiago Calderon Pena, Akchunya Chanchal
Existing tools for explaining the output of image classifiers can be divided into white-box tools, which rely on access to the model internals, and black-box tools, which are agnostic to the model.
no code implementations • 23 Nov 2023 • David A. Kelly, Hana Chockler, Daniel Kroening, Nathan Blake, Aditi Ramaswamy, Melane Navaratnarajah, Aaditya Shivakumar
In this paper, we propose a new black-box explainability algorithm and tool, YO-ReX, for efficient explanation of the outputs of object detectors.
no code implementations • 21 Nov 2023 • Mark Levin, Hana Chockler
Policies trained via reinforcement learning (RL) are often very complex even for simple tasks.
no code implementations • 25 Sep 2023 • Hana Chockler, David A. Kelly, Daniel Kroening
Existing explanation tools for image classifiers usually give only a single explanation for an image's classification.
no code implementations • 23 Nov 2022 • Francesca E. D. Raimondi, Tadhg O'Keeffe, Hana Chockler, Andrew R. Lawrence, Tamara Stemberga, Andre Franca, Maksim Sipos, Javed Butler, Shlomo Ben-Haim
We describe the results of applying causal discovery methods to data from a multi-site clinical trial, the Treatment of Preserved Cardiac Function Heart Failure with an Aldosterone Antagonist (TOPCAT) trial.
no code implementations • 21 Nov 2022 • Francesca E. D. Raimondi, Andrew R. Lawrence, Hana Chockler
This paper proposes a method for measuring fairness through equality of effort by applying algorithmic recourse through minimal interventions.
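A toy rendering of the idea (a hypothetical setup with an assumed linear classifier and synthetic data, not the paper's causal-recourse method): for each negatively classified individual, compute the smallest change that flips the decision, then compare the average effort across groups.

# Hypothetical sketch: "equality of effort" via minimal recourse against an assumed
# linear decision rule w.x + b >= 0. The paper works with causal minimal interventions;
# a plain feature change stands in for an intervention here, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.0, 2.0]), -1.0

def minimal_effort(x):
    """Smallest L2-norm change to x that reaches the decision boundary."""
    score = w @ x + b
    return max(0.0, -score) / np.linalg.norm(w)

# Synthetic feature vectors for two groups (invented for this sketch).
group_a = rng.normal(loc=0.2, scale=0.5, size=(200, 2))
group_b = rng.normal(loc=0.0, scale=0.5, size=(200, 2))

effort_a = np.mean([minimal_effort(x) for x in group_a if w @ x + b < 0])
effort_b = np.mean([minimal_effort(x) for x in group_b if w @ x + b < 0])
print(effort_a, effort_b)  # a large gap indicates unequal effort between the groups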
no code implementations • 11 Oct 2022 • Sander Beckers, Hana Chockler, Joseph Y. Halpern
In this paper we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality (Halpern, 2016).
no code implementations • 29 Sep 2022 • Sander Beckers, Hana Chockler, Joseph Y. Halpern
In a companion paper (Beckers et al. 2022), we defined a qualitative notion of harm: either harm is caused, or it is not.
no code implementations • 17 Aug 2022 • Steven Kleinegesse, Andrew R. Lawrence, Hana Chockler
Causal discovery has become a vital tool for scientists and practitioners wanting to discover causal relationships from observational data.
no code implementations • 13 Aug 2022 • Stefanos Ioannou, Hana Chockler, Alexander Hammers, Andrew P. King
We find significant sex and race bias effects in segmentation model performance.
no code implementations • 27 Jan 2022 • Xin Du, Benedicte Legastelois, Bhargavi Ganesh, Ajitha Rajan, Hana Chockler, Vaishak Belle, Stuart Anderson, Subramanian Ramamoorthy
Robustness evaluations like our checklist will be crucial in future safety evaluations of visual perception modules, and will be useful for a wide range of stakeholders, including designers, deployers, and regulators involved in the certification of these systems.
no code implementations • 16 Nov 2021 • Daniel McNamee, Hana Chockler
Policies trained via reinforcement learning (RL) are often very complex even for simple tasks.
no code implementations • 27 Oct 2021 • Ayman Boustati, Hana Chockler, Daniel C. McNamee
In this study, we apply causal reasoning in the offline reinforcement learning setting to transfer a learned policy to new environments.
no code implementations • ICCV 2021 • Hana Chockler, Daniel Kroening, Youcheng Sun
Existing algorithms for explaining the output of image classifiers perform poorly on inputs where the object of interest is partially occluded.
no code implementations • 15 Nov 2020 • Roderick Bloem, Hana Chockler, Masoud Ebrahimi, Dana Fisman, Heinz Riener
We define the problem of learning a transducer $S$ from a target language $U$ containing possibly conflicting transducers, using membership queries and conjecture queries.
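The query protocol can be pictured as a learner interacting with a teacher for the target language $U$. The interface below is an assumed rendering of that protocol (the exact form of the queries in the paper may differ), included only to make the two query types concrete.

# Assumed sketch of a learner/teacher protocol: membership queries ask about a
# single input/output behaviour, conjecture queries submit a whole hypothesis
# transducer and receive either approval or a counterexample.
from typing import Callable, Optional, Tuple

class Teacher:
    def __init__(self,
                 consistent: Callable[[str, str], bool],
                 counterexample: Callable[[object], Optional[Tuple[str, str]]]):
        self._consistent = consistent
        self._counterexample = counterexample

    def membership(self, word: str, output: str) -> bool:
        # Is the pair (word, output) consistent with the target language U?
        return self._consistent(word, output)

    def conjecture(self, hypothesis: object) -> Optional[Tuple[str, str]]:
        # Is the hypothesis transducer acceptable with respect to U?
        # If not, return an input/output counterexample; otherwise return None.
        return self._counterexample(hypothesis)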
2 code implementations • NeurIPS 2021 • Hadrien Pouget, Hana Chockler, Youcheng Sun, Daniel Kroening
Policies trained via Reinforcement Learning (RL) are often needlessly complex, making them difficult to analyse and interpret.
no code implementations • 20 May 2020 • Dalal Alrajeh, Hana Chockler, Joseph Y. Halpern
We formally define the notion of an effective intervention, and then consider how experts' causal judgments can be combined in order to determine the most effective intervention.
1 code implementation • 6 Aug 2019 • Youcheng Sun, Hana Chockler, Xiaowei Huang, Daniel Kroening
The black-box nature of deep neural networks (DNNs) makes it impossible to understand why a particular output is produced, creating demand for "Explainable AI".
1 code implementation • 29 Aug 2016 • Hana Chockler
The theory of actual causality defined by Halpern and Pearl, together with its quantitative measure, the degree of responsibility, has been shown to be extremely useful in various areas of computer science, owing to a good match between the results it produces and our intuition.
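For context, the Chockler-Halpern quantification referred to here can be stated schematically as follows (a rendering of the known definition, not a contribution of this paper):

% Degree of responsibility of X = x for an outcome phi in a causal setting (M, u):
\[
  \mathrm{dr}\bigl((M, \vec{u}),\, X = x,\, \varphi\bigr) \;=\; \frac{1}{k + 1},
\]
% where k is the minimal number of changes to the other variables needed to make
% phi counterfactually depend on X = x; k = 0 gives responsibility 1 (a plain
% counterfactual cause), and larger k spreads responsibility over more variables.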
no code implementations • 9 Dec 2014 • Gadi Aleksandrowicz, Hana Chockler, Joseph Y. Halpern, Alexander Ivrii
Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether $X=x$ is a cause of $Y=y$ is NP-complete in binary models (where all variables can take on only two values) and $\Sigma_2^P$-complete in general models.