There has been a recent resurgence of interest in explainable artificial intelligence (XAI), which aims to reduce the opacity of AI-based decision-making systems so that humans can scrutinize and trust them.
Causal inference is at the heart of empirical research in natural and social sciences and is critical for scientific discovery and informed decision making.
However, the underlying data on which these systems are trained often reflect discrimination, suggesting a database repair problem.
The study of causality, or causal inference (how much a given treatment causally affects a given outcome in a population), goes well beyond correlation or association analysis of variables, and is critical for making sound data-driven decisions and policies in a multitude of applications.
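The gap between association and causation can be seen on the classic kidney-stone data, a standard textbook instance of Simpson's paradox (the numbers below are the well-known published counts, not data from this work): the naive success-rate comparison favors treatment B, while adjusting for stone size, a confounder, reverses the conclusion. A minimal sketch:

```python
# Illustrative sketch: Simpson's paradox on the classic kidney-stone data.
# Counts per stratum are (successes, trials) for each treatment.
data = {
    "small": {"A": (81, 87),  "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def naive_rate(t):
    # Pooled (associational) success rate, ignoring the confounder.
    s = sum(data[z][t][0] for z in data)
    n = sum(data[z][t][1] for z in data)
    return s / n

def adjusted_rate(t):
    # Backdoor adjustment: average stratum-specific success rates,
    # weighted by the marginal probability of each stratum.
    total = sum(data[z][x][1] for z in data for x in ("A", "B"))
    return sum((sum(data[z][x][1] for x in ("A", "B")) / total)
               * (data[z][t][0] / data[z][t][1]) for z in data)

print(f"naive:    A={naive_rate('A'):.3f}  B={naive_rate('B'):.3f}")
print(f"adjusted: A={adjusted_rate('A'):.3f}  B={adjusted_rate('B'):.3f}")
# Naive comparison favors B; the adjusted (causal) comparison favors A.
```

The reversal is exactly why correlation alone cannot ground treatment decisions: the confounder (stone size) influences both treatment choice and outcome.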
In this work we establish precise connections between query-answer (QA) causality and both abductive diagnosis and the view-update problem in databases, allowing us to obtain new algorithmic and complexity results for QA-causality.
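To fix intuitions about QA-causality, the following toy sketch (not this work's algorithm; the relation, query, and brute-force search are illustrative assumptions) computes causes for a Boolean conjunctive query in the standard sense: a tuple is a counterfactual cause if deleting it alone falsifies the query, and an actual cause if it becomes counterfactual after deleting some contingency set of other tuples.

```python
from itertools import combinations

# Toy database: relation R as a set of pairs.
# Boolean query Q := exists x,y,z . R(x,y) AND R(y,z)  (a path of length 2)
D = {("a", "b"), ("b", "c"), ("b", "d")}

def q(db):
    # Q holds iff some tuple's second component matches another's first.
    return any(y == u for (_, y) in db for (u, _) in db)

def counterfactual_causes(db):
    # Tuples whose sole deletion flips Q from true to false.
    return {t for t in db if q(db) and not q(db - {t})}

def is_actual_cause(t, db):
    # Brute force over contingency sets S of other tuples: t is an actual
    # cause if, after deleting some S, deleting t falsifies Q.
    rest = db - {t}
    return any(q(db - set(s)) and not q(db - set(s) - {t})
               for k in range(len(rest) + 1)
               for s in combinations(rest, k))

print("counterfactual:", counterfactual_causes(D))
print("actual:", {t for t in D if is_actual_cause(t, D)})
```

Here ("a", "b") is the only counterfactual cause (both length-2 paths pass through it), while ("b", "c") and ("b", "d") are actual causes with each other as a contingency set; the exhaustive subset search is exponential and serves only to make the definitions concrete.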