no code implementations • 10 Apr 2024 • Haiying Huang, Adnan Darwiche
The unit selection problem aims to find objects, called units, that optimize a causal objective function describing the objects' behavior in a causal context (e.g., selecting customers who are about to churn but would most likely change their mind if encouraged).
no code implementations • 7 Mar 2024 • Yizuo Chen, Adnan Darwiche
We study the identification of causal effects, motivated by two improvements to identifiability which can be attained if one knows that some variables in a causal graph are functionally determined by their parents (without needing to know the specific functions).
1 code implementation • 5 Oct 2023 • David Huber, Yizuo Chen, Alessandro Antonucci, Adnan Darwiche, Marco Zaffalon
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
no code implementations • 9 May 2023 • Adnan Darwiche
We discuss in this tutorial a comprehensive, semantical and computational theory of explainability along these dimensions which is based on some recent developments in symbolic logic.
no code implementations • 28 Apr 2023 • Chunxi Ji, Adnan Darwiche
We show that these explanations can be significantly improved in the presence of non-binary features, leading to a new class of explanations that relay more information about decisions and the underlying classifiers.
no code implementations • 28 Feb 2023 • Haiying Huang, Adnan Darwiche
The unit selection problem aims to identify objects, called units, that are most likely to exhibit a desired mode of behavior when subjected to stimuli (e.g., customers who are about to churn but would change their mind if encouraged).
no code implementations • 24 Nov 2022 • Yunqiu Han, Yizuo Chen, Adnan Darwiche
We show that counterfactual reasoning is no harder than associational or interventional reasoning on fully specified SCMs in the context of two computational frameworks.
no code implementations • 20 Mar 2022 • Adnan Darwiche, Chunxi Ji
In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision.
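A minimal sketch of the prime-implicate computation itself (not the paper's complete-reason construction): brute-force the minimal implied clauses of a small, made-up decision function f(x, y, z) = x AND (y OR z).

```python
from itertools import product, combinations

# Sketch only: the function f and all names here are made up for illustration.
VARS = ["x", "y", "z"]

def f(a):
    return bool(a["x"] and (a["y"] or a["z"]))

# All models (satisfying assignments) of f.
models = [a for bits in product([0, 1], repeat=len(VARS))
          for a in [dict(zip(VARS, bits))] if f(a)]

def lit_holds(lit, a):
    neg = lit.startswith("~")
    var = lit[1:] if neg else lit
    return (not a[var]) if neg else bool(a[var])

def implied(clause):
    # f |= clause iff the clause holds in every model of f.
    return all(any(lit_holds(l, a) for l in clause) for a in models)

literals = VARS + ["~" + v for v in VARS]
implicates = []
for r in range(1, len(VARS) + 1):
    for combo in combinations(literals, r):
        clause = frozenset(combo)
        if any(l.startswith("~") and l[1:] in clause for l in clause):
            continue  # skip tautological clauses like (x OR ~x)
        if implied(clause):
            implicates.append(clause)

# Prime implicates: implied clauses with no implied proper subset.
primes = [c for c in implicates if not any(d < c for d in implicates)]
print(sorted(tuple(sorted(c)) for c in primes))  # [('x',), ('y', 'z')]
```

Here x is necessary outright, while y and z are only jointly necessary, which is exactly the kind of distinction a prime implicate exposes.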
no code implementations • 7 Feb 2022 • Adnan Darwiche
One can compile a non-parametric causal graph into an arithmetic circuit that supports inference in time linear in the circuit size.
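A minimal sketch of evaluating such a circuit, for a one-variable network P(A) with made-up parameters θ_a0 = 0.3, θ_a1 = 0.7. The circuit computes the network polynomial f = λ_a0·θ_a0 + λ_a1·θ_a1, and one bottom-up pass (time linear in circuit size) answers a query.

```python
from math import prod

def evaluate(node, env):
    """Evaluate the arithmetic circuit bottom-up given leaf values in env."""
    op, args = node
    if op == "leaf":
        return env[args]
    vals = [evaluate(child, env) for child in args]
    return sum(vals) if op == "+" else prod(vals)

# Hand-built circuit for f = l_a0 * t_a0 + l_a1 * t_a1 (names are made up).
circuit = ("+", [
    ("*", [("leaf", "l_a0"), ("leaf", "t_a0")]),
    ("*", [("leaf", "l_a1"), ("leaf", "t_a1")]),
])

theta = {"t_a0": 0.3, "t_a1": 0.7}
# All indicators on: the normalization constant (here 1, up to rounding).
print(evaluate(circuit, {**theta, "l_a0": 1, "l_a1": 1}))
# Indicator for A=1 only: P(A=1) = 0.7 (up to rounding).
print(evaluate(circuit, {**theta, "l_a0": 0, "l_a1": 1}))
```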
no code implementations • 7 Feb 2022 • Adnan Darwiche
Tractable Boolean and arithmetic circuits have been studied extensively in AI for over two decades now.
no code implementations • 23 Aug 2021 • Adnan Darwiche, Pierre Marquis
This leads to a refinement of quantified Boolean logic with literal quantification as its primitive.
no code implementations • 3 Jul 2020 • Arthur Choi, Andy Shih, Anchal Goyanka, Adnan Darwiche
Recent work has shown that the input-output behavior of some machine learning systems can be captured symbolically using Boolean expressions or tractable Boolean circuits, which facilitates reasoning about the behavior of these systems.
no code implementations • 12 Jun 2020 • Yujia Shen, Arthur Choi, Adnan Darwiche
We propose to first learn a functional and parameterized representation of a conditional probability table (CPT), such as a neural network.
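A minimal sketch of the idea of a functional CPT, using a tiny logistic model in place of a neural network; the weights are made up for illustration.

```python
import math

# Sketch only: P(C | A, B) represented functionally rather than as an
# explicit table. The weights are arbitrary, not learned.
w = {"A": 1.5, "B": -0.8, "bias": 0.2}

def p_c_given(a, b):
    """Return the distribution over binary C for parent values a, b."""
    z = w["A"] * a + w["B"] * b + w["bias"]
    p1 = 1.0 / (1.0 + math.exp(-z))
    return {0: 1.0 - p1, 1: p1}

# The induced CPT has one distribution per parent configuration:
for a in (0, 1):
    for b in (0, 1):
        print((a, b), p_c_given(a, b))
```

The functional form shares parameters across parent configurations, which is what makes learning feasible when the explicit table would be too large.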
no code implementations • 18 Apr 2020 • Adnan Darwiche
We consider three modern roles for logic in artificial intelligence, which are based on the theory of tractable Boolean circuits: (1) logic as a basis for computation, (2) logic for learning from a combination of data and knowledge, and (3) logic for reasoning about the behavior of machine learning systems.
no code implementations • 5 Apr 2020 • Weijia Shi, Andy Shih, Adnan Darwiche, Arthur Choi
We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs).
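A minimal sketch of the flavor of such compilation (not the paper's algorithm): compile a tiny "binary neuron" f(x1, x2, x3) = [x1 + x2 + x3 >= 2] into a reduced ordered decision diagram via Shannon expansion, then count its models in time linear in the diagram size.

```python
from functools import lru_cache

# Sketch only: the neuron and threshold are made up for illustration.
n = 3          # number of binary inputs
THRESHOLD = 2  # the neuron fires when at least 2 inputs are on

@lru_cache(maxsize=None)
def build(i, acc):
    """Diagram node over inputs x_{i+1}..x_n, given acc inputs already on."""
    if acc >= THRESHOLD:               # fires regardless of remaining inputs
        return True
    if acc + (n - i) < THRESHOLD:      # can no longer fire
        return False
    lo = build(i + 1, acc)             # branch x_{i+1} = 0
    hi = build(i + 1, acc + 1)         # branch x_{i+1} = 1
    return (i + 1, lo, hi) if lo != hi else lo   # reduction rule

def count_models(node, from_var=1):
    """Number of satisfying inputs over variables x_{from_var}..x_n."""
    if node is True:
        return 2 ** (n - from_var + 1)
    if node is False:
        return 0
    i, lo, hi = node
    free = 2 ** (i - from_var)         # variables skipped by reduction
    return free * (count_models(lo, i + 1) + count_models(hi, i + 1))

root = build(0, 0)
print(count_models(root))  # 4: inputs 011, 101, 110, 111 fire the neuron
```

Once the decision function is in diagram form, queries such as model counting, which are intractable on the network directly, become linear in the diagram size.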
no code implementations • 21 Feb 2020 • Adnan Darwiche, Auguste Hirth
We present a theory for unveiling the reasons behind the decisions made by Boolean classifiers and study some of its theoretical and practical implications.
no code implementations • 21 Feb 2020 • Adnan Darwiche
We present new results on the classical algorithm of variable elimination, which underlies many algorithms including for probabilistic inference.
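A minimal sketch of classical variable elimination on a two-node network A -> B, with made-up numbers: multiply the factors that mention A, then sum A out to obtain the marginal P(B).

```python
# Sketch only: factors over binary variables as {assignment_tuple: value}.
P_A = {  # P(A)
    (0,): 0.6, (1,): 0.4,
}
P_B_given_A = {  # P(B | A), keyed by (a, b)
    (0, 0): 0.9, (0, 1): 0.1,
    (1, 0): 0.2, (1, 1): 0.8,
}

# Eliminate A: multiply the two factors and sum over its values.
P_B = {}
for b in (0, 1):
    P_B[(b,)] = sum(P_A[(a,)] * P_B_given_A[(a, b)] for a in (0, 1))

# P(B=0) = 0.6*0.9 + 0.4*0.2 = 0.62 and P(B=1) = 0.38, up to rounding.
print(P_B)
```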
no code implementations • 21 Dec 2018 • Arthur Choi, Ruocheng Wang, Adnan Darwiche
A neural network computes a function.
no code implementations • 9 May 2018 • Andy Shih, Arthur Choi, Adnan Darwiche
We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form.
no code implementations • NeurIPS 2017 • Arthur Choi, Yujia Shen, Adnan Darwiche
Recently, the Probabilistic Sentential Decision Diagram (PSDD) has been proposed as a framework for systematically inducing and learning distributions over structured objects, including combinatorial objects such as permutations and rankings, paths and matchings on a graph, etc.
no code implementations • 20 Sep 2017 • Umut Oztok, Adnan Darwiche
On the theoretical side, we show that the new method can generate DNNFs that are exponentially smaller than deterministic ones, even when only a single auxiliary variable is added.
no code implementations • ICML 2017 • Arthur Choi, Adnan Darwiche
The past decade has seen a significant interest in learning tractable probabilistic representations.
no code implementations • 13 Jul 2017 • Adnan Darwiche
The vision systems of the eagle and the snake outperform everything that we can make in the laboratory, but snakes and eagles cannot build an eyeglass or a telescope or a microscope.
no code implementations • NeurIPS 2016 • Eunice Yuh-Jie Chen, Yujia Shen, Arthur Choi, Adnan Darwiche
Our approach is based on a recently proposed framework for optimal structure learning based on non-decomposable scores, which is general enough to accommodate ancestral constraints.
no code implementations • NeurIPS 2016 • Yujia Shen, Arthur Choi, Adnan Darwiche
We consider tractable representations of probability distributions and the polytime operations they support.
no code implementations • NeurIPS 2015 • Jessa Bekker, Jesse Davis, Arthur Choi, Adnan Darwiche, Guy Van Den Broeck
We propose a tractable learner that guarantees efficient inference for a broader class of queries.
no code implementations • 5 Apr 2015 • Arthur Choi, Adnan Darwiche
Relax, Compensate and then Recover (RCR) is a paradigm for approximate inference in probabilistic graphical models that has previously provided theoretical and practical insights on iterative belief propagation and some of its generalizations.
no code implementations • NeurIPS 2014 • Khaled S. Refaat, Arthur Choi, Adnan Darwiche
We propose a technique for decomposing the parameter learning problem in Bayesian networks into independent learning problems.
no code implementations • 7 Aug 2014 • Adnan Darwiche, Gregory M. Provan
We describe a new paradigm for implementing inference in belief networks, which relies on compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG).
no code implementations • 7 Aug 2014 • Hei Chan, Adnan Darwiche
Common wisdom has it that small distinctions in the probabilities quantifying a Bayesian network do not matter much for the results of probabilistic queries.
no code implementations • 15 Apr 2014 • Guy Van den Broeck, Adnan Darwiche
We consider the problem of bottom-up compilation of knowledge bases, which is usually predicated on the existence of a polytime function for combining compilations using Boolean operators (usually called an Apply function).
no code implementations • 19 Dec 2013 • Guy Van den Broeck, Wannes Meert, Adnan Darwiche
First-order model counting emerged recently as a novel reasoning task, at the core of efficient algorithms for probabilistic logics.
no code implementations • NeurIPS 2013 • Khaled S. Refaat, Arthur Choi, Adnan Darwiche
Second, it facilitates the design of EDML algorithms for new graphical models, leading to a new algorithm for learning parameters in Markov networks.
no code implementations • NeurIPS 2013 • Guy Van den Broeck, Adnan Darwiche
Recent theoretical results show, for example, that conditioning on evidence which corresponds to binary relations is #P-hard, suggesting that no lifting is to be expected in the worst case.
no code implementations • 19 Jan 2013 • Adnan Darwiche, Nir Friedman
This is the Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, which was held in Alberta, Canada, August 1-4, 2002.
no code implementations • NeurIPS 2009 • Arthur Choi, Adnan Darwiche
We identify a second approach to compensation that is based on a more refined idealized case, resulting in a new approximation with distinct properties.