Search Results for author: Arthur Choi

Found 15 papers, 0 papers with code

On Symbolically Encoding the Behavior of Random Forests

No code implementations · 3 Jul 2020 · Arthur Choi, Andy Shih, Anchal Goyanka, Adnan Darwiche

Recent work has shown that the input-output behavior of some machine learning systems can be captured symbolically using Boolean expressions or tractable Boolean circuits, which facilitates reasoning about the behavior of these systems.
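As a toy illustration of the idea in this snippet, one can capture a tiny ensemble's input-output behavior as a Boolean expression by enumerating its truth table. The "forest" of three decision stumps and the majority-vote rule below are hypothetical examples, not taken from the paper:

```python
from itertools import product

# Hypothetical toy "forest": three decision stumps over Boolean
# features x0, x1, x2, combined by majority vote.
stumps = [
    lambda x: x[0],            # stump 1 votes x0
    lambda x: x[1],            # stump 2 votes x1
    lambda x: x[0] and x[2],   # stump 3 votes x0 AND x2
]

def forest(x):
    # Majority vote over the stumps' Boolean outputs.
    return sum(s(x) for s in stumps) >= 2

# Capture the forest's input-output behavior symbolically as a
# DNF over its truth table (one term per positive input row).
terms = []
for x in product([False, True], repeat=3):
    if forest(x):
        lits = [f"x{i}" if v else f"~x{i}" for i, v in enumerate(x)]
        terms.append("(" + " & ".join(lits) + ")")
dnf = " | ".join(terms)
print(dnf)
```

Truth-table enumeration is exponential in the number of features; the point of the line of work above is to obtain such symbolic encodings as tractable Boolean circuits instead.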

Task: BIG-bench Machine Learning

A New Perspective on Learning Context-Specific Independence

No code implementations · 12 Jun 2020 · Yujia Shen, Arthur Choi, Adnan Darwiche

We propose to first learn a functional and parameterized representation of a conditional probability table (CPT), such as a neural network.

On Tractable Representations of Binary Neural Networks

No code implementations · 5 Apr 2020 · Weijia Shi, Andy Shih, Adnan Darwiche, Arthur Choi

We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs).

A Symbolic Approach to Explaining Bayesian Network Classifiers

No code implementations · 9 May 2018 · Andy Shih, Arthur Choi, Adnan Darwiche

We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form.

Task: General Classification

Tractability in Structured Probability Spaces

No code implementations · NeurIPS 2017 · Arthur Choi, Yujia Shen, Adnan Darwiche

Recently, the Probabilistic Sentential Decision Diagram (PSDD) has been proposed as a framework for systematically inducing and learning distributions over structured objects, including combinatorial objects such as permutations and rankings, paths and matchings on a graph, etc.

On Relaxing Determinism in Arithmetic Circuits

No code implementations · ICML 2017 · Arthur Choi, Adnan Darwiche

The past decade has seen a significant interest in learning tractable probabilistic representations.

Tractable Operations for Arithmetic Circuits of Probabilistic Models

No code implementations · NeurIPS 2016 · Yujia Shen, Arthur Choi, Adnan Darwiche

We consider tractable representations of probability distributions and the polytime operations they support.

Learning Bayesian networks with ancestral constraints

No code implementations · NeurIPS 2016 · Eunice Yuh-Jie Chen, Yujia Shen, Arthur Choi, Adnan Darwiche

Our approach is based on a recently proposed framework for optimal structure learning based on non-decomposable scores, which is general enough to accommodate ancestral constraints.

Tractable Learning for Complex Probability Queries

No code implementations · NeurIPS 2015 · Jessa Bekker, Jesse Davis, Arthur Choi, Adnan Darwiche, Guy Van den Broeck

We propose a tractable learner that guarantees efficient inference for a broader class of queries.

Dual Decomposition from the Perspective of Relax, Compensate and then Recover

No code implementations · 5 Apr 2015 · Arthur Choi, Adnan Darwiche

Relax, Compensate and then Recover (RCR) is a paradigm for approximate inference in probabilistic graphical models that has previously provided theoretical and practical insights on iterative belief propagation and some of its generalizations.

Decomposing Parameter Estimation Problems

No code implementations · NeurIPS 2014 · Khaled S. Refaat, Arthur Choi, Adnan Darwiche

We propose a technique for decomposing the parameter learning problem in Bayesian networks into independent learning problems.
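For intuition on why decomposition is plausible, the well-known complete-data case already splits maximum-likelihood estimation into independent per-CPT counting problems (the paper's contribution concerns decomposing the harder setting; the network A → B and the data below are hypothetical):

```python
from collections import Counter

# Hypothetical two-node Bayesian network A -> B with complete data:
# each record is a fully observed (a, b) pair.
data = [(0, 0), (0, 1), (1, 1), (1, 1), (0, 0)]

# With complete data, ML estimation decomposes per family:
# P(A) and P(B | A) are estimated from independent count tables.
a_counts = Counter(a for a, _ in data)
p_a = {a: c / len(data) for a, c in a_counts.items()}

ab_counts = Counter(data)
p_b_given_a = {(a, b): c / a_counts[a] for (a, b), c in ab_counts.items()}
```

Each CPT's estimate touches only its own counts, so the subproblems can be solved separately and in parallel.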

Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data

No code implementations · 25 Nov 2014 · Guy Van den Broeck, Karthika Mohan, Arthur Choi, Judea Pearl

In contrast to textbook approaches such as EM and the gradient method, our approach is non-iterative, yields closed form parameter estimates, and eliminates the need for inference in a Bayesian network.

EDML for Learning Parameters in Directed and Undirected Graphical Models

No code implementations · NeurIPS 2013 · Khaled S. Refaat, Arthur Choi, Adnan Darwiche

Second, it facilitates the design of EDML algorithms for new graphical models, leading to a new algorithm for learning parameters in Markov networks.

Approximating MAP by Compensating for Structural Relaxations

No code implementations · NeurIPS 2009 · Arthur Choi, Adnan Darwiche

We identify a second approach to compensation that is based on a more refined idealized case, resulting in a new approximation with distinct properties.
