Search Results for author: Murat Kocaoglu

Found 17 papers, 5 papers with code

Entropic Causal Inference: Identifiability and Finite Sample Results

no code implementations NeurIPS 2020 Spencer Compton, Murat Kocaoglu, Kristjan Greenewald, Dmitriy Katz

This unobserved randomness is measured by the entropy of the exogenous variable in the underlying structural causal model, which governs the causal relation between the observed variables.

Causal Identification · Causal Inference

Active Structure Learning of Causal DAGs via Directed Clique Trees

1 code implementation NeurIPS 2020 Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, Karthikeyan Shanmugam

Most existing works focus on worst-case or average-case lower bounds for the number of interventions required to orient a DAG.

Selection bias

Causal Discovery from Soft Interventions with Unknown Targets: Characterization and Learning

no code implementations NeurIPS 2020 Amin Jaber, Murat Kocaoglu, Karthikeyan Shanmugam, Elias Bareinboim

One fundamental problem in the empirical sciences is of reconstructing the causal structure that underlies a phenomenon of interest through observation and experimentation.

Causal Discovery

Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions

no code implementations NeurIPS 2019 Murat Kocaoglu, Amin Jaber, Karthikeyan Shanmugam, Elias Bareinboim

We introduce a novel notion of interventional equivalence class of causal graphs with latent variables based on these invariances, which associates each graphical structure with a set of interventional distributions that respect the do-calculus rules.

Experimental Design for Cost-Aware Learning of Causal Graphs

no code implementations NeurIPS 2018 Erik M. Lindgren, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath

We consider the minimum cost intervention design problem: Given the essential graph of a causal graph and a cost to intervene on a variable, identify the set of interventions with minimum total cost that can learn any causal graph with the given essential graph.

Experimental Design

Applications of Common Entropy for Causal Inference

no code implementations NeurIPS 2020 Murat Kocaoglu, Sanjay Shakkottai, Alexandros G. Dimakis, Constantine Caramanis, Sriram Vishwanath

We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent.

Causal Inference

Experimental Design for Learning Causal Graphs with Latent Variables

no code implementations NeurIPS 2017 Murat Kocaoglu, Karthikeyan Shanmugam, Elias Bareinboim

Next, we propose an algorithm that uses only O(d^2 log n) interventions that can learn the latents between both non-adjacent and adjacent variables.

Experimental Design

CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training

2 code implementations ICLR 2018 Murat Kocaoglu, Christopher Snyder, Alexandros G. Dimakis, Sriram Vishwanath

We show that adversarial training can be used to learn a generative model with true observational and interventional distributions if the generator architecture is consistent with the given causal graph.
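The core architectural idea, a generator per variable that consumes its parents' samples plus independent noise, wired in topological order of the causal graph, can be sketched in miniature. The three-variable graph and threshold mechanisms below are illustrative assumptions (CausalGAN uses neural generators trained adversarially), but they show why such an architecture supports both observational sampling and do()-style interventions:

```python
import random

# Toy causal graph: Gender -> Smoking -> Cancer. Each "generator" maps
# (parent values, fresh noise) -> a sample, mirroring the CausalGAN idea
# that the generator architecture follows the causal DAG.
parents = {"Gender": [], "Smoking": ["Gender"], "Cancer": ["Smoking"]}

def mechanism(name, pa, noise):
    # Illustrative threshold mechanisms; in CausalGAN these are neural nets.
    if name == "Gender":
        return int(noise < 0.5)
    if name == "Smoking":
        return int(noise < (0.7 if pa["Gender"] else 0.3))
    if name == "Cancer":
        return int(noise < (0.6 if pa["Smoking"] else 0.1))

def sample(intervene=None):
    """Draw one sample; `intervene` maps variable -> fixed value,
    cutting the incoming edges as in a do() intervention."""
    intervene = intervene or {}
    values = {}
    for name in ["Gender", "Smoking", "Cancer"]:  # topological order
        if name in intervene:
            values[name] = intervene[name]        # ignore parents and noise
        else:
            pa = {p: values[p] for p in parents[name]}
            values[name] = mechanism(name, pa, random.random())
    return values

obs = sample()                                # observational sample
interv = sample(intervene={"Smoking": 1})     # interventional sample
```

Because intervening simply overrides one node while the rest of the machinery is untouched, a single trained generator yields both distributions.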

Face Generation

Sparse Quadratic Logistic Regression in Sub-quadratic Time

no code implementations • 8 Mar 2017 Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sujay Sanghavi

We consider support recovery in the quadratic logistic regression setting, where the target depends on both $p$ linear terms $x_i$ and up to $p^2$ quadratic terms $x_i x_j$.

Regression

Cost-Optimal Learning of Causal Graphs

no code implementations ICML 2017 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath

We consider the problem of learning a causal graph over a set of variables with interventions.

Graph Learning

Entropic Causality and Greedy Minimum Entropy Coupling

no code implementations • 28 Jan 2017 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi

This framework requires the solution of a minimum entropy coupling problem: given marginal distributions of m discrete random variables, each on n states, find the minimum-entropy joint distribution that respects the given marginals.
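A minimal sketch of a greedy heuristic for this coupling problem, shown for m = 2 marginals: repeatedly pair the largest remaining mass of each marginal and assign their minimum to that joint cell. This follows the greedy strategy the paper analyzes, but the implementation details here are assumptions, not the authors' code.

```python
import heapq
from math import log2

def greedy_min_entropy_coupling(p, q, tol=1e-12):
    """Greedy coupling of two marginals p and q: at each step, take the
    largest remaining mass in each marginal and place min(...) of the two
    in that joint cell, then push back any leftover mass."""
    hp = [(-pi, i) for i, pi in enumerate(p)]   # max-heaps via negation
    hq = [(-qj, j) for j, qj in enumerate(q)]
    heapq.heapify(hp)
    heapq.heapify(hq)
    joint = {}
    while hp and hq:
        pi, i = heapq.heappop(hp)
        qj, j = heapq.heappop(hq)
        pi, qj = -pi, -qj
        m = min(pi, qj)
        if m > tol:
            joint[(i, j)] = joint.get((i, j), 0.0) + m
        if pi - m > tol:
            heapq.heappush(hp, (-(pi - m), i))  # leftover row mass
        if qj - m > tol:
            heapq.heappush(hq, (-(qj - m), j))  # leftover column mass
    return joint

def entropy(dist):
    return -sum(v * log2(v) for v in dist.values() if v > 0)

p = [0.5, 0.3, 0.2]
q = [0.6, 0.4]
J = greedy_min_entropy_coupling(p, q)
# entropy(J) ≈ 1.69 bits, well below the ≈ 2.46 bits of the
# independent (product) coupling, while matching both marginals.
```

The heap bookkeeping keeps each step O(log n); the resulting coupling is a heuristic, not guaranteed optimal.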

Entropic Causal Inference

1 code implementation • 12 Nov 2016 Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi

We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding the minimum joint entropy given $n$ marginal distributions, also known as the minimum entropy coupling problem.

Causal Inference

Contextual Bandits with Latent Confounders: An NMF Approach

no code implementations • 1 Jun 2016 Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sanjay Shakkottai

Our algorithm achieves a regret of $\mathcal{O}\left(L\mathrm{poly}(m, \log K) \log T \right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context.
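For a sense of scale, the two bounds can be compared numerically. The snippet above does not spell out poly(m, log K), so the sketch below assumes m² log² K purely for illustration:

```python
from math import log

# Illustrative comparison of the two regret bounds quoted above.
# The exact polynomial poly(m, log K) is not given here, so we assume
# m**2 * (log K)**2 only for scale; L is the number of contexts.
L, K, m, T = 1, 10_000, 5, 1_000_000

conventional = L * K * log(T)                   # O(L K log T)
latent_aware = L * m**2 * log(K)**2 * log(T)    # assumed poly(m, log K) * log T

speedup = conventional / latent_aware
```

With many arms (K large) and few latent factors (m small), the latent-aware bound grows only polylogarithmically in K, which is where the savings come from.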

Matrix Completion Multi-Armed Bandits

Learning Causal Graphs with Small Interventions

1 code implementation NeurIPS 2015 Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath

We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case.
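A separating system here is a family of intervention sets such that every pair of variables is split (one inside, one outside) by at least one set. The classic construction labels the n vertices in binary and uses one set per bit position, giving ceil(log2 n) sets. This is a hedged illustration of the concept, not necessarily the paper's exact construction:

```python
from math import ceil, log2
from itertools import combinations

def binary_separating_system(n):
    """Return ceil(log2 n) subsets of {0, ..., n-1} such that every pair
    of elements is separated by at least one subset: set b contains the
    elements whose b-th binary digit is 1."""
    bits = max(1, ceil(log2(n)))
    return [{v for v in range(n) if (v >> b) & 1} for b in range(bits)]

def separates(sets, n):
    # Every pair (u, v) must land on opposite sides of some set.
    return all(any((u in s) != (v in s) for s in sets)
               for u, v in combinations(range(n), 2))

S = binary_separating_system(10)   # 4 sets suffice for 10 vertices
```

Two distinct labels always differ in some bit, which is exactly the separation property; this is why logarithmically many intervention sets can distinguish every pair of vertices.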

Sparse Polynomial Learning and Graph Sketching

no code implementations NeurIPS 2014 Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis, Adam Klivans

We give an algorithm that exactly reconstructs f given random examples from the uniform distribution on $\{-1, 1\}^n$, running in time polynomial in $n$ and $2^s$. It succeeds if the function satisfies the unique sign property: there is one output value that corresponds to a unique set of values of the participating parities.
