no code implementations • NeurIPS 2014 • Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis, Adam Klivans
We give an algorithm that exactly reconstructs $f$ from random examples drawn from the uniform distribution on $\{-1, 1\}^n$, runs in time polynomial in $n$ and $2^s$, and succeeds if the function satisfies the unique sign property: there is one output value which corresponds to a unique set of values of the participating parities.
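The unique sign property is easy to check exhaustively for a small function of parities. A minimal sketch (the test functions `g` are illustrative, not from the paper): enumerate all $2^s$ sign patterns of the $s$ participating parities and ask whether some output value arises from exactly one pattern.

```python
from collections import Counter
from itertools import product

def has_unique_sign_property(g, s):
    """Check whether some output value of g arises from exactly one
    assignment of the s participating parities (each in {-1, +1})."""
    counts = Counter(g(p) for p in product([-1, 1], repeat=s))
    return any(c == 1 for c in counts.values())

# A linear combination of parities: the all-positive pattern uniquely
# attains the maximum output, so the property holds.
print(has_unique_sign_property(lambda p: p[0] + 2 * p[1], 2))  # True

# A product of parities: each output (+1 and -1) is reached by two
# patterns, so the property fails.
print(has_unique_sign_property(lambda p: p[0] * p[1], 2))      # False
```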
2 code implementations • NeurIPS 2015 • Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath
We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in the worst case.
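A separating system on $n$ vertices is a family of subsets such that every pair of vertices is "split" by some subset containing exactly one of the two. A standard construction, sketched below for illustration, labels each vertex with its binary expansion and takes one set per bit, so $\lceil \log_2 n \rceil$ sets suffice.

```python
from itertools import combinations
from math import ceil, log2

def separating_system(n):
    """Build a separating system on {0, ..., n-1}: set j collects the
    vertices whose j-th binary digit is 1.  Any two distinct vertices
    differ in some bit, so some set contains exactly one of them."""
    k = max(1, ceil(log2(n)))
    return [{v for v in range(n) if (v >> j) & 1} for j in range(k)]

def is_separating(sets, n):
    """Verify that every vertex pair is split by at least one set."""
    return all(any((u in s) != (v in s) for s in sets)
               for u, v in combinations(range(n), 2))

sets = separating_system(8)
print(len(sets), is_separating(sets, 8))  # 3 True
```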
no code implementations • 1 Jun 2016 • Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sanjay Shakkottai
Our algorithm achieves a regret of $\mathcal{O}\left(L\mathrm{poly}(m, \log K) \log T \right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context.
1 code implementation • 12 Nov 2016 • Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi
We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding the minimum joint entropy given $n$ marginal distributions, also known as the minimum entropy coupling problem.
no code implementations • 28 Jan 2017 • Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi
This framework requires solving a minimum entropy coupling problem: given the marginal distributions of $m$ discrete random variables, each on $n$ states, find the minimum-entropy joint distribution that respects the given marginals.
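A minimal sketch of a greedy coupling heuristic in the spirit of this line of work (not necessarily the authors' exact algorithm): repeatedly match up the largest remaining mass in every marginal, assign their minimum as joint mass, and subtract it from each residual.

```python
import numpy as np

def greedy_coupling(marginals, tol=1e-12):
    """Greedily build a joint distribution with the given marginals.
    Each step pairs the largest remaining mass of every marginal, so
    the joint concentrates mass and its entropy tends to stay low."""
    residuals = [np.array(p, dtype=float) for p in marginals]
    joint = {}
    while residuals[0].sum() > tol:
        idx = tuple(int(r.argmax()) for r in residuals)
        mass = min(r.max() for r in residuals)
        joint[idx] = joint.get(idx, 0.0) + mass
        for r, i in zip(residuals, idx):
            r[i] -= mass
    return joint

def entropy(joint):
    """Shannon entropy of the coupling, in bits."""
    p = np.array([v for v in joint.values() if v > 0])
    return float(-(p * np.log2(p)).sum())

joint = greedy_coupling([[0.5, 0.5], [0.5, 0.25, 0.25]])
print(joint)           # {(0, 0): 0.5, (1, 1): 0.25, (1, 2): 0.25}
print(entropy(joint))  # 1.5
```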
no code implementations • 8 Mar 2017 • Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sujay Sanghavi
We consider support recovery in the quadratic logistic regression setting, where the target depends on both $p$ linear terms $x_i$ and up to $p^2$ quadratic terms $x_i x_j$.
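One natural way to view this setting (an illustrative reduction, not the paper's algorithm) is as sparse recovery over an expanded design matrix that appends all pairwise products to the linear features:

```python
from itertools import combinations
import numpy as np

def quadratic_expansion(X):
    """Append all pairwise products x_i * x_j (i < j) to the design
    matrix, so support recovery over linear and quadratic terms becomes
    sparse recovery over p + p*(p-1)/2 features."""
    pairs = [X[:, i] * X[:, j]
             for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

X = np.array([[1., 2., 3.],
              [4., 5., 6.]])
print(quadratic_expansion(X).shape)  # (2, 6): 3 linear + 3 quadratic
```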
1 code implementation • ICML 2017 • Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath
We consider the problem of learning a causal graph over a set of variables with interventions.
2 code implementations • ICLR 2018 • Murat Kocaoglu, Christopher Snyder, Alexandros G. Dimakis, Sriram Vishwanath
We show that adversarial training can be used to learn a generative model with true observational and interventional distributions if the generator architecture is consistent with the given causal graph.
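The structural idea can be illustrated without any adversarial training: if the generator samples variables in the topological order of the causal graph, an intervention amounts to clamping one node while reusing the downstream mechanisms. Below is a toy linear SCM sketch (the graph $A \to B$ and its mechanisms are made up for illustration, not the paper's GAN architecture).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, do_a=None):
    """Sample from a toy two-variable SCM A -> B in topological order.
    An intervention do(A=a) clamps A; B's mechanism is unchanged, which
    is exactly what a causally structured generator enables."""
    a = np.full(n, float(do_a)) if do_a is not None else rng.normal(size=n)
    b = 2.0 * a + rng.normal(size=n)  # B's mechanism: B = 2A + noise
    return a, b

_, b_obs = sample(100_000)            # observational samples: E[B] ~ 0
_, b_int = sample(100_000, do_a=1.0)  # interventional samples: E[B] ~ 2
print(round(b_int.mean(), 2))         # close to 2.0
```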
no code implementations • NeurIPS 2017 • Murat Kocaoglu, Karthikeyan Shanmugam, Elias Bareinboim
Next, we propose an algorithm that uses only $O(d^2 \log n)$ interventions that can learn the latents between both non-adjacent and adjacent variables.
no code implementations • NeurIPS 2020 • Murat Kocaoglu, Sanjay Shakkottai, Alexandros G. Dimakis, Constantine Caramanis, Sriram Vishwanath
We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent.
no code implementations • NeurIPS 2018 • Erik M. Lindgren, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath
We consider the minimum cost intervention design problem: Given the essential graph of a causal graph and a cost to intervene on a variable, identify the set of interventions with minimum total cost that can learn any causal graph with the given essential graph.
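A heavily simplified sketch of the search problem (illustrative only: it treats an undirected edge as learnable once some intervention contains exactly one endpoint, and ignores Meek-rule propagation in the essential graph, so it over-counts in general):

```python
from itertools import combinations

def min_cost_interventions(vertices, edges, cost):
    """Brute-force the cheapest family of interventions (vertex subsets)
    that splits every undirected edge, i.e. some subset in the family
    contains exactly one endpoint of each edge.  Exponential search:
    toy graphs only."""
    subsets = [set(s) for r in range(1, len(vertices) + 1)
               for s in combinations(vertices, r)]
    best = None
    for k in range(1, len(subsets) + 1):
        for family in combinations(subsets, k):
            if all(any((u in s) != (v in s) for s in family)
                   for u, v in edges):
                c = sum(cost[v] for s in family for v in s)
                if best is None or c < best[0]:
                    best = (c, family)
    return best

best_cost, family = min_cost_interventions(
    "abc", [("a", "b"), ("b", "c"), ("a", "c")],
    {"a": 1, "b": 1, "c": 10})
print(best_cost)  # 2: intervene on {a} and on {b}, avoiding costly c
```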
no code implementations • NeurIPS 2019 • Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix Adsera, Guy Bresler
We consider the problem of experimental design for learning causal graphs that have a tree structure.
no code implementations • NeurIPS 2019 • Murat Kocaoglu, Amin Jaber, Karthikeyan Shanmugam, Elias Bareinboim
We introduce a novel notion of interventional equivalence class of causal graphs with latent variables based on these invariances, which associates each graphical structure with a set of interventional distributions that respect the do-calculus rules.
4 code implementations • 1 Nov 2020 • Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, Karthikeyan Shanmugam
Most existing works focus on worst-case or average-case lower bounds for the number of interventions required to orient a DAG.
no code implementations • NeurIPS 2020 • Amin Jaber, Murat Kocaoglu, Karthikeyan Shanmugam, Elias Bareinboim
One fundamental problem in the empirical sciences is that of reconstructing the causal structure that underlies a phenomenon of interest through observation and experimentation.
1 code implementation • NeurIPS 2020 • Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, Karthikeyan Shanmugam
Most existing works focus on worst-case or average-case lower bounds for the number of interventions required to orient a DAG.
no code implementations • NeurIPS 2020 • Spencer Compton, Murat Kocaoglu, Kristjan Greenewald, Dmitriy Katz
This unobserved randomness is measured by the entropy of the exogenous variable in the underlying structural causal model, which governs the causal relation between the observed variables.
1 code implementation • 20 Jun 2023 • Zeyu Zhou, Ruqi Bai, Sean Kulinski, Murat Kocaoglu, David I. Inouye
Answering counterfactual queries has important applications such as explainability, robustness, and fairness, but is challenging when the causal variables are unobserved and the observations are non-linear mixtures of these latent variables, such as pixels in images.
no code implementations • 22 Jun 2023 • Ziwei Jiang, Lai Wei, Murat Kocaoglu
We show that our bounds are consistent in the sense that as the entropy of unobserved confounders goes to zero, the gap between the upper and lower bound vanishes.
no code implementations • 2 Jan 2024 • Md Musfiqur Rahman, Murat Kocaoglu
To address this, we propose a sequential training algorithm that, given the causal structure and a pre-trained conditional generative model, trains a deep causal generative model that leverages the pre-trained model and can provably sample from identifiable interventional and counterfactual distributions.
no code implementations • 12 Feb 2024 • Md Musfiqur Rahman, Matt Jordan, Murat Kocaoglu
As an application of our algorithm, we evaluate two large conditional generative models that are pre-trained on the CelebA dataset by analyzing the strength of spurious correlations and the level of disentanglement they achieve.