Search Results for author: Arnab Bhattacharyya

Found 22 papers, 6 papers with code

Outlier Robust Multivariate Polynomial Regression

no code implementations14 Mar 2024 Vipul Arora, Arnab Bhattacharyya, Mathews Boban, Venkatesan Guruswami, Esty Kelman

Furthermore, we show that the run-time can be made independent of $1/\sigma$, at the cost of a higher sample complexity.

regression

Optimal estimation of Gaussian (poly)trees

1 code implementation9 Feb 2024 Yuhao Wang, Ming Gao, Wai Ming Tai, Bryon Aragam, Arnab Bhattacharyya

We develop optimal algorithms for learning undirected Gaussian trees and directed Gaussian polytrees from data.

Learning bounded-degree polytrees with known skeleton

no code implementations10 Oct 2023 Davin Choo, Joy Qiping Yang, Arnab Bhattacharyya, Clément L. Canonne

We establish finite-sample guarantees for efficient proper learning of bounded-degree polytrees, a rich class of high-dimensional probability distributions and a subclass of Bayesian networks, a widely-studied type of graphical model.

Total Variation Distance Estimation Is as Easy as Probabilistic Inference

no code implementations17 Sep 2023 Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, Dimitrios Myrisiotis, A. Pavan, N. V. Vinodchandran

In particular, we present an efficient, structure-preserving reduction from relative approximation of TV distance to probabilistic inference over directed graphical models.
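
For intuition only: the quantity being approximated is the TV distance between two Bayes nets over $\{0, 1\}^n$. A brute-force computation, feasible only for tiny $n$ and precisely what the paper's efficient reduction avoids, might look like the following sketch; the data structures and CPT values here are made up for illustration.

```python
# Brute-force TV distance between two Bayes nets over {0,1}^n (tiny n only).
from itertools import product

def bn_prob(x, parents, cpt):
    """Probability of assignment x under a Bayes net.
    parents[i] lists the parent indices of variable i;
    cpt[i] maps the tuple of parent values to P(x_i = 1 | parents)."""
    p = 1.0
    for i, pa in enumerate(parents):
        p1 = cpt[i][tuple(x[j] for j in pa)]
        p *= p1 if x[i] == 1 else 1.0 - p1
    return p

def tv_distance(parents_p, cpt_p, parents_q, cpt_q, n):
    return 0.5 * sum(
        abs(bn_prob(x, parents_p, cpt_p) - bn_prob(x, parents_q, cpt_q))
        for x in product([0, 1], repeat=n)
    )

# Two chains X0 -> X1 -> X2 with slightly different CPTs.
parents = [[], [0], [1]]
cpt_p = [{(): 0.5}, {(0,): 0.2, (1,): 0.8}, {(0,): 0.3, (1,): 0.7}]
cpt_q = [{(): 0.5}, {(0,): 0.25, (1,): 0.75}, {(0,): 0.3, (1,): 0.7}]
print(tv_distance(parents, cpt_p, parents, cpt_q, n=3))
```

The enumeration above is exponential in $n$; the point of the paper's structure-preserving reduction is that a relative approximation of this quantity can instead be obtained via probabilistic inference on the underlying DAGs.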

Active causal structure learning with advice

1 code implementation31 May 2023 Davin Choo, Themis Gouleakis, Arnab Bhattacharyya

When the advice is a DAG $G$, we design an adaptive search algorithm to recover $G^*$ whose intervention cost is at most $O(\max\{1, \log \psi\})$ times the cost for verifying $G^*$; here, $\psi$ is a distance measure between $G$ and $G^*$ that is upper bounded by the number of variables $n$, and is exactly 0 when $G=G^*$.

Near-Optimal Degree Testing for Bayes Nets

no code implementations13 Apr 2023 Vipul Arora, Arnab Bhattacharyya, Clément L. Canonne, Joy Qiping Yang

This paper considers the problem of testing the maximum in-degree of the Bayes net underlying an unknown probability distribution $P$ over $\{0, 1\}^n$, given sample access to $P$.

On the Interventional Kullback-Leibler Divergence

no code implementations10 Feb 2023 Jonas Wildberger, Siyuan Guo, Arnab Bhattacharyya, Bernhard Schölkopf

Modern machine learning approaches excel in static settings where a large amount of i.i.d. data is available.

An Adaptive Kernel Approach to Federated Learning of Heterogeneous Causal Effects

1 code implementation1 Jan 2023 Thanh Vinh Vo, Arnab Bhattacharyya, Young Lee, Tze-Yun Leong

We propose a new causal inference framework to learn causal effects from multiple, decentralized data sources in a federated setting.

Causal Inference, Federated Learning

Verification and search algorithms for causal DAGs

4 code implementations30 Jun 2022 Davin Choo, Kirankumar Shiragur, Arnab Bhattacharyya

Our result is the first known algorithm that gives a non-trivial approximation guarantee on the verifying size for general unweighted graphs with bounded-size interventions.

Independence Testing for Bounded Degree Bayesian Network

no code implementations19 Apr 2022 Arnab Bhattacharyya, Clément L. Canonne, Joy Qiping Yang

We study the following independence testing problem: given access to samples from a distribution $P$ over $\{0, 1\}^n$, decide whether $P$ is a product distribution or whether it is $\varepsilon$-far in total variation distance from any product distribution.
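
As a toy illustration of the distance being tested (not the paper's tester, which only has sample access to $P$), one can compare a small distribution over $\{0, 1\}^n$ to the product of its own marginals. Note that the nearest product distribution need not be this product, so the value computed below is only an upper bound on the distance that the tester must detect.

```python
# Toy sketch: distance from P over {0,1}^n to the product of its marginals.
# Assumes full access to P (no sampling) and tiny n.
from itertools import product
import numpy as np

def product_of_marginals(P, n):
    marg1 = [sum(p for x, p in P.items() if x[i] == 1) for i in range(n)]
    return {x: np.prod([marg1[i] if x[i] == 1 else 1 - marg1[i] for i in range(n)])
            for x in product([0, 1], repeat=n)}

def tv(P, Q):
    return 0.5 * sum(abs(P[x] - Q[x]) for x in P)

n = 2
# A correlated distribution on {0,1}^2: far from any product distribution.
P = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
print(tv(P, product_of_marginals(P, n)))   # 0.4
```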

Efficient inference of interventional distributions

no code implementations25 Jul 2021 Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, Vedant Raval, N. V. Vinodchandran

For sets $\mathbf{X},\mathbf{Y}\subseteq \mathbf{V}$ and an assignment $\mathbf{x}$ to the variables in $\mathbf{X}$, let $P_{\mathbf{x}}(\mathbf{Y})$ denote the interventional distribution on $\mathbf{Y}$ with respect to the intervention that sets $\mathbf{X}$ to $\mathbf{x}$.
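
For context, here is a minimal sketch of what $P_{\mathbf{x}}(\mathbf{Y})$ means operationally, assuming (unlike the paper, which works from samples of the observational distribution) that the full Bayes net is given: the intervention severs the intervened variables from their parents and fixes their values before sampling.

```python
# Sampling from an interventional distribution of a fully specified Bayes net
# over {0,1}-valued variables by applying do(X = x). Illustrative only; the
# paper's algorithm does not assume access to the CPTs.
import random

def sample_do(parents, cpt, do, order):
    """parents[i]: parent indices of i; cpt[i]: P(X_i = 1 | parent values);
    do: dict {i: fixed value}; order: a topological ordering of the DAG."""
    x = {}
    for i in order:
        if i in do:
            x[i] = do[i]                      # intervened: ignore the CPT
        else:
            p1 = cpt[i][tuple(x[j] for j in parents[i])]
            x[i] = 1 if random.random() < p1 else 0
    return x

parents = [[], [0], [0, 1]]
cpt = [{(): 0.5},
       {(0,): 0.2, (1,): 0.9},
       {(0, 0): 0.1, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.95}]
samples = [sample_do(parents, cpt, do={1: 1}, order=[0, 1, 2]) for _ in range(10000)]
print(sum(s[2] for s in samples) / len(samples))   # empirical P_x(X_2 = 1) under do(X_1 = 1), about 0.775
```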

Learning Sparse Fixed-Structure Gaussian Bayesian Networks

1 code implementation22 Jul 2021 Arnab Bhattacharyya, Davin Choo, Rishikesh Gajjala, Sutanu Gayen, Yuhao Wang

We also study a couple of new algorithms for the problem. BatchAvgLeastSquares takes the average of several batches of least-squares solutions at each node, so that one can interpolate between the batch size and the number of batches.
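
A minimal sketch of the batch-averaging idea described in the snippet above, under the assumption that each node is regressed on its parents; the function and parameter names are illustrative, not the paper's implementation.

```python
# Split the samples for one node into batches, solve least squares on each
# batch (node regressed on its parents), and average the coefficient vectors.
import numpy as np

def batch_avg_least_squares(X_parents, y_node, num_batches):
    batches_X = np.array_split(X_parents, num_batches)
    batches_y = np.array_split(y_node, num_batches)
    coeffs = [np.linalg.lstsq(Xb, yb, rcond=None)[0]
              for Xb, yb in zip(batches_X, batches_y)]
    return np.mean(coeffs, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                          # two parent variables
y = X @ np.array([1.5, -0.7]) + 0.1 * rng.normal(size=1000)
print(batch_avg_least_squares(X, y, num_batches=10))    # approximately [1.5, -0.7]
```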

Identifiability of AMP chain graph models

1 code implementation17 Jun 2021 Yuhao Wang, Arnab Bhattacharyya

AMP models are described by DAGs on chain components, which are themselves undirected graphs.

Testing Product Distributions: A Closer Look

no code implementations29 Dec 2020 Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, N. V. Vinodchandran

We study the problems of identity and closeness testing of $n$-dimensional product distributions.

Eco-Routing Using Open Street Maps

no code implementations27 Nov 2020 R K Ghosh, Vinay R, Arnab Bhattacharyya

A vehicle's fuel consumption depends on its type, its speed, and the condition and gradients of the road on which it is moving.
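
Purely as an illustration of how such dependencies can be folded into routing (the constants and functional form below are invented, not the paper's fuel model), one can attach a gradient- and speed-aware fuel cost to each edge and run an ordinary shortest-path search:

```python
# Toy fuel-cost edge weight that grows with road gradient and with deviation
# from a fuel-optimal speed, plugged into a standard shortest-path search.
import networkx as nx

def fuel_cost(length_m, gradient, speed_kmh, optimal_speed_kmh=60.0):
    base = 0.06 * length_m / 1000.0                 # ~0.06 litres per km, scaled by edge length in km
    grade_penalty = max(0.0, gradient) * 0.5        # uphill costs extra fuel
    speed_penalty = 0.0005 * (speed_kmh - optimal_speed_kmh) ** 2
    return base * (1.0 + grade_penalty + speed_penalty)

G = nx.DiGraph()
G.add_edge("A", "B", weight=fuel_cost(2000, gradient=0.00, speed_kmh=60))
G.add_edge("B", "C", weight=fuel_cost(2000, gradient=0.00, speed_kmh=60))
G.add_edge("A", "C", weight=fuel_cost(3000, gradient=0.08, speed_kmh=90))  # shorter but steep and fast
print(nx.shortest_path(G, "A", "C", weight="weight"))   # prefers the flatter A -> B -> C route
```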

Near-Optimal Learning of Tree-Structured Distributions by Chow-Liu

no code implementations9 Nov 2020 Arnab Bhattacharyya, Sutanu Gayen, Eric Price, N. V. Vinodchandran

For a distribution $P$ on $\Sigma^n$ and a tree $T$ on $n$ nodes, we say $T$ is an $\varepsilon$-approximate tree for $P$ if there is a $T$-structured distribution $Q$ such that $D(P\;||\;Q)$ is at most $\varepsilon$ more than the best possible tree-structured distribution for $P$.
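
For background, the classical Chow-Liu procedure analysed in this line of work builds a maximum-weight spanning tree over estimated pairwise mutual informations. A minimal plug-in sketch for binary data (not the finite-sample analysis of the paper) follows:

```python
# Chow-Liu sketch: estimate pairwise mutual information from samples and take
# a maximum-weight spanning tree over it. Plug-in estimates, binary data only.
import numpy as np
import networkx as nx

def mutual_information(x, y):
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pab = np.mean((x == a) & (y == b))
            pa, pb = np.mean(x == a), np.mean(y == b)
            if pab > 0:
                mi += pab * np.log(pab / (pa * pb))
    return mi

def chow_liu_tree(data):
    n_vars = data.shape[1]
    G = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            G.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(G, weight="weight")

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, size=5000)
x1 = (x0 ^ (rng.random(5000) < 0.1)).astype(int)   # x1 depends on x0
x2 = (x1 ^ (rng.random(5000) < 0.1)).astype(int)   # x2 depends on x1
print(sorted(chow_liu_tree(np.column_stack([x0, x1, x2])).edges()))  # [(0, 1), (1, 2)]
```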

Efficient Statistics for Sparse Graphical Models from Truncated Samples

no code implementations17 Jun 2020 Arnab Bhattacharyya, Rathin Desai, Sai Ganesh Nagarajan, Ioannis Panageas

We show that ${\mu}$ and ${\Sigma}$ can be estimated with error $\epsilon$ in the Frobenius norm, using $\tilde{O}\left(\frac{\textrm{nz}({\Sigma}^{-1})}{\epsilon^2}\right)$ samples from a truncated $\mathcal{N}({\mu},{\Sigma})$ and having access to a membership oracle for $S$.

Learning and Sampling of Atomic Interventions from Observations

no code implementations ICML 2020 Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, Ashwin Maran, N. V. Vinodchandran

Assuming that $G$ has bounded in-degree, c-components of bounded size ($k$), and that the observational distribution is identifiable and satisfies a certain strong positivity condition, we give an algorithm that takes $m=\tilde{O}(n\epsilon^{-2})$ samples from $P$ and $O(mn)$ time, and outputs with high probability a description of a distribution $\hat{P}$ such that $d_{\mathrm{TV}}(P_x, \hat{P}) \leq \epsilon$.

Fishing out Winners from Vote Streams

no code implementations19 Aug 2015 Arnab Bhattacharyya, Palash Dey

We investigate the problem of winner determination from computational social choice theory in the data stream model.
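
As the simplest streaming special case (the paper treats much richer voting rules and approximate, sampled settings), the classical Boyer-Moore majority-vote algorithm finds a majority winner from a vote stream while storing only a single candidate and a counter:

```python
# Boyer-Moore majority vote: one streaming pass to find a candidate, plus a
# verification pass to confirm it actually has a strict majority.
def boyer_moore_majority(votes):
    candidate, count = None, 0
    for v in votes:
        if count == 0:
            candidate, count = v, 1
        elif v == candidate:
            count += 1
        else:
            count -= 1
    return candidate if sum(v == candidate for v in votes) > len(votes) / 2 else None

stream = ["alice", "bob", "alice", "carol", "alice", "alice", "bob"]
print(boyer_moore_majority(stream))   # "alice"
```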

On learning k-parities with and without noise

no code implementations18 Feb 2015 Arnab Bhattacharyya, Ameet Gadekar, Ninad Rajgopal

We improve the previous best result of Buhrman et al. by an $\exp(k)$ factor in the time complexity.

Learning Theory
