no code implementations • 12 Nov 2022 • Erica Cai, Andrew McGregor, David Jensen
We propose such a pre-processing step for the PC algorithm, which relies on performing conditional independence (CI) tests on a few randomly selected large conditioning sets.
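For intuition, here is a minimal sketch of that pruning idea, assuming linear-Gaussian data and a Fisher-z partial-correlation CI test; the function names, set sizes, and thresholds are illustrative choices, not the paper's implementation.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def partial_corr_ci_test(data, i, j, cond):
    """Fisher-z test of X_i independent of X_j given X_cond (linear-Gaussian data).

    Returns the p-value; a large p-value means independence cannot be rejected.
    """
    n, _ = data.shape
    if cond:
        Z = np.column_stack([np.ones(n), data[:, list(cond)]])
        # Residualize X_i and X_j on the conditioning set via least squares.
        res_i = data[:, i] - Z @ np.linalg.lstsq(Z, data[:, i], rcond=None)[0]
        res_j = data[:, j] - Z @ np.linalg.lstsq(Z, data[:, j], rcond=None)[0]
    else:
        res_i, res_j = data[:, i], data[:, j]
    r = np.clip(np.corrcoef(res_i, res_j)[0, 1], -0.9999, 0.9999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z)))

def prune_edges(data, cond_size=8, num_sets=3, alpha=0.05, rng=None):
    """Drop edge (i, j) if X_i is judged independent of X_j given any of a few
    randomly selected large conditioning sets (illustrative pre-processing only)."""
    rng = rng or np.random.default_rng(0)
    d = data.shape[1]
    edges = set(combinations(range(d), 2))
    for i, j in list(edges):
        others = [v for v in range(d) if v not in (i, j)]
        for _ in range(num_sets):
            cond = rng.choice(others, size=min(cond_size, len(others)), replace=False)
            if partial_corr_ci_test(data, i, j, tuple(cond)) > alpha:
                edges.discard((i, j))
                break
    return edges
```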
no code implementations • 19 May 2022 • Maryam Aliakbarpour, Andrew McGregor, Jelani Nelson, Erik Waingarten
Recent work of Acharya et al. (NeurIPS 2019) showed how to estimate the entropy of a distribution $\mathcal{D}$ over an alphabet of size $k$ up to $\pm\epsilon$ additive error by streaming over $(k/\epsilon^3) \cdot \text{polylog}(1/\epsilon)$ i.i.d. samples.
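As a point of reference, a naive plug-in estimator from i.i.d. samples is sketched below; it stores a count for every observed symbol, which is exactly the memory cost the streaming algorithms above are designed to avoid.

```python
import math
from collections import Counter

def entropy_estimate(stream):
    """Plug-in entropy estimate (in nats) with the Miller-Madow bias correction.

    Baseline for intuition only: it keeps one counter per observed symbol,
    unlike the low-memory streaming estimators discussed above.
    """
    counts = Counter(stream)
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h + (len(counts) - 1) / (2 * n)  # Miller-Madow correction

# Example: i.i.d. samples from a distribution over a small alphabet
# print(entropy_estimate(["a", "b", "a", "c", "a", "b"]))
```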
no code implementations • 27 Dec 2020 • Raghavendra Addanki, Andrew McGregor, Cameron Musco
Our goal is to recover the directions of all causal or ancestral relations in $G$, via a minimum cost set of interventions.
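To illustrate the interventional model (intervening on a set $S$ reveals the direction of every edge with exactly one endpoint in $S$), the sketch below builds a classic separating-system design using $\lceil \log_2 n \rceil$ interventions; it is not the cost-minimizing procedure studied in the paper.

```python
import math

def separating_interventions(n):
    """Intervention sets S_1..S_t (t = ceil(log2 n)) such that every pair of
    vertices is separated by some S_i: one endpoint inside, the other outside.

    Under the usual interventional model, an edge whose endpoints are separated
    by an intervention has its direction revealed, so these t interventions
    orient every edge.  This is a textbook separating-system construction,
    not the minimum-cost intervention design from the paper.
    """
    t = max(1, math.ceil(math.log2(n)))
    return [{v for v in range(n) if (v >> bit) & 1} for bit in range(t)]

# Example: 6 vertices need ceil(log2 6) = 3 interventions.
# print(separating_interventions(6))
```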
no code implementations • ICML 2020 • Raghavendra Addanki, Shiva Prasad Kasiviswanathan, Andrew McGregor, Cameron Musco
We consider recovering a causal graph in presence of latent variables, where we seek to minimize the cost of interventions used in the recovery process.
1 code implementation • 26 Feb 2020 • Craig S. Greenberg, Sebastian Macaluso, Nicholas Monath, Ji-Ah Lee, Patrick Flaherty, Kyle Cranmer, Andrew McGregor, Andrew McCallum
In contrast to existing methods, we present dynamic-programming algorithms for \emph{exact} inference in hierarchical clustering, based on a novel trellis data structure, and we prove that we can exactly compute the partition function, the maximum-likelihood hierarchy, and the marginal probabilities of sub-hierarchies and clusters.
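The following is a minimal sketch of exact summation over all binary hierarchies of a small element set, assuming (purely for illustration) that a hierarchy's unnormalized weight is the product of a potential φ over its internal clusters; the paper's trellis organizes this subset-level computation much more carefully.

```python
from functools import lru_cache

def partition_function(elements, phi):
    """Sum of weights over all binary hierarchical clusterings of `elements`,
    where a hierarchy's weight is the product of phi(cluster) over its
    internal (size >= 2) clusters.  A memoized recursion over subsets, in the
    spirit of exact inference over a cluster trellis; exponential in the
    number of elements, so only for small instances.
    """
    elements = tuple(sorted(elements))

    @lru_cache(maxsize=None)
    def Z(subset):
        if len(subset) == 1:
            return 1.0
        first, rest = subset[0], subset[1:]
        total = 0.0
        # Enumerate splits {A, subset \ A}; fixing `first` in A avoids double counting.
        for mask in range(2 ** len(rest)):
            A = (first,) + tuple(x for k, x in enumerate(rest) if (mask >> k) & 1)
            B = tuple(x for k, x in enumerate(rest) if not (mask >> k) & 1)
            if not B:
                continue
            total += Z(A) * Z(B)
        return phi(subset) * total

    return Z(elements)

# With a uniform potential this counts binary hierarchies: 15 trees on 4 leaves.
# print(partition_function([1, 2, 3, 4], phi=lambda c: 1.0))
```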
no code implementations • 19 Jan 2020 • Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal
Our second approach uses algebraic and combinatorial tools and applies to binomial mixtures with shared trial parameter $N$ and differing success parameters, as well as to mixtures of geometric distributions.
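As a toy illustration of the moment-based flavor of such results (and not the paper's algorithm), the sketch below recovers an equal-weight two-component binomial mixture with known shared trial parameter $N$ by matching two factorial moments.

```python
import numpy as np

def two_binomial_mixture_moments(samples, N):
    """Recover (p1, p2) for the mixture (1/2) Bin(N, p1) + (1/2) Bin(N, p2),
    with known shared N, by matching the first two factorial moments.
    A toy illustration only; the paper's techniques are far more general.
    """
    x = np.asarray(samples, dtype=float)
    m1 = x.mean()                      # E[X]         = N (p1 + p2) / 2
    m2 = (x * (x - 1)).mean()          # E[X (X - 1)] = N (N - 1)(p1^2 + p2^2) / 2
    s = 2 * m1 / N                     # p1 + p2
    q = 2 * m2 / (N * (N - 1))         # p1^2 + p2^2
    prod = (s * s - q) / 2             # p1 * p2
    roots = np.roots([1.0, -s, prod])  # t^2 - s t + p1 p2 = 0
    return tuple(sorted(np.clip(roots.real, 0.0, 1.0)))

# Example with hypothetical parameters p1 = 0.3, p2 = 0.7, N = 20:
# rng = np.random.default_rng(0)
# data = np.where(rng.random(200_000) < 0.5,
#                 rng.binomial(20, 0.3, 200_000), rng.binomial(20, 0.7, 200_000))
# print(two_binomial_mixture_moments(data, N=20))  # approximately (0.3, 0.7)
```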
no code implementations • NeurIPS 2019 • Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal
Our techniques are quite different from those in the previous work: for the noiseless case, we rely on a property of sparse polynomials, and for the noisy case, we provide new connections to learning Gaussian mixtures and use ideas from the theory of
no code implementations • 30 Oct 2019 • Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal
In the problem of learning mixtures of linear regressions, the goal is to learn a collection of signal vectors from a sequence of (possibly noisy) linear measurements, where each measurement is evaluated on an unknown signal drawn uniformly from this collection.
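A minimal simulator of this measurement model is sketched below; Gaussian queries and Gaussian noise are assumptions made only for illustration.

```python
import numpy as np

def sample_mixed_linear_measurements(signals, num_measurements, noise_std=0.1, rng=None):
    """Simulate the measurement model: each query x is answered with
    y = <x, beta_z> + noise, where beta_z is drawn uniformly from the
    collection of unknown signal vectors.
    """
    rng = rng or np.random.default_rng(0)
    signals = np.asarray(signals)               # shape (L, d): L unknown signal vectors
    L, d = signals.shape
    X = rng.standard_normal((num_measurements, d))
    z = rng.integers(L, size=num_measurements)  # hidden component labels
    y = np.einsum("nd,nd->n", X, signals[z]) + noise_std * rng.standard_normal(num_measurements)
    return X, y
```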
no code implementations • 13 Feb 2019 • Bobby Powers, David Tench, Emery D. Berger, Andrew McGregor
Programs written in C/C++ can suffer from serious memory fragmentation, leading to low utilization of memory, degraded performance, and application failure due to memory exhaustion.
Programming Languages • Data Structures and Algorithms • Performance
no code implementations • NeurIPS 2018 • Craig Greenberg, Nicholas Monath, Ari Kobren, Patrick Flaherty, Andrew McGregor, Andrew McCallum
For many classic structured prediction problems, probability distributions over the dependent variables can be efficiently computed using widely known algorithms and data structures (such as the forward-backward algorithm and its corresponding trellis for exact probability distributions in Markov models).
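For concreteness, the sketch below is the standard forward-backward recursion on an HMM, which computes exact posterior state marginals over the chain's trellis.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Standard forward-backward on an HMM: exact posterior marginals
    P(state_t = s | observations), computed over the chain's trellis.

    pi: (S,) initial state distribution, A: (S, S) transition matrix,
    B: (S, V) emission matrix, obs: sequence of observation indices.
    """
    S, T = len(pi), len(obs)
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                  # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):         # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```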