Search Results for author: Andrew McGregor

Found 10 papers, 1 paper with code

Improving the Efficiency of the PC Algorithm by Using Model-Based Conditional Independence Tests

no code implementations 12 Nov 2022 Erica Cai, Andrew McGregor, David Jensen

We propose such a pre-processing step for the PC algorithm; it relies on performing CI tests on a few randomly selected, large conditioning sets.
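
The excerpt above describes pruning edges with conditional-independence (CI) tests on a few large, randomly chosen conditioning sets before running PC. Below is a minimal sketch of that idea in Python, assuming Gaussian data and a partial-correlation (Fisher-z) CI test; the set sizes, significance threshold, and helper names are illustrative choices, not the authors' implementation.

# Sketch: prune edges with CI tests on a few large random conditioning sets
# before running PC. Assumes Gaussian data; thresholds are illustrative.
import numpy as np
import math

def is_independent(X, i, j, cond, alpha=0.05):
    """Test X_i independent of X_j given X_cond via partial correlation of residuals."""
    n = X.shape[0]
    Z = np.column_stack([np.ones(n)] + [X[:, k] for k in cond])
    res_i = X[:, i] - Z @ np.linalg.lstsq(Z, X[:, i], rcond=None)[0]
    res_j = X[:, j] - Z @ np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
    r = np.corrcoef(res_i, res_j)[0, 1]
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - len(cond) - 3)
    p = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value under the null
    return p > alpha                        # fail to reject => treat as independent

def prune_edges(X, num_sets=3, set_size=10, seed=0):
    """Drop edges that a few large random conditioning sets already separate."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    kept = set()
    for i in range(d):
        for j in range(i + 1, d):
            others = [k for k in range(d) if k not in (i, j)]
            size = min(set_size, len(others))
            separated = any(
                is_independent(X, i, j, list(rng.choice(others, size=size, replace=False)))
                for _ in range(num_sets)
            )
            if not separated:
                kept.add((i, j))
    return kept  # the surviving skeleton is then handed to the usual PC run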

Estimation of Entropy in Constant Space with Improved Sample Complexity

no code implementations 19 May 2022 Maryam Aliakbarpour, Andrew McGregor, Jelani Nelson, Erik Waingarten

Recent work of Acharya et al. (NeurIPS 2019) showed how to estimate the entropy of a distribution $\mathcal D$ over an alphabet of size $k$ up to $\pm\epsilon$ additive error by streaming over $(k/\epsilon^3) \cdot \text{polylog}(1/\epsilon)$ i.i.d. samples.
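
For contrast with the constant-space guarantee discussed above, the short sketch below is the naive plug-in estimator, which must keep one counter per symbol and therefore uses O(k) words of memory. It is only a baseline for comparison, not the paper's algorithm.

# Naive plug-in entropy estimate from i.i.d. samples, shown only for contrast:
# it stores a count per symbol (O(k) words), unlike a constant-space streaming method.
from collections import Counter
import math

def plugin_entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(plugin_entropy("aabbbbcc"))   # entropy (in bits) of the empirical distribution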

Intervention Efficient Algorithms for Approximate Learning of Causal Graphs

no code implementations 27 Dec 2020 Raghavendra Addanki, Andrew McGregor, Cameron Musco

Our goal is to recover the directions of all causal or ancestral relations in $G$, via a minimum cost set of interventions.

Efficient Intervention Design for Causal Discovery with Latents

no code implementations ICML 2020 Raghavendra Addanki, Shiva Prasad Kasiviswanathan, Andrew McGregor, Cameron Musco

We consider recovering a causal graph in the presence of latent variables, where we seek to minimize the cost of interventions used in the recovery process.

Causal Discovery

Data Structures & Algorithms for Exact Inference in Hierarchical Clustering

1 code implementation 26 Feb 2020 Craig S. Greenberg, Sebastian Macaluso, Nicholas Monath, Ji-Ah Lee, Patrick Flaherty, Kyle Cranmer, Andrew McGregor, Andrew McCallum

In contrast to existing methods, we present dynamic-programming algorithms for \emph{exact} inference in hierarchical clustering based on a novel trellis data structure, and we prove that we can exactly compute the partition function, maximum likelihood hierarchy, and marginal probabilities of sub-hierarchies and clusters.

Clustering, Small Data Image Classification
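
The trellis supports subset dynamic programs of the flavor sketched below, which sums a product of per-cluster potentials over every binary hierarchy of a small element set. The potential phi is a placeholder and the bitmask recursion is only an illustration; the paper defines the actual trellis data structure and energies.

# Sketch: exact partition function over all binary hierarchies of {0,...,n-1},
# with a placeholder per-cluster potential phi(cluster_bitmask).
from functools import lru_cache

def partition_function(n, phi):
    full = (1 << n) - 1

    @lru_cache(maxsize=None)
    def Z(mask):
        if (mask & (mask - 1)) == 0:      # single element
            return phi(mask)
        low = mask & -mask                # fix the lowest element to avoid double-counting splits
        rest = mask ^ low
        total = 0.0
        sub = rest
        while True:                       # enumerate all subsets of `rest`, including the empty set
            left = low | sub
            right = mask ^ left
            if right:                     # split into two non-empty child clusters
                total += Z(left) * Z(right)
            if sub == 0:
                break
            sub = (sub - 1) & rest
        return phi(mask) * total

    return Z(full)

# With the trivial potential phi = 1 this counts hierarchies:
# for n = 4 it prints 15.0, the number of binary trees on 4 labelled leaves.
print(partition_function(4, lambda mask: 1.0))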

Algebraic and Analytic Approaches for Parameter Learning in Mixture Models

no code implementations 19 Jan 2020 Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal

Our second approach uses algebraic and combinatorial tools and applies to binomial mixtures with shared trial parameter $N$ and differing success parameters, as well as to mixtures of geometric distributions.
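
As a toy illustration of the analytic (moment-based) side of this setting, the sketch below recovers the two success parameters of a uniform mixture of Binomial(N, p1) and Binomial(N, p2) with a known shared N from the first two factorial moments. The uniform weights, two components, and known N are simplifying assumptions; this is not the paper's algorithm.

# Toy method-of-moments for a uniform mixture of Binom(N, p1) and Binom(N, p2), shared known N.
import numpy as np

def two_binomial_moments(samples, N):
    x = np.asarray(samples, dtype=float)
    m1 = x.mean()                    # E[X]      = N (p1 + p2) / 2
    m2 = (x * (x - 1)).mean()        # E[X(X-1)] = N (N-1) (p1^2 + p2^2) / 2
    s = 2 * m1 / N                   # p1 + p2
    q = 2 * m2 / (N * (N - 1))       # p1^2 + p2^2
    prod = (s ** 2 - q) / 2          # p1 * p2
    r = np.sqrt(max(s ** 2 - 4 * prod, 0.0))
    return sorted(((s - r) / 2, (s + r) / 2))

rng = np.random.default_rng(0)
N, p1, p2 = 50, 0.2, 0.7
z = rng.integers(0, 2, size=200_000)
draws = np.where(z == 0, rng.binomial(N, p1, size=z.size), rng.binomial(N, p2, size=z.size))
print(two_binomial_moments(draws, N))   # approximately [0.2, 0.7]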

Sample Complexity of Learning Mixture of Sparse Linear Regressions

no code implementations NeurIPS 2019 Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal

Our techniques are quite different from those in the previous work: for the noiseless case, we rely on a property of sparse polynomials, and for the noisy case, we provide new connections to learning Gaussian mixtures and use ideas from the theory of error-correcting codes.

Open-Ended Question Answering

Sample Complexity of Learning Mixtures of Sparse Linear Regressions

no code implementations 30 Oct 2019 Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal

In the problem of learning mixtures of linear regressions, the goal is to learn a collection of signal vectors from a sequence of (possibly noisy) linear measurements, where each measurement is evaluated on an unknown signal drawn uniformly from this collection.

Open-Ended Question Answering
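
The measurement model in the excerpt above is easy to spell out. The short sketch below generates data from it with illustrative dimensions, sparsity, and noise level; recovering the hidden signal vectors from (X, y) alone is exactly the problem the paper analyzes, and is not shown here.

# Sketch of the data model: each query x_i is evaluated against one of L unknown
# k-sparse signal vectors, chosen uniformly at random, with optional Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n, L, k, num_measurements, sigma = 100, 3, 5, 500, 0.01

betas = np.zeros((L, n))                         # L unknown k-sparse signals in R^n
for ell in range(L):
    support = rng.choice(n, size=k, replace=False)
    betas[ell, support] = rng.standard_normal(k)

X = rng.standard_normal((num_measurements, n))   # query vectors
labels = rng.integers(0, L, size=num_measurements)   # hidden: which signal each query hits
y = np.einsum('ij,ij->i', X, betas[labels]) + sigma * rng.standard_normal(num_measurements)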

Mesh: Compacting Memory Management for C/C++ Applications

no code implementations 13 Feb 2019 Bobby Powers, David Tench, Emery D. Berger, Andrew McGregor

Programs written in C/C++ can suffer from serious memory fragmentation, leading to low utilization of memory, degraded performance, and application failure due to memory exhaustion.

Programming Languages, Data Structures and Algorithms, Performance
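
Mesh's central idea is to merge ("mesh") spans whose live allocations never occupy the same slot offset, reclaiming physically fragmented memory without moving any object's virtual address. The toy Python sketch below only illustrates the disjointness check and merge over slot-index sets; it is not Mesh's C/C++ implementation.

# Toy illustration of "meshing": two spans of same-sized slots can share one
# physical span when their live allocations occupy disjoint slot offsets.
def can_mesh(span_a, span_b):
    """Spans are sets of occupied slot indices; meshable iff they never collide."""
    return not (span_a & span_b)

def mesh(span_a, span_b):
    if not can_mesh(span_a, span_b):
        raise ValueError("live allocations collide; cannot mesh these spans")
    return span_a | span_b   # one physical span now backs both virtual spans

span_a = {0, 2, 4, 6}        # each span is half empty...
span_b = {1, 3, 5, 7}
print(can_mesh(span_a, span_b), mesh(span_a, span_b))   # ...together they fill one span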

Compact Representation of Uncertainty in Clustering

no code implementations NeurIPS 2018 Craig Greenberg, Nicholas Monath, Ari Kobren, Patrick Flaherty, Andrew McGregor, Andrew McCallum

For many classic structured prediction problems, probability distributions over the dependent variables can be efficiently computed using widely-known algorithms and data structures (such as forward-backward, and its corresponding trellis for exact probability distributions in Markov models).

Clustering, Small Data Image Classification +1
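
Forward-backward, cited above as the classic example, computes exact per-position posterior marginals in a chain-structured model. Here is a compact sketch assuming a small HMM; the initial, transition, and emission tables are purely illustrative.

# Sketch: forward-backward for exact state marginals in a small HMM.
import numpy as np

def forward_backward(init, trans, emit, obs):
    """init: (S,), trans: (S,S), emit: (S,O), obs: list of ints -> (T,S) posterior marginals."""
    T, S = len(obs), len(init)
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = init * emit[:, obs[0]]
    for t in range(1, T):                          # forward pass
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
    for t in range(T - 2, -1, -1):                 # backward pass
        beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

init = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_backward(init, trans, emit, obs=[0, 1, 0]))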
