no code implementations • 13 Oct 2023 • Spencer L. Gordon, Manav Kant, Eric Ma, Leonard J. Schulman, Andrei Staicu
We show: (a) When the latents are uniformly distributed, the model is identifiable with a number of observables equal to the number of parameters (and hence best possible).
no code implementations • 25 Sep 2023 • Spencer L. Gordon, Erik Jahn, Bijan Mazaheri, Yuval Rabani, Leonard J. Schulman
We consider the problem of identifying, from statistics, a distribution of discrete random variables $X_1,\ldots, X_n$ that is a mixture of $k$ product distributions.
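The object studied here, a mixture of $k$ product distributions, can be sketched concretely. The snippet below is a minimal illustration, assuming binary-valued $X_i$; the weight vector, the bias matrix `P`, and the sampler are hypothetical choices for demonstration, not the paper's identification algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(weights, P, size):
    """Draw samples from a mixture of k product distributions on n bits.
    weights: (k,) mixing weights; P: (k, n) with P[j, i] = Pr[X_i = 1 | source j].
    Each sample: pick a hidden source j ~ weights, then set each bit
    independently according to row j of P (the product structure)."""
    k, n = P.shape
    sources = rng.choice(k, size=size, p=weights)
    return (rng.random((size, n)) < P[sources]).astype(int)

# Illustrative parameters (k = 2 sources, n = 3 bits).
weights = np.array([0.6, 0.4])
P = np.array([[0.9, 0.1, 0.5],
              [0.2, 0.8, 0.5]])
X = sample_mixture(weights, P, size=10000)
# The marginal of X_0 mixes the per-source biases: 0.6*0.9 + 0.4*0.2 = 0.62.
```

The identification problem is the inverse direction: recover `weights` and `P` from statistics of samples like `X`.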
no code implementations • 22 Dec 2021 • Spencer L. Gordon, Bijan Mazaheri, Yuval Rabani, Leonard J. Schulman
A Bayesian Network is a directed acyclic graph (DAG) on a set of $n$ random variables (the vertices); a Bayesian Network Distribution (BND) is a probability distribution on the random variables that is Markovian on the graph.
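The factorization that makes a distribution Markovian on its DAG can be shown on the smallest nontrivial graph. A minimal sketch, assuming the two-vertex DAG $X \to Y$ with illustrative (hypothetical) conditional probability tables:

```python
import numpy as np

# BND on the DAG X -> Y: the joint factors along the graph,
# P(x, y) = P(x) * P(y | x).
p_x = np.array([0.7, 0.3])            # P(X)
p_y_given_x = np.array([[0.9, 0.1],   # P(Y | X=0)
                        [0.4, 0.6]])  # P(Y | X=1)

joint = p_x[:, None] * p_y_given_x    # joint[x, y] = P(X=x, Y=y)
p_y = joint.sum(axis=0)               # marginal of Y by summing out X
```

For a general DAG on $n$ vertices the same pattern applies: the joint is the product, over vertices, of each variable's conditional distribution given its parents.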
no code implementations • 27 Jan 2021 • Spencer L. Gordon, Leonard J. Schulman
The Hadamard Extension of a matrix is the matrix consisting of all Hadamard products of subsets of its rows.
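The definition translates directly into code. A small sketch, with the convention (an assumption here) that the empty subset contributes the all-ones row:

```python
import itertools
import numpy as np

def hadamard_extension(M):
    """Return the Hadamard extension of M: one row per subset of M's rows,
    each being the entrywise (Hadamard) product of the chosen rows.
    The empty subset yields the all-ones row."""
    m, n = M.shape
    rows = []
    for r in range(m + 1):
        for subset in itertools.combinations(range(m), r):
            prod = np.ones(n)
            for i in subset:
                prod *= M[i]
            rows.append(prod)
    return np.array(rows)

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
H = hadamard_extension(M)
# Subsets {}, {0}, {1}, {0,1} give rows [1,1], [1,2], [3,4], [3,8].
```

A matrix with $m$ rows thus has a Hadamard extension with $2^m$ rows, one per subset.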
no code implementations • 29 Dec 2020 • Spencer L. Gordon, Bijan Mazaheri, Yuval Rabani, Leonard J. Schulman
We give an algorithm for source identification of a mixture of $k$ product distributions on $n$ bits.
no code implementations • 16 Jul 2020 • Spencer Gordon, Bijan Mazaheri, Leonard J. Schulman, Yuval Rabani
We give an algorithm for identifying a $k$-mixture from samples of $m=2k$ iid binary random variables, using a sample of size $\left(1/w_{\min}\right)^2 \cdot\left(1/\zeta\right)^{O(k)}$ and a post-sampling runtime of only $O(k^{2+o(1)})$ arithmetic operations.
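Why $m=2k$ observables suffice can be seen from the moment structure: for a $k$-mixture of coins with weights $w_i$ and biases $p_i$, the bits are iid given the source, so $\mathbb{E}[X_1 \cdots X_j] = \sum_i w_i\, p_i^j$, and the mixture has $2k$ free parameters. A minimal sketch with illustrative (hypothetical) parameters, computing the first $2k$ moments — not the paper's recovery algorithm:

```python
import numpy as np

# A k-mixture of coins: weights w, biases p.
# The j-th moment E[X_1 ... X_j] equals sum_i w_i * p_i**j,
# so the first m = 2k moments carry 2k numbers matching the
# 2k parameters of the mixture.
w = np.array([0.5, 0.5])
p = np.array([0.2, 0.8])
k = len(w)
moments = [float(np.sum(w * p**j)) for j in range(1, 2 * k + 1)]
# moments[0] = 0.5*0.2 + 0.5*0.8 = 0.5
# moments[1] = 0.5*0.04 + 0.5*0.64 = 0.34
```

Source identification inverts this map: given (noisy estimates of) the $2k$ moments, recover $w$ and $p$.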
no code implementations • 10 Apr 2015 • Jian Li, Yuval Rabani, Leonard J. Schulman, Chaitanya Swamy
We study the problem of learning from unlabeled samples very general statistical mixture models on large finite sets.