no code implementations • 14 Apr 2022 • Sitan Chen, Brice Huang, Jerry Li, Allen Liu
When $\sigma$ is the maximally mixed state $\frac{1}{d} I_d$, this is known as mixedness testing.
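As a quick illustration of the quantity being tested (not the paper's algorithm), here is a minimal NumPy sketch computing the trace distance between a density matrix and the maximally mixed state $\frac{1}{d} I_d$; the function name is ours.

```python
import numpy as np

def trace_distance_to_maximally_mixed(rho):
    """Trace distance (1/2)||rho - I/d||_1 between a density matrix rho
    and the maximally mixed state I/d."""
    d = rho.shape[0]
    diff = rho - np.eye(d) / d
    # For a Hermitian matrix, the trace norm is the sum of |eigenvalues|.
    eigvals = np.linalg.eigvalsh(diff)
    return 0.5 * np.abs(eigvals).sum()

# Example: a pure state is far from maximally mixed.
d = 4
pure = np.zeros((d, d)); pure[0, 0] = 1.0
print(trace_distance_to_maximally_mixed(pure))         # (d-1)/d = 0.75
print(trace_distance_to_maximally_mixed(np.eye(d)/d))  # 0.0
```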
no code implementations • 8 Mar 2022 • Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian
We design a new iterative method tailored to the geometry of sparse recovery which is provably robust to our semi-random model.
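The paper's method is not reproduced here; for reference, the sketch below is standard iterative hard thresholding, the classical non-robust baseline for sparse recovery, with illustrative names and parameters.

```python
import numpy as np

def iht(A, y, k, step=None, iters=200):
    """Iterative hard thresholding for min ||Ax - y||_2 s.t. ||x||_0 <= k.
    A textbook baseline; the paper's semi-random-robust method differs."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/||A||^2 keeps iterates stable
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)        # gradient step on the residual
        keep = np.argsort(np.abs(x))[-k:]       # keep the k largest entries
        x_new = np.zeros(n)
        x_new[keep] = x[keep]
        x = x_new
    return x

rng = np.random.default_rng(0)
m, n, k = 100, 400, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = iht(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))           # typically near zero here
```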
no code implementations • 19 Feb 2022 • Allen Liu, Mark Sellke
We ask whether it is possible to obtain optimal instance-dependent regret $\tilde{O}(1/\Delta)$, where $\Delta$ is the gap between the $m$-th and $(m+1)$-st best arms.
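A minimal sketch of how the gap $\Delta$ is defined from the arm means (illustrative helper name; the bandit algorithm itself is not shown):

```python
import numpy as np

def top_m_gap(means, m):
    """Gap Delta between the m-th and (m+1)-st largest arm means."""
    s = np.sort(means)[::-1]
    return s[m - 1] - s[m]

means = np.array([0.9, 0.8, 0.75, 0.4, 0.1])
delta = top_m_gap(means, m=3)   # 0.75 - 0.4 = 0.35
print(delta, "regret target ~ 1/Delta =", 1 / delta)
```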
no code implementations • 13 Dec 2021 • Allen Liu, Ankur Moitra
In this work we study the problem of robustly learning a Mallows model.
no code implementations • 1 Dec 2021 • Jerry Li, Allen Liu
We give the first algorithm that runs in polynomial time and almost matches this guarantee.
no code implementations • NeurIPS 2021 • Guru Guruganesh, Allen Liu, Jon Schneider, Joshua Wang
We consider the problem of multi-class classification, where a stream of adversarially chosen queries arrives and each query must be assigned a label online.
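The paper's online learner is not shown here; as a concrete instance of the prediction protocol (receive a query, commit to a label, then observe the truth), here is the classical multi-class perceptron, a mistake-driven baseline on illustrative toy data:

```python
import numpy as np

class MulticlassPerceptron:
    """Classical mistake-driven online multi-class learner:
    one weight vector per label, updated only on mistakes."""
    def __init__(self, n_features, n_labels):
        self.W = np.zeros((n_labels, n_features))

    def predict(self, x):
        return int(np.argmax(self.W @ x))

    def update(self, x, y_true, y_pred):
        if y_pred != y_true:
            self.W[y_true] += x
            self.W[y_pred] -= x

rng = np.random.default_rng(1)
learner = MulticlassPerceptron(n_features=10, n_labels=3)
mistakes = 0
for t in range(1000):
    x = rng.standard_normal(10)
    y = int(np.argmax(x[:3]))       # toy, linearly realizable labels
    y_hat = learner.predict(x)      # commit to a label first
    mistakes += (y_hat != y)
    learner.update(x, y, y_hat)     # true label revealed afterwards
print("mistakes:", mistakes)
```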
no code implementations • 5 Jun 2021 • Jerry Li, Allen Liu, Ankur Moitra
In particular, we give the first algorithms for approximately (and robustly) determining the number of components in a Gaussian mixture model that work without a separation condition.
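For contrast, a common non-robust heuristic for choosing the number of components is to fit GMMs at several values of $k$ and minimize BIC; the sketch below runs it with scikit-learn on synthetic, well-separated data and is only a baseline, not the paper's separation-free algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Three well-separated Gaussian clusters in R^2.
X = np.vstack([
    rng.normal(-2.0, 1.0, size=(300, 2)),
    rng.normal(+2.0, 1.0, size=(300, 2)),
    rng.normal((0.0, 4.0), 1.0, size=(300, 2)),
])
# Fit a GMM for each candidate k and keep the BIC score.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
print(min(bics, key=bics.get))  # typically 3 on this easy example
```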
no code implementations • 4 Jun 2021 • Allen Liu, Ankur Moitra
Our main result is a quasi-polynomial time algorithm for orbit recovery over $SO(3)$ in this model.
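A toy sketch of the observation model only (each sample is an unknown Haar-random rotation of a fixed signal, plus noise), using SciPy's rotation sampler; the point-cloud "signal" and noise level are illustrative stand-ins for the paper's setting, and its recovery algorithm is not shown.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
signal = rng.standard_normal((5, 3))   # unknown structure: 5 points in R^3
sigma = 0.1                            # illustrative noise level

def observe():
    R = Rotation.random().as_matrix()  # Haar-random element of SO(3)
    return signal @ R.T + sigma * rng.standard_normal(signal.shape)

samples = [observe() for _ in range(1000)]
```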
no code implementations • 19 Apr 2021 • Allen Liu, Ankur Moitra
In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with $k$ components from $\epsilon$-corrupted samples up to accuracy $\widetilde{O}(\epsilon)$ in total variation distance for any constant $k$ and with mild assumptions on the mixture.
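A minimal sketch of the $\epsilon$-corruption input model for a $k=2$ mixture (the learning algorithm itself is not shown; the outlier values are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 10_000, 0.05
# Draw n samples from an equal-weight mixture of two Gaussians in R^2.
labels = rng.random(n) < 0.5
clean = np.where(labels[:, None],
                 rng.normal(-3.0, 1.0, (n, 2)),
                 rng.normal(+3.0, 1.0, (n, 2)))
# An "adversary" replaces an eps fraction of the samples arbitrarily.
corrupted = clean.copy()
idx = rng.choice(n, int(eps * n), replace=False)
corrupted[idx] = 100.0
```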
no code implementations • NeurIPS 2020 • Allen Liu, Renato Leme, Jon Schneider
Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression.
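One natural way to write the discontinuous pricing loss the abstract alludes to: posting price $p$ to a buyer with value $v$ earns $p$ if $p \le v$ and nothing otherwise. The exact normalization in the paper may differ; function names are ours.

```python
def revenue(price, value):
    """Posted-price revenue: the sale happens iff price <= value."""
    return price * (price <= value)

def myersonian_loss(price, value):
    """Revenue left on the table; discontinuous in price at price == value."""
    return value - revenue(price, value)

v = 1.0
for p in [0.5, 0.99, 1.0, 1.01]:
    print(p, myersonian_loss(p, v))  # jumps from 0.0 at p=1.0 to 1.0 at p=1.01
```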
no code implementations • 6 Nov 2020 • Allen Liu, Ankur Moitra
This work represents a natural coalescence of two important lines of work: learning mixtures of Gaussians and algorithmic robust statistics.
no code implementations • NeurIPS 2020 • Allen Liu, Ankur Moitra
We establish strong provable guarantees: our algorithm converges linearly to the true tensors even when the factors are highly correlated, and it can be implemented in nearly linear time.
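For context, the textbook baseline here is alternating least squares for CP decomposition; the sketch below is that baseline, not the paper's nearly-linear-time, linearly convergent method.

```python
import numpy as np

def cp_als(T, r, iters=100):
    """Textbook alternating least squares for a rank-r CP decomposition
    of a 3-way tensor T. Each factor is updated by a least-squares solve."""
    rng = np.random.default_rng(0)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, r)) for n in (I, J, K))
    for _ in range(iters):
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((8, 3)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)   # exact rank-3 tensor
A, B, C = cp_als(T, r=3)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))  # typically near 0
```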
no code implementations • 3 Mar 2020 • Allen Liu, Renato Paes Leme, Jon Schneider
We provide a generic algorithm with $O(d^2)$ regret, where $d$ is the covering dimension of this class.
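A short sketch of the underlying notion: a greedy $\epsilon$-net gives a covering-number estimate $N(\epsilon)$, and the covering dimension governs how $\log N(\epsilon)$ grows as $\epsilon \to 0$. Names and data below are illustrative.

```python
import numpy as np

def greedy_covering_number(points, eps):
    """Greedy epsilon-net: repeatedly pick an uncovered point as a center
    and discard everything within distance eps. The resulting count is
    within a constant factor of the true covering number N(eps)."""
    remaining = points.copy()
    centers = 0
    while len(remaining):
        c = remaining[0]
        remaining = remaining[np.linalg.norm(remaining - c, axis=1) > eps]
        centers += 1
    return centers

rng = np.random.default_rng(0)
cube = rng.random((5000, 2))   # points in [0,1]^2: covering dimension 2
for eps in [0.4, 0.2, 0.1]:
    print(eps, greedy_covering_number(cube, eps))  # grows roughly like (1/eps)^2
```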
no code implementations • 17 Aug 2018 • Allen Liu, Ankur Moitra
Mixtures of Mallows models are a popular generative model for ranking data coming from a heterogeneous population.
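A minimal sampler for this generative model, assuming Kendall-tau Mallows components drawn via the standard repeated-insertion method (the mixture weights, centers, and dispersions below are illustrative):

```python
import numpy as np

def sample_mallows(center, phi, rng):
    """Repeated-insertion sampling from a Mallows model with Kendall-tau
    dispersion phi in (0, 1]: the i-th item of the central ranking is
    inserted at position j with probability proportional to phi**(i-j)."""
    ranking = []
    for i, item in enumerate(center, start=1):
        probs = phi ** (i - np.arange(1, i + 1))
        probs /= probs.sum()
        ranking.insert(rng.choice(i, p=probs), item)
    return ranking

def sample_mixture(centers, phis, weights, rng):
    k = rng.choice(len(weights), p=weights)  # pick a sub-population
    return sample_mallows(centers[k], phis[k], rng)

rng = np.random.default_rng(0)
centers = [list(range(5)), [4, 3, 2, 1, 0]]  # two "opposite" sub-populations
print(sample_mixture(centers, phis=[0.3, 0.3], weights=[0.6, 0.4], rng=rng))
```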