no code implementations • 18 Jul 2024 • Sitan Chen, Jerry Li, Allen Liu
We give the first tight sample complexity bounds for shadow tomography and classical shadows in the regime where the target error is below some sufficiently small inverse polynomial in the dimension of the Hilbert space.
no code implementations • 30 Apr 2024 • Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang
We study the problem of Hamiltonian structure learning from real-time evolution: given the ability to apply $e^{-\mathrm{i} Ht}$ for an unknown local Hamiltonian $H = \sum_{a = 1}^m \lambda_a E_a$ on $n$ qubits, the goal is to recover $H$.
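The access model can be illustrated with a small hedged sketch (not the paper's algorithm): a hypothetical 2-qubit local Hamiltonian $H = \sum_a \lambda_a E_a$ built from Pauli terms, and the real-time evolution $e^{-\mathrm{i}Ht}$ that the learner is allowed to apply.

```python
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Hypothetical terms E_a and coefficients lambda_a (for illustration only).
terms = [np.kron(Z, I), np.kron(I, Z), np.kron(X, X)]
lams = [0.7, -0.3, 0.5]
H = sum(l * E for l, E in zip(lams, terms))

def evolve(H, t):
    """Return e^{-iHt} via the eigendecomposition of the Hermitian H."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

U = evolve(H, t=0.1)
# Real-time evolution is unitary: U U^dagger = I.
assert np.allclose(U @ U.conj().T, np.eye(4))
```

The learner never sees $H$ directly; it only gets to apply $U = e^{-\mathrm{i}Ht}$ for chosen times $t$, and must recover the coefficients $\lambda_a$.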
no code implementations • 26 Feb 2024 • Sitan Chen, Jerry Li, Allen Liu
In this work, we study tomography in the natural setting where one can make measurements of $t$ copies at a time.
no code implementations • 3 Oct 2023 • Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang
Anshu, Arunachalam, Kuwahara, and Soleimanifar (arXiv:2004.07266) gave an algorithm to learn a Hamiltonian on $n$ qubits to precision $\epsilon$ with only polynomially many copies of the Gibbs state, but which takes exponential time.
no code implementations • 7 Aug 2023 • Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian
In the well-studied setting where $\mathbf{M}$ has incoherent row and column spans, our algorithms complete $\mathbf{M}$ to high precision from $mr^{2+o(1)}$ observations in $mr^{3 + o(1)}$ time (omitting logarithmic factors in problem parameters), improving upon the prior state-of-the-art [JN15] which used $\approx mr^5$ samples and $\approx mr^7$ time.
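The observation model can be sketched as follows (hypothetical sizes, not the paper's method): a rank-$r$ matrix $\mathbf{M}$ with $m$ rows, from which roughly $mr^{2+o(1)}$ randomly located entries are revealed to the completion algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 50, 2  # hypothetical dimensions for illustration

# A rank-r matrix M = U V^T; random Gaussian factors give incoherent
# row and column spans with high probability.
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, m))

# Reveal ~ m r^2 uniformly random entries (constants chosen arbitrarily).
num_obs = 6 * m * r**2
idx = rng.integers(0, m, size=(num_obs, 2))
observations = [(i, j, M[i, j]) for i, j in idx]
```

The completion task is to recover $\mathbf{M}$ to high precision from `observations` alone.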
no code implementations • 13 Jul 2023 • Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau
In this work we give a new approach to learning mixtures of linear dynamical systems that is based on tensor decompositions.
no code implementations • 23 Jan 2023 • Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau
Linear dynamical systems are the foundational statistical model upon which control theory is built.
no code implementations • 25 Jul 2022 • Allen Liu, Ankur Moitra
In this work, we study the problem of community detection in the stochastic block model with adversarial node corruptions.
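The corruption model can be illustrated with a toy sketch (not the paper's detection algorithm): sample a two-community stochastic block model, then let an "adversary" arbitrarily rewire every edge incident to a small set of corrupted nodes.

```python
import random

def sample_sbm(n, p_in, p_out, rng):
    """Two balanced communities; edges appear with prob p_in within a
    community and p_out across communities."""
    labels = [i % 2 for i in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                edges.add((i, j))
    return labels, edges

def corrupt_nodes(edges, bad, rng, n):
    """Remove every edge touching a corrupted node, then reconnect those
    nodes arbitrarily -- the adversary controls their whole neighborhood."""
    kept = {e for e in edges if e[0] not in bad and e[1] not in bad}
    for v in bad:
        for u in rng.sample(range(n), 3):
            if u != v:
                kept.add((min(u, v), max(u, v)))
    return kept

rng = random.Random(0)
labels, edges = sample_sbm(60, 0.5, 0.05, rng)
corrupted = corrupt_nodes(edges, bad={0, 1, 2}, rng=rng, n=60)
```

The question is whether the two communities can still be recovered from the corrupted graph, with no assumption on how the adversary rewires the bad nodes.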
no code implementations • 10 Jun 2022 • Sitan Chen, Brice Huang, Jerry Li, Allen Liu, Mark Sellke
We give an adaptive algorithm that outputs a state which is $\gamma$-close in infidelity to $\rho$ using only $\tilde{O}(d^3/\gamma)$ copies, which is optimal for incoherent measurements.
no code implementations • 14 Apr 2022 • Sitan Chen, Brice Huang, Jerry Li, Allen Liu
When $\sigma$ is the maximally mixed state $\frac{1}{d} I_d$, this is known as mixedness testing.
no code implementations • 8 Mar 2022 • Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian
We design a new iterative method tailored to the geometry of sparse recovery which is provably robust to our semi-random model.
no code implementations • 19 Feb 2022 • Allen Liu, Mark Sellke
We ask whether it is possible to obtain optimal instance-dependent regret $\tilde{O}(1/\Delta)$ where $\Delta$ is the gap between the $m$-th and $(m+1)$-st best arms.
no code implementations • 13 Dec 2021 • Allen Liu, Ankur Moitra
Maximum likelihood estimation furnishes powerful insights into voting theory and the design of voting rules.
no code implementations • 1 Dec 2021 • Jerry Li, Allen Liu
We give the first algorithm that runs in polynomial time and almost matches this guarantee.
no code implementations • NeurIPS 2021 • Guru Guruganesh, Allen Liu, Jon Schneider, Joshua Wang
We consider the problem of multi-class classification, where adversarially chosen queries arrive in a stream and must each be assigned a label online.
no code implementations • 5 Jun 2021 • Jerry Li, Allen Liu, Ankur Moitra
Given $\textsf{poly}(k/\epsilon)$ samples from a distribution that is $\epsilon$-close in TV distance to a GMM with $k$ components, we can construct a GMM with $\widetilde{O}(k)$ components that approximates the distribution to within $\widetilde{O}(\epsilon)$ in $\textsf{poly}(k/\epsilon)$ time.
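A minimal sketch of the object being learned (a one-dimensional mixture for simplicity, with hypothetical parameters): drawing samples from a $k$-component Gaussian mixture, which is all the algorithm above consumes.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
weights = np.array([0.5, 0.3, 0.2])  # mixing weights, sum to 1
means = np.array([-4.0, 0.0, 5.0])
stds = np.array([1.0, 0.5, 2.0])

def sample_gmm(n):
    """Draw n samples: pick a component by weight, then sample its Gaussian."""
    comps = rng.choice(k, size=n, p=weights)
    return rng.normal(means[comps], stds[comps])

xs = sample_gmm(10_000)
# The mixture mean is sum_i w_i mu_i = -1.0; the empirical mean is close.
assert abs(float(xs.mean()) + 1.0) < 0.3
```

Proper learning means the output is itself a GMM, here with $\widetilde{O}(k)$ components rather than exactly $k$.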
no code implementations • 4 Jun 2021 • Allen Liu, Ankur Moitra
Our main result is a quasi-polynomial time algorithm for orbit recovery over $SO(3)$ in this model.
no code implementations • 19 Apr 2021 • Allen Liu, Ankur Moitra
In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with $k$ components from $\epsilon$-corrupted samples up to accuracy $\widetilde{O}(\epsilon)$ in total variation distance for any constant $k$ and with mild assumptions on the mixture.
no code implementations • NeurIPS 2020 • Allen Liu, Renato Leme, Jon Schneider
Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression.
no code implementations • 6 Nov 2020 • Allen Liu, Ankur Moitra
This work represents a natural coalescence of two important lines of work: learning mixtures of Gaussians and algorithmic robust statistics.
no code implementations • NeurIPS 2020 • Allen Liu, Ankur Moitra
We prove strong guarantees: our algorithm converges linearly to the true tensors even when the factors are highly correlated, and it can be implemented in nearly linear time.
no code implementations • 3 Mar 2020 • Allen Liu, Renato Paes Leme, Jon Schneider
We provide a generic algorithm with $O(d^2)$ regret where $d$ is the covering dimension of this class.
no code implementations • 17 Aug 2018 • Allen Liu, Ankur Moitra
Mixtures of Mallows models are a popular generative model for ranking data coming from a heterogeneous population.
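The single-model building block can be sketched with the standard repeated-insertion sampler (a hedged illustration of the generative process, not the paper's learning algorithm): with central ranking $1,\dots,n$ and spread parameter $\phi \in (0,1]$, item $i$ is inserted at position $j \le i$ with probability proportional to $\phi^{\,i-j}$.

```python
import random

def sample_mallows(n, phi, rng):
    """Repeated-insertion sampling from a Mallows model centered at the
    identity ranking 1..n with spread parameter phi."""
    ranking = []
    for i in range(1, n + 1):
        # Insertion position j in {1, ..., i}, weight phi**(i - j):
        # j = i (the end, i.e. the central ranking's position) is likeliest.
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        r, pos = rng.random() * sum(weights), 0
        while r > weights[pos]:
            r -= weights[pos]
            pos += 1
        ranking.insert(pos, i)
    return ranking

rng = random.Random(1)
pi = sample_mallows(5, phi=0.3, rng=rng)
assert sorted(pi) == [1, 2, 3, 4, 5]  # a permutation of 1..5
```

A mixture draws the component (central ranking and $\phi$) from the population first and then samples a ranking from it; the learning problem is to recover the components from such rankings.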