Search Results for author: Allen Liu

Found 23 papers, 0 papers with code

Optimal high-precision shadow estimation

no code implementations 18 Jul 2024 Sitan Chen, Jerry Li, Allen Liu

We give the first tight sample complexity bounds for shadow tomography and classical shadows in the regime where the target error is below some sufficiently small inverse polynomial in the dimension of the Hilbert space.

Dimensionality Reduction
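
The classical-shadows primitive that these bounds concern can be made concrete with a minimal single-qubit sketch: measure in a uniformly random Pauli basis and invert the measurement channel. This is an illustrative toy with a hypothetical state and observable, not the paper's high-precision estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def eigbasis(P):
    # Columns are the eigenvectors of the Pauli operator P.
    _, vecs = np.linalg.eigh(P)
    return vecs

bases = [eigbasis(P) for P in (X, Y, Z)]

def classical_shadows(rho, n_samples):
    """Measure rho in a random Pauli basis and invert the measurement channel:
    each snapshot is 3 |psi><psi| - I, which averages to rho."""
    snaps = []
    for _ in range(n_samples):
        U = bases[rng.integers(3)]                      # random basis X, Y or Z
        probs = np.real(np.diag(U.conj().T @ rho @ U))  # Born-rule outcome probabilities
        probs = np.clip(probs, 0, None); probs /= probs.sum()
        b = rng.choice(2, p=probs)                      # measurement outcome
        psi = U[:, b].reshape(2, 1)                     # post-measurement eigenstate
        snaps.append(3 * (psi @ psi.conj().T) - I2)
    return snaps

# Toy target: a slightly mixed state near |0><0|; estimate <Z> from the shadows.
rho = 0.9 * np.array([[1, 0], [0, 0]], dtype=complex) + 0.1 * I2 / 2
snaps = classical_shadows(rho, 20000)
est = np.mean([np.real(np.trace(Z @ s)) for s in snaps])
print("estimated <Z>:", est, " exact:", np.real(np.trace(Z @ rho)))
```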

Structure learning of Hamiltonians from real-time evolution

no code implementations 30 Apr 2024 Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang

We study the problem of Hamiltonian structure learning from real-time evolution: given the ability to apply $e^{-\mathrm{i} Ht}$ for an unknown local Hamiltonian $H = \sum_{a = 1}^m \lambda_a E_a$ on $n$ qubits, the goal is to recover $H$.
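
A small sketch of the access model, assuming a hypothetical 3-qubit Hamiltonian: the only operation available is applying $e^{-\mathrm{i} Ht}$, and a short-time expansion already exposes linear combinations of the coefficients $\lambda_a$. This is not the paper's algorithm, just the setup.

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

# Hypothetical 3-qubit local Hamiltonian H = sum_a lambda_a E_a built from Pauli strings.
I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.array([[1., 0.], [0., -1.]])
def pauli_string(ops):
    return reduce(np.kron, ops)                 # tensor product of single-qubit operators

terms = [(0.7, pauli_string([Z, Z, I2])),       # lambda_1 E_1
         (-0.3, pauli_string([I2, Z, Z])),      # lambda_2 E_2
         (0.5, pauli_string([X, I2, I2]))]      # lambda_3 E_3
H = sum(lam * E for lam, E in terms)

def evolve(state, t):
    """Black-box real-time evolution e^{-iHt}|state>, the only access assumed."""
    return expm(-1j * H * t) @ state

# Short-time expansion: <psi| e^{-iH dt} |psi> ~ 1 - i dt <psi|H|psi>.
# For |000>, <ZZI> = <IZZ> = 1 and <XII> = 0, so <psi|H|psi> = lambda_1 + lambda_2.
psi = np.zeros(8); psi[0] = 1.0
dt = 1e-4
overlap = psi @ evolve(psi, dt)
print("estimated lambda_1 + lambda_2:", -np.imag(overlap) / dt, " exact:", 0.7 - 0.3)
```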

An optimal tradeoff between entanglement and copy complexity for state tomography

no code implementations 26 Feb 2024 Sitan Chen, Jerry Li, Allen Liu

In this work, we study tomography in the natural setting where one can make measurements of $t$ copies at a time.

Learning quantum Hamiltonians at any temperature in polynomial time

no code implementations 3 Oct 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang

Anshu, Arunachalam, Kuwahara, and Soleimanifar (arXiv:2004.07266) gave an algorithm to learn a Hamiltonian on $n$ qubits to precision $\epsilon$ with only polynomially many copies of the Gibbs state, but which takes exponential time.
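
For context, the object being learned is the Gibbs state $\rho = e^{-\beta H}/\mathrm{tr}(e^{-\beta H})$. The toy 2-qubit example below (with hypothetical coefficients) simply builds it exactly and reads off the kind of local expectation value such algorithms estimate from copies.

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-qubit Hamiltonian and its Gibbs state rho = e^{-beta H} / tr(e^{-beta H}).
X = np.array([[0., 1.], [1., 0.]]); Z = np.array([[1., 0.], [0., -1.]]); I2 = np.eye(2)
H = 0.8 * np.kron(Z, Z) + 0.3 * np.kron(X, I2)

def gibbs(H, beta):
    M = expm(-beta * H)
    return M / np.trace(M)

rho = gibbs(H, beta=2.0)
# A local expectation value under the Gibbs state, the statistic a learner would
# estimate from measurements on copies of rho.
print("<Z x Z> at beta = 2:", np.real(np.trace(np.kron(Z, Z) @ rho)))
```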

Matrix Completion in Almost-Verification Time

no code implementations 7 Aug 2023 Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian

In the well-studied setting where $\mathbf{M}$ has incoherent row and column spans, our algorithms complete $\mathbf{M}$ to high precision from $mr^{2+o(1)}$ observations in $mr^{3 + o(1)}$ time (omitting logarithmic factors in problem parameters), improving upon the prior state-of-the-art [JN15] which used $\approx mr^5$ samples and $\approx mr^7$ time.

Low-Rank Matrix Completion
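
For orientation, here is a generic alternating-least-squares completion of a toy low-rank matrix from a random subset of entries; it illustrates the problem only, not the almost-verification-time algorithm or its $mr^{2+o(1)}$ sample bound. Dimensions and the small ridge term are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth rank-r matrix and a random 30% mask of observed entries.
m, n, r = 60, 50, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.3

# Alternating least squares on the observed entries (small ridge term for stability).
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
lam = 1e-3
for _ in range(50):
    for i in range(m):                       # update each row of U with V fixed
        A = V[mask[i]]
        U[i] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M[i, mask[i]])
    for j in range(n):                       # update each row of V with U fixed
        A = U[mask[:, j]]
        V[j] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M[mask[:, j], j])

print("relative completion error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```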

Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems

no code implementations 13 Jul 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

In this work we give a new approach to learning mixtures of linear dynamical systems that is based on tensor decompositions.

Tensor Decomposition, Time Series
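
The underlying primitive, CP tensor decomposition, can be sketched with a plain alternating-least-squares routine on a synthetic low-rank tensor; the paper's contribution lies in how such decompositions are combined with control-theoretic structure, which this toy does not attempt.

```python
import numpy as np

rng = np.random.default_rng(2)

def cp_als(T, rank, iters=200):
    """Alternating least squares for a rank-`rank` CP decomposition of a 3-way tensor."""
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        # Each update solves the normal equations with the other two factors fixed.
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic rank-3 tensor, then recover a rank-3 approximation.
r = 3
A0, B0, C0 = (rng.standard_normal((d, r)) for d in (8, 9, 10))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=r)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```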

A New Approach to Learning Linear Dynamical Systems

no code implementations 23 Jan 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

Linear dynamical systems are the foundational statistical model upon which control theory is built.
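
As a point of reference, a fully observed linear dynamical system $x_{t+1} = A x_t + w_t$ can be identified by ordinary least squares on a single trajectory; the paper's partially observed setting is much harder, and this sketch makes no attempt at it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a stable LDS x_{t+1} = A x_t + w_t, then recover A by least squares.
d, T = 4, 5000
A_true = 0.5 * np.linalg.qr(rng.standard_normal((d, d)))[0]   # spectral radius <= 0.5
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(d)

# Least squares: minimize sum_t ||x_{t+1} - A x_t||^2 over A.
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
print("estimation error ||A_hat - A||_F:", np.linalg.norm(A_hat - A_true))
```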

Minimax Rates for Robust Community Detection

no code implementations 25 Jul 2022 Allen Liu, Ankur Moitra

In this work, we study the problem of community detection in the stochastic block model with adversarial node corruptions.

Community Detection, Stochastic Block Model
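
A minimal uncorrupted baseline: sample a two-community stochastic block model and recover the communities from the sign of the adjacency matrix's second eigenvector. The parameters below are arbitrary, and no adversarial node corruptions (the paper's focus) are modeled.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-community SBM: edge probability p within a community, q across communities.
n, p, q = 400, 0.10, 0.02
labels = np.array([0] * (n // 2) + [1] * (n // 2))
P = np.where(labels[:, None] == labels[None, :], p, q)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric adjacency, no self-loops

# The sign pattern of the second eigenvector approximately splits the communities.
_, vecs = np.linalg.eigh(A)
guess = (vecs[:, -2] > 0).astype(int)              # eigenvector of the 2nd largest eigenvalue
accuracy = max(np.mean(guess == labels), np.mean(guess != labels))
print("recovery accuracy:", accuracy)
```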

When Does Adaptivity Help for Quantum State Learning?

no code implementations 10 Jun 2022 Sitan Chen, Brice Huang, Jerry Li, Allen Liu, Mark Sellke

We give an adaptive algorithm that outputs a state which is $\gamma$-close in infidelity to $\rho$ using only $\tilde{O}(d^3/\gamma)$ copies, which is optimal for incoherent measurements.

Open-Ended Question Answering

Tight Bounds for Quantum State Certification with Incoherent Measurements

no code implementations 14 Apr 2022 Sitan Chen, Brice Huang, Jerry Li, Allen Liu

When $\sigma$ is the maximally mixed state $\frac{1}{d} I_d$, this is known as mixedness testing.

Semi-Random Sparse Recovery in Nearly-Linear Time

no code implementations 8 Mar 2022 Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian

We design a new iterative method tailored to the geometry of sparse recovery which is provably robust to our semi-random model.
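
For reference, a generic (non-robust) sparse recovery baseline: recover a sparse vector from noisy Gaussian linear measurements with the Lasso. The design, sparsity level, and regularization below are arbitrary, and this is not the semi-random-robust iterative method of the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)

# Sparse ground truth and noisy linear measurements y = A x + z.
n, d, s = 200, 500, 10
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=s) * rng.uniform(1.0, 2.0, size=s)
y = A @ x_true + 0.01 * rng.standard_normal(n)

# L1-regularized least squares (the convex relaxation of sparse recovery).
x_hat = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A, y).coef_
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
print("recovered support size:", int(np.sum(np.abs(x_hat) > 1e-3)))
```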

The Pareto Frontier of Instance-Dependent Guarantees in Multi-Player Multi-Armed Bandits with no Communication

no code implementations 19 Feb 2022 Allen Liu, Mark Sellke

We ask whether it is possible to obtain optimal instance-dependent regret $\tilde{O}(1/\Delta)$, where $\Delta$ is the gap between the $m$-th and $(m+1)$-st best arms.

Multi-Armed Bandits
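
The gap-dependent notion of regret can be seen in the classical single-player UCB1 baseline, whose pseudo-regret grows like $\log(T)/\Delta$; the paper's multi-player, no-communication setting is far more delicate, and the arm means below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# UCB1 on Bernoulli arms; Delta = 0.1 is the gap between the two best arms.
means = np.array([0.6, 0.5, 0.4])
K, T = len(means), 20000
counts = np.zeros(K); sums = np.zeros(K)

for t in range(T):
    if t < K:
        a = t                                             # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
        a = int(np.argmax(ucb))
    reward = float(rng.random() < means[a])
    counts[a] += 1; sums[a] += reward

pseudo_regret = T * means.max() - (counts * means).sum()
print("pseudo-regret after", T, "rounds:", pseudo_regret)
```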

Robust Voting Rules from Algorithmic Robust Statistics

no code implementations 13 Dec 2021 Allen Liu, Ankur Moitra

Maximum likelihood estimation furnishes powerful insights into voting theory and the design of voting rules.

Clustering Mixtures with Almost Optimal Separation in Polynomial Time

no code implementations 1 Dec 2021 Jerry Li, Allen Liu

We give the first algorithm which runs in polynomial time, and which almost matches this guarantee.

Clustering

Margin-Independent Online Multiclass Learning via Convex Geometry

no code implementations NeurIPS 2021 Guru Guruganesh, Allen Liu, Jon Schneider, Joshua Wang

We consider the problem of multi-class classification, where adversarially chosen queries arrive in a stream and each must be assigned a label online.

Binary Classification, Classification +1
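
To fix the interaction protocol (see a query, predict a label, observe the truth, update), here is the classic multiclass perceptron run on a synthetic stream generated by a hidden linear rule; it is margin-dependent, unlike the paper's algorithm, and everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

d, K, T = 5, 3, 2000
W = np.zeros((K, d))                          # learner's weight matrix, one row per class
W_star = rng.standard_normal((K, d))          # hidden rule standing in for the adversary
mistakes = 0

for t in range(T):
    x = rng.standard_normal(d)                # query arrives
    y = int(np.argmax(W_star @ x))            # its true label, revealed after prediction
    pred = int(np.argmax(W @ x))              # online prediction
    if pred != y:
        mistakes += 1
        W[y] += x                             # promote the true class
        W[pred] -= x                          # demote the predicted class

print("mistakes over", T, "rounds:", mistakes)
```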

Robust Model Selection and Nearly-Proper Learning for GMMs

no code implementations 5 Jun 2021 Jerry Li, Allen Liu, Ankur Moitra

Given $\textsf{poly}(k/\epsilon)$ samples from a distribution that is $\epsilon$-close in TV distance to a GMM with $k$ components, we can construct a GMM with $\widetilde{O}(k)$ components that approximates the distribution to within $\widetilde{O}(\epsilon)$ in $\textsf{poly}(k/\epsilon)$ time.

Learning Theory, Model Selection
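
As a non-robust point of contrast, the standard recipe is to fit GMMs with EM and pick the number of components by an information criterion; the sketch below does exactly that on clean synthetic data, whereas the paper's guarantees survive $\epsilon$-corruption in TV distance.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Clean synthetic data from three well-separated 2-D Gaussian components.
X = np.vstack([
    rng.normal(loc=(-3.0, 0.0), scale=1.0, size=(500, 2)),
    rng.normal(loc=(+3.0, 0.0), scale=1.0, size=(500, 2)),
    rng.normal(loc=(0.0, 5.0), scale=0.8, size=(500, 2)),
])

# Fit GMMs with EM for a range of k and keep the one with the lowest BIC.
best_k, best_bic = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gmm.bic(X)
    if bic < best_bic:
        best_k, best_bic = k, bic
print("BIC-selected number of components:", best_k)
```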

Algorithms from Invariants: Smoothed Analysis of Orbit Recovery over $SO(3)$

no code implementations 4 Jun 2021 Allen Liu, Ankur Moitra

Our main result is a quasi-polynomial time algorithm for orbit recovery over $SO(3)$ in this model.

Electron Tomography, Tensor Decomposition

Learning GMMs with Nearly Optimal Robustness Guarantees

no code implementations 19 Apr 2021 Allen Liu, Ankur Moitra

In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with $k$ components from $\epsilon$-corrupted samples up to accuracy $\widetilde{O}(\epsilon)$ in total variation distance for any constant $k$ and with mild assumptions on the mixture.

Myersonian Regression

no code implementations NeurIPS 2020 Allen Liu, Renato Paes Leme, Jon Schneider

Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression.

regression
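
The discontinuity driving the problem is easy to see: posting price $p$ against a buyer of value $y$ earns $p$ if $p \le y$ and nothing otherwise, so barely overshooting the value destroys all revenue. A tiny sketch of that payoff shape (not the paper's algorithm):

```python
def revenue(p, y):
    """Revenue of posting price p to a buyer with value y: p if the buyer accepts, else 0."""
    return p if p <= y else 0.0

y = 1.0
for p in (0.90, 0.99, 1.00, 1.01):
    print(f"price {p:.2f} -> revenue {revenue(p, y):.2f}")
```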

Settling the Robust Learnability of Mixtures of Gaussians

no code implementations 6 Nov 2020 Allen Liu, Ankur Moitra

This work represents a natural coalescence of two important lines of work: learning mixtures of Gaussians and algorithmic robust statistics.

Tensor Completion Made Practical

no code implementations NeurIPS 2020 Allen Liu, Ankur Moitra

We show strong provable guarantees, including that our algorithm converges linearly to the true tensors even when the factors are highly correlated, and that it can be implemented in nearly linear time.

Matrix Completion

Optimal Contextual Pricing and Extensions

no code implementations 3 Mar 2020 Allen Liu, Renato Paes Leme, Jon Schneider

We provide a generic algorithm with $O(d^2)$ regret where $d$ is the covering dimension of this class.

Efficiently Learning Mixtures of Mallows Models

no code implementations 17 Aug 2018 Allen Liu, Ankur Moitra

Mixtures of Mallows models are a popular generative model for ranking data coming from a heterogeneous population.

Recommendation Systems
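
To make the generative model concrete, here is a standard repeated-insertion sampler for a mixture of Mallows models: item $i$ of a component's reference ranking is inserted at position $j$ with probability proportional to $\phi^{i-j}$. The centers, mixture weights, and dispersions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_mallows(center, phi):
    """Repeated insertion sampler for a Mallows model with reference ranking `center`."""
    ranking = []
    for i, item in enumerate(center, start=1):
        weights = phi ** (i - np.arange(1, i + 1))    # insertion positions 1..i
        j = rng.choice(i, p=weights / weights.sum())  # 0-indexed insertion slot
        ranking.insert(j, item)
    return ranking

# A two-component mixture over rankings of 5 items.
centers = [[0, 1, 2, 3, 4], [4, 3, 2, 1, 0]]
mix_weights, phis = [0.6, 0.4], [0.3, 0.5]
component = rng.choice(2, p=mix_weights)
print("sampled ranking:", sample_mallows(centers[component], phis[component]))
```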
