
no code implementations • 10 Jun 2022 • Sitan Chen, Brice Huang, Jerry Li, Allen Liu, Mark Sellke

We consider the classic question of state tomography: given copies of an unknown quantum state $\rho\in\mathbb{C}^{d\times d}$, output $\widehat{\rho}$ for which $\|\rho - \widehat{\rho}\|_{\mathsf{tr}} \le \varepsilon$.
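The closeness criterion above is the trace-norm distance. A minimal NumPy sketch of how one might compute it (using the convention $\|A\|_{\mathsf{tr}} = \sum_i |\lambda_i(A)|$ for Hermitian $A$; some texts include an extra factor of $1/2$); `random_density_matrix` is a hypothetical helper, not from the paper:

```python
import numpy as np

def trace_distance(rho, sigma):
    # Trace-norm distance ||rho - sigma||_tr: for a Hermitian difference,
    # this is the sum of absolute values of its eigenvalues.
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return np.sum(np.abs(eigvals))

def random_density_matrix(d, rng):
    # Sample a random d x d density matrix (Hermitian, PSD, unit trace).
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M)

rng = np.random.default_rng(0)
rho = random_density_matrix(4, rng)
# A perfect tomography estimate would have trace distance 0 to rho.
assert trace_distance(rho, rho) < 1e-12
```

With this convention, two orthogonal pure states are at trace distance 2, the maximum possible.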

no code implementations • 31 May 2022 • Sitan Chen, Jerry Li, Yuanzhi Li

Motivated by the recent empirical successes of deep generative models, we study the computational complexity of the following unsupervised learning problem.

no code implementations • 14 Apr 2022 • Sitan Chen, Brice Huang, Jerry Li, Allen Liu

When $\sigma$ is the maximally mixed state $\frac{1}{d} I_d$, this is known as mixedness testing.
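A small sketch of the maximally mixed state and why it is a natural reference point for testing; the pure-state comparison is an illustrative choice, not from the paper:

```python
import numpy as np

d = 4
# The maximally mixed state on a d-dimensional system: (1/d) * I_d.
sigma_mm = np.eye(d) / d

# It is a valid density matrix: PSD with unit trace.
assert np.isclose(np.trace(sigma_mm), 1.0)

# Its trace distance to a pure state |e_1><e_1| is 2(d-1)/d:
# the difference has eigenvalues 1 - 1/d (once) and -1/d (d-1 times).
pure = np.zeros((d, d))
pure[0, 0] = 1.0
dist = np.sum(np.abs(np.linalg.eigvalsh(pure - sigma_mm)))
assert np.isclose(dist, 2 * (d - 1) / d)
```

As $d$ grows, pure states sit near the maximal trace distance of 2 from the maximally mixed state, which is what mixedness testing exploits.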

no code implementations • 8 Apr 2022 • Sitan Chen, Jerry Li, Yuanzhi Li, Anru R. Zhang

Our first main result is a polynomial-time algorithm for learning quadratic transformations of Gaussians in a smoothed setting.

no code implementations • 10 Feb 2022 • Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model.

no code implementations • ICLR 2022 • Sitan Chen, Jerry Li, Yuanzhi Li, Raghu Meka

Arguably the most fundamental question in the theory of generative adversarial networks (GANs) is to understand to what extent GANs can actually learn the underlying distribution.

1 code implementation • 1 Dec 2021 • Hsin-Yuan Huang, Michael Broughton, Jordan Cotler, Sitan Chen, Jerry Li, Masoud Mohseni, Hartmut Neven, Ryan Babbush, Richard Kueng, John Preskill, Jarrod R. McClean

Quantum technology has the potential to revolutionize how we acquire and process experimental data to learn about the physical world.

no code implementations • NeurIPS 2021 • Sitan Chen, Adam Klivans, Raghu Meka

While the problem of PAC learning neural networks from samples has received considerable attention in recent years, in certain settings like model extraction attacks, it is reasonable to imagine having more than just the ability to observe random labeled examples.

no code implementations • 11 Nov 2021 • Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

In a pioneering work, Schick and Mitter gave provable guarantees when the measurement noise is a known infinitesimal perturbation of a Gaussian and raised the important question of whether one can get similar guarantees for large and unknown perturbations.

no code implementations • 10 Nov 2021 • Sitan Chen, Jordan Cotler, Hsin-Yuan Huang, Jerry Li

We prove that given the ability to make entangled measurements on at most $k$ replicas of an $n$-qubit state $\rho$ simultaneously, there is a property of $\rho$ which requires at least order $2^n$ measurements to learn.

no code implementations • 10 Nov 2021 • Sitan Chen, Jordan Cotler, Hsin-Yuan Huang, Jerry Li

We study the power of quantum memory for learning properties of quantum systems and dynamics, which is of great importance in physics and chemistry.

no code implementations • 8 Nov 2021 • Sitan Chen, Adam R Klivans, Raghu Meka

In this work we give the first polynomial-time algorithm for learning one-hidden-layer neural networks with arbitrary activations, provided black-box access to the network.

no code implementations • 25 Feb 2021 • Sitan Chen, Jerry Li, Ryan O'Donnell

We revisit the basic problem of quantum state certification: given copies of an unknown mixed state $\rho\in\mathbb{C}^{d\times d}$ and the description of a mixed state $\sigma$, decide whether $\sigma = \rho$ or $\|\sigma - \rho\|_{\mathsf{tr}} \ge \epsilon$.

no code implementations • 2 Feb 2021 • Sitan Chen, Zhao Song, Runzhou Tao, Ruizhe Zhang

As this problem is hard in the worst-case, we study a natural average-case variant that arises in the context of these reconstruction attacks: $\mathbf{M} = \mathbf{W}\mathbf{W}^{\top}$ for $\mathbf{W}$ a random Boolean matrix with $k$-sparse rows, and the goal is to recover $\mathbf{W}$ up to column permutation.
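A minimal NumPy sketch of this average-case instance (the generator `random_sparse_boolean_instance` is a hypothetical name for illustration): it also shows why recovery of $\mathbf{W}$ is only possible up to column permutation, since $\mathbf{M}$ is invariant under permuting the columns of $\mathbf{W}$.

```python
import numpy as np

def random_sparse_boolean_instance(n, m, k, rng):
    # Sample W: an n x m Boolean matrix whose rows are k-sparse,
    # and return the observed matrix M = W W^T.
    W = np.zeros((n, m), dtype=int)
    for i in range(n):
        support = rng.choice(m, size=k, replace=False)
        W[i, support] = 1
    return W, W @ W.T

rng = np.random.default_rng(1)
W, M = random_sparse_boolean_instance(n=6, m=10, k=3, rng=rng)

# Diagonal entries of M equal the row sparsity k.
assert np.all(np.diag(M) == 3)

# M is unchanged by any column permutation P of W, since P P^T = I.
P = np.eye(10, dtype=int)[rng.permutation(10)]
assert np.array_equal((W @ P) @ (W @ P).T, M)
```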

no code implementations • ICLR 2021 • Sitan Chen, Xiaoxiao Li, Zhao Song, Danyang Zhuo

In this work, we examine the security of InstaHide, a scheme recently proposed by [Huang, Song, Li and Arora, ICML'20] for preserving the security of private datasets in the context of distributed learning.

no code implementations • NeurIPS 2020 • Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

In this paper, we revisit the problem of distribution-independently learning halfspaces under Massart noise with rate $\eta$.

no code implementations • 23 Nov 2020 • Sitan Chen, Xiaoxiao Li, Zhao Song, Danyang Zhuo

In this work, we examine the security of InstaHide, a scheme recently proposed by [Huang, Song, Li and Arora, ICML'20] for preserving the security of private datasets in the context of distributed learning.

no code implementations • 8 Oct 2020 • Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

Our approach is based on a novel alternating minimization scheme that interleaves ordinary least-squares with a simple convex program that finds the optimal reweighting of the distribution under a spectral constraint.

no code implementations • 28 Sep 2020 • Sitan Chen, Adam R. Klivans, Raghu Meka

These results provably cannot be obtained using gradient-based methods and give the first example of a class of efficiently learnable neural networks that gradient descent will fail to learn.

1 code implementation • 8 Jun 2020 • Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

In particular, we study the problem of learning halfspaces under Massart noise with rate $\eta$.
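A hedged NumPy sketch of the Massart noise model: each label $\mathrm{sign}(\langle w, x\rangle)$ is flipped independently with a point-dependent probability $\eta_x \le \eta$. The particular flip rule below (flip near the boundary, never far from it) is one illustrative adversary among many, not the paper's construction:

```python
import numpy as np

def massart_labels(X, w, eta, rng):
    # Clean labels are sign(<w, x>); each is flipped with probability
    # eta_x <= eta. Here eta_x = eta near the boundary, 0 elsewhere.
    margins = X @ w
    clean = np.sign(margins)
    flip_prob = np.where(np.abs(margins) < 0.5, eta, 0.0)
    flips = rng.random(len(X)) < flip_prob
    return np.where(flips, -clean, clean)

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
w = np.ones(5) / np.sqrt(5)
y = massart_labels(X, w, eta=0.3, rng=rng)

# The Massart guarantee: each label disagrees with sign(<w, x>) with
# probability at most eta, so the noisy error rate stays below eta.
clean = np.sign(X @ w)
assert np.mean(y != clean) <= 0.3
```

The difficulty for learning is precisely that $\eta_x$ can vary adversarially from point to point, so the noise is neither uniform nor known.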

no code implementations • 28 Apr 2020 • Sitan Chen, Raghu Meka

We give an algorithm that learns the polynomial within accuracy $\epsilon$ with sample complexity that is roughly $N = O_{r, d}(n \log^2(1/\epsilon) (\log n)^d)$ and runtime $O_{r, d}(N n^2)$.
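A quick numerical illustration of how this bound scales (the constant hidden by $O_{r,d}$ is hypothetically set to 1, so the absolute numbers are only indicative):

```python
import numpy as np

# Sample-complexity bound N = O_{r,d}(n log^2(1/eps) (log n)^d),
# with the O_{r,d} constant taken to be 1 for illustration.
def sample_bound(n, eps, d):
    return n * np.log(1 / eps) ** 2 * np.log(n) ** d

# The dependence on 1/eps is polylogarithmic: doubling log(1/eps)
# (e.g. eps = 1e-3 -> 1e-6) only quadruples the bound.
ratio = sample_bound(1000, 1e-6, 2) / sample_bound(1000, 1e-3, 2)
```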

1 code implementation • NeurIPS 2020 • Sitan Chen, Jerry Li, Ankur Moitra

We revisit the problem of learning from untrusted batches introduced by Qiao and Valiant [QV17].

no code implementations • 16 Dec 2019 • Sitan Chen, Jerry Li, Zhao Song

In this paper, we give the first algorithm for learning an MLR that runs in time which is sub-exponential in $k$.

no code implementations • 5 Nov 2019 • Sitan Chen, Jerry Li, Ankur Moitra

When $k = 1$ this is the standard robust univariate density estimation setting and it is well-understood that $\Omega(\epsilon)$ error is unavoidable.

no code implementations • 17 Mar 2018 • Sitan Chen, Ankur Moitra

In contrast, as we will show, mixtures of $k$ subcubes are uniquely determined by their degree $2 \log k$ moments and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on $1/\epsilon$ of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions.
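A minimal NumPy sketch of a mixture of subcubes over $\{0,1\}^n$: each component fixes some coordinates and is uniform on the rest, and low-degree moments such as $\mathbb{E}[x_i]$ already carry information about the mixture. The sampler and the two-component instance below are illustrative constructions, not from the paper:

```python
import numpy as np

def sample_subcube_mixture(centers, free_masks, weights, m, rng):
    # Sample m points from a mixture of subcubes of {0,1}^n.
    # Component j fixes the coordinates where free_masks[j] is False
    # to centers[j], and is uniform on the remaining (free) coordinates.
    n = centers.shape[1]
    comps = rng.choice(len(weights), size=m, p=weights)
    U = rng.integers(0, 2, size=(m, n))
    return np.where(free_masks[comps], U, centers[comps])

rng = np.random.default_rng(3)
# Two subcubes in {0,1}^3: one fixes x0 = 1, the other fixes x0 = 0, x1 = 1.
centers = np.array([[1, 0, 0], [0, 1, 0]])
free = np.array([[False, True, True], [False, False, True]])
X = sample_subcube_mixture(centers, free, np.array([0.5, 0.5]), 2000, rng)

# A first moment already separates the components here: x0 = 1 exactly
# on the first component, so E[x0] = 0.5 under equal mixing weights.
assert abs(X[:, 0].mean() - 0.5) < 0.05
```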
