Information-theoretic bounds on quantum advantage in machine learning

We study the performance of classical and quantum machine learning (ML) models in predicting outcomes of physical experiments. The experiments depend on an input parameter $x$ and involve execution of a (possibly unknown) quantum process $\mathcal{E}$. Our figure of merit is the number of runs of $\mathcal{E}$ required to achieve a desired prediction performance. We consider classical ML models that perform a measurement and record the classical outcome after each run of $\mathcal{E}$, and quantum ML models that can access $\mathcal{E}$ coherently to acquire quantum data; the classical or quantum data is then used to predict outcomes of future experiments. We prove that for any input distribution $\mathcal{D}(x)$, a classical ML model can provide accurate predictions on average by accessing $\mathcal{E}$ a number of times comparable to the number required by the optimal quantum ML model. In contrast, for achieving accurate predictions on all inputs, we prove that exponential quantum advantage is possible. For example, to predict expectation values of all Pauli observables in an $n$-qubit system $\rho$, classical ML models require $2^{\Omega(n)}$ copies of $\rho$, but we present a quantum ML model using only $\mathcal{O}(n)$ copies. Our results clarify where quantum advantage is possible and highlight the potential for classical ML models to address challenging quantum problems in physics and chemistry.
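The quantum advantage in the Pauli-observable example comes from joint measurements across pairs of copies of $\rho$, which no single-copy (classical) strategy can reproduce. One standard two-copy strategy of this flavor is Bell sampling: measure qubit $i$ of one copy together with qubit $i$ of a second copy in the Bell basis. Every operator $P \otimes P$ with $P$ a Pauli string is diagonal in this basis, so a single stream of outcomes yields estimates of $\mathrm{tr}(P\rho)^2$ for all $4^n$ Pauli strings simultaneously. The Python/NumPy sketch below simulates this numerically as a minimal illustration; it is not the paper's exact protocol or analysis, and the system size $n = 2$, the shot count $T$, and helper names such as embed_pair and estimate_sq are illustrative choices.

import itertools
from functools import reduce
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit Pauli matrices.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

# Bell states |Phi+>, |Phi->, |Psi+>, |Psi->, and the eigenvalue of
# P (x) P on each of them, for P in {I, X, Y, Z}.
BELL = [np.array(v, dtype=complex) / np.sqrt(2)
        for v in ([1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0])]
EIG = [
    {"I": 1, "X": 1, "Y": -1, "Z": 1},    # |Phi+>
    {"I": 1, "X": -1, "Y": 1, "Z": 1},    # |Phi->
    {"I": 1, "X": 1, "Y": 1, "Z": -1},    # |Psi+>
    {"I": 1, "X": -1, "Y": -1, "Z": -1},  # |Psi->
]

def random_density_matrix(n):
    d = 2 ** n
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def embed_pair(op4, i, n):
    # Embed a 2-qubit operator acting on qubit i of copy A and qubit i of
    # copy B (qubits i and n+i of the 2n-qubit register), identity elsewhere.
    total, dim = 2 * n, 4 ** n
    full = np.zeros((dim, dim), dtype=complex)
    for r in range(dim):
        for c in range(dim):
            rb = [(r >> (total - 1 - k)) & 1 for k in range(total)]
            cb = [(c >> (total - 1 - k)) & 1 for k in range(total)]
            if all(rb[k] == cb[k] for k in range(total) if k not in (i, n + i)):
                full[r, c] = op4[2 * rb[i] + rb[n + i], 2 * cb[i] + cb[n + i]]
    return full

n = 2                      # illustrative system size
rho = random_density_matrix(n)
sigma = np.kron(rho, rho)  # two copies: qubits A_0..A_{n-1}, then B_0..B_{n-1}

# One round of the protocol returns a Bell-basis outcome for each qubit pair.
outcomes = list(itertools.product(range(4), repeat=n))
probs = []
for b in outcomes:
    proj = np.eye(4 ** n, dtype=complex)
    for i, bi in enumerate(b):
        v = BELL[bi]
        proj = proj @ embed_pair(np.outer(v, v.conj()), i, n)
    probs.append(np.real(np.trace(proj @ sigma)))
probs = np.clip(np.array(probs), 0.0, None)
probs /= probs.sum()

T = 20000                  # rounds; each round consumes two copies of rho
samples = rng.choice(len(outcomes), size=T, p=probs)

def estimate_sq(pauli_string):
    # Empirical mean of prod_i eig(b_i, P_i), which converges to tr(P rho)^2
    # because tr((P (x) P)(rho (x) rho)) = tr(P rho)^2 and P (x) P is
    # diagonal in the pairwise Bell basis.
    total = 0.0
    for s in samples:
        b = outcomes[s]
        v = 1
        for i, p in enumerate(pauli_string):
            v *= EIG[b[i]][p]
        total += v
    return total / len(samples)

for P in ["XX", "ZI", "YZ", "XY"]:
    exact = np.real(np.trace(reduce(np.kron, [PAULIS[c] for c in P]) @ rho)) ** 2
    print(f"{P}: exact tr(P rho)^2 = {exact:+.3f}, estimate = {estimate_sq(P):+.3f}")

Roughly, each of the $4^n$ empirical means concentrates around $\mathrm{tr}(P\rho)^2$ at rate $1/\sqrt{T}$, so a union bound over all Pauli strings requires only $T = \mathcal{O}(\log 4^n) = \mathcal{O}(n)$ two-copy rounds at fixed accuracy, whereas single-copy measurement strategies face the $2^{\Omega(n)}$ lower bound stated above.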
