no code implementations • 23 Aug 2022 • Supanut Thanasilp, Samson Wang, M. Cerezo, Zoë Holmes
Lastly, we show that when dealing with classical data, training a parametrized data embedding with a kernel alignment method is also susceptible to exponential concentration.
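A minimal numpy sketch of the underlying effect (not the paper's construction): overlaps of Haar-random n-qubit states concentrate around 1/2^n, so fidelity-type kernel entries become exponentially hard to resolve as n grows. The sampling routine and names below are illustrative assumptions.

```python
import numpy as np

def random_state(n_qubits, rng):
    """Sample a Haar-random pure state on n qubits."""
    dim = 2 ** n_qubits
    psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return psi / np.linalg.norm(psi)

rng = np.random.default_rng(0)
for n in range(2, 12, 2):
    overlaps = [
        np.abs(np.vdot(random_state(n, rng), random_state(n, rng))) ** 2
        for _ in range(500)
    ]
    # Mean ~ 1/2^n: fidelity-kernel values concentrate exponentially in n.
    print(f"n={n:2d}  mean overlap={np.mean(overlaps):.2e}  (1/2^n={2**-n:.2e})")
```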
no code implementations • 27 Oct 2021 • Supanut Thanasilp, Samson Wang, Nhat A. Nghiem, Patrick J. Coles, M. Cerezo
In this work we bridge the two frameworks and show that gradient scaling results for VQAs carry over to the study of gradient scaling in QML models.
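As a rough illustration of the kind of gradient-scaling diagnostic such results concern, one can empirically estimate the variance of a cost gradient for a toy embedding-plus-trainable-layer model over random parameters. The PennyLane sketch below is an assumption-laden toy (circuit layout, observable, and sample count are all choices of this example), not the paper's model.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def model(params, x):
    # Fixed data embedding followed by a trainable layer.
    for w in range(n_qubits):
        qml.RY(x[w], wires=w)
    for w in range(n_qubits):
        qml.RY(params[w], wires=w)
    for w in range(n_qubits - 1):
        qml.CNOT(wires=[w, w + 1])
    return qml.expval(qml.PauliZ(0))

grad_fn = qml.grad(model, argnum=0)
x = np.array([0.3] * n_qubits, requires_grad=False)

# Estimate Var[dC/dtheta_0] over randomly initialized parameters.
grads = [
    grad_fn(np.random.uniform(0, 2 * np.pi, n_qubits), x)[0]
    for _ in range(200)
]
print("Var[dC/dtheta_0] =", np.var(grads))
```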
no code implementations • 2 Sep 2021 • Samson Wang, Piotr Czarnik, Andrew Arrasmith, M. Cerezo, Lukasz Cincio, Patrick J. Coles
On the other hand, our positive results for CDR highlight the possibility of engineering error mitigation methods to improve trainability.
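For context, Clifford Data Regression (CDR) corrects a noisy expectation value with a linear map learned from classically simulable near-Clifford training circuits. A minimal sketch of that post-processing step, on purely synthetic stand-in data (the toy noise model and numbers below are assumptions of this example):

```python
import numpy as np

# Stand-in training data from near-Clifford circuits: noisy expectation
# values paired with exact (classically simulated) ones.
rng = np.random.default_rng(1)
exact = rng.uniform(-1, 1, 50)
noisy = 0.6 * exact + 0.05 + 0.02 * rng.standard_normal(50)  # toy noise

# CDR fits a linear ansatz f(x) = a*x + b by least squares ...
a, b = np.polyfit(noisy, exact, deg=1)

# ... and applies it to the noisy value from the circuit of interest.
noisy_target = 0.31
mitigated = a * noisy_target + b
print(f"mitigated estimate: {mitigated:.3f}")
```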
1 code implementation • 5 Nov 2020 • Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger, Patrick J. Coles
To derive our results, we introduce a novel graph-based method to analyze expectation values over Haar-distributed unitaries, which will likely be useful in other contexts.
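The graph-based method itself is analytic, but the first Haar moment it manipulates is easy to check numerically: for Haar-random U and observable O on a d-dimensional space, E[⟨0|U†OU|0⟩] = Tr(O)/d. A sketch using the standard QR-based Haar sampler (the helper name is ours):

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample from the Haar measure on U(d) via QR of a Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # fix the column-wise phase ambiguity

rng = np.random.default_rng(2)
d = 8                                 # e.g. 3 qubits
O = np.diag(rng.uniform(-1, 1, d))    # some Hermitian observable
ket0 = np.zeros(d); ket0[0] = 1.0

vals = []
for _ in range(2000):
    psi = haar_unitary(d, rng) @ ket0
    vals.append(np.real(np.vdot(psi, O @ psi)))

# Sample mean should approach Tr(O)/d.
print("sample mean:", np.mean(vals), " Tr(O)/d:", np.trace(O).real / d)
```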
no code implementations • 28 Jul 2020 • Samson Wang, Enrico Fontana, M. Cerezo, Kunal Sharma, Akira Sone, Lukasz Cincio, Patrick J. Coles
Specifically, for the local Pauli noise considered, we prove that the gradient vanishes exponentially in the number of qubits $n$ if the depth of the ansatz grows linearly with $n$.
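A qualitative way to observe this noise-induced flattening numerically is to insert a local depolarizing channel after each layer and watch the gradient magnitude shrink as the circuit deepens. The PennyLane sketch below is a toy illustration; the noise strength p, layout, and depths are assumptions of this example, not the paper's noise model.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, p = 3, 0.05  # qubit count and per-layer depolarizing strength (assumed)
dev = qml.device("default.mixed", wires=n_qubits)

def make_circuit(depth):
    @qml.qnode(dev)
    def circuit(params):
        for layer in range(depth):
            for w in range(n_qubits):
                qml.RY(params[layer, w], wires=w)
            for w in range(n_qubits - 1):
                qml.CNOT(wires=[w, w + 1])
            for w in range(n_qubits):
                qml.DepolarizingChannel(p, wires=w)  # local Pauli-type noise
        return qml.expval(qml.PauliZ(0))
    return circuit

for depth in (2, 8, 32):
    circuit = make_circuit(depth)
    params = np.random.uniform(0, 2 * np.pi, (depth, n_qubits))
    g = qml.grad(circuit)(params)
    # Gradient magnitudes contract toward zero as depth grows.
    print(f"depth={depth:3d}  max |grad| = {np.max(np.abs(g)):.2e}")
```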