Search Results for author: Samson Wang

Found 5 papers, 1 paper with code

Exponential concentration in quantum kernel methods

no code implementations • 23 Aug 2022 • Supanut Thanasilp, Samson Wang, M. Cerezo, Zoë Holmes

Lastly, we show that when dealing with classical data, training a parametrized data embedding with a kernel alignment method is also susceptible to exponential concentration.
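The concentration phenomenon can be illustrated with a toy numerical sketch (this is an illustration of the general effect, not the paper's construction): fidelity-kernel values between randomly embedded states cluster around a fixed value, with fluctuations shrinking exponentially in the number of qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(n_qubits, rng):
    """Approximate a Haar-random pure state via a normalized complex Gaussian vector."""
    dim = 2 ** n_qubits
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def kernel_samples(n_qubits, n_pairs, rng):
    """Fidelity-kernel values k = |<psi|phi>|^2 for random pairs of embedded states."""
    return np.array([
        abs(np.vdot(random_state(n_qubits, rng), random_state(n_qubits, rng))) ** 2
        for _ in range(n_pairs)
    ])

for n in (2, 4, 6, 8):
    k = kernel_samples(n, 500, rng)
    print(f"n={n} qubits: mean k = {k.mean():.4f}, std = {k.std():.4f}")
```

For Haar-random states both the mean (roughly 1/2^n) and the spread of the kernel values collapse as qubits are added, which is the practical obstruction the paper studies: all kernel entries become nearly indistinguishable at finite shot budgets.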

Quantum Machine Learning

Subtleties in the trainability of quantum machine learning models

no code implementations • 27 Oct 2021 • Supanut Thanasilp, Samson Wang, Nhat A. Nghiem, Patrick J. Coles, M. Cerezo

In this work we bridge the two frameworks and show that gradient scaling results for VQAs can also be applied to study the gradient scaling of QML models.

BIG-bench Machine Learning • Quantum Machine Learning +1

Can Error Mitigation Improve Trainability of Noisy Variational Quantum Algorithms?

no code implementations • 2 Sep 2021 • Samson Wang, Piotr Czarnik, Andrew Arrasmith, M. Cerezo, Lukasz Cincio, Patrick J. Coles

On the other hand, our positive results for CDR highlight the possibility of engineering error mitigation methods to improve trainability.

regression

Absence of Barren Plateaus in Quantum Convolutional Neural Networks

1 code implementation • 5 Nov 2020 • Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger, Patrick J. Coles

To derive our results we introduce a novel graph-based method to analyze expectation values over Haar-distributed unitaries, which will likely be useful in other contexts.
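A standard numerical sanity check related to such calculations (this is the textbook first-moment identity checked by Monte Carlo, not the paper's graph-based method) is that averaging U O U† over Haar-random unitaries gives (Tr O / d) · I:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d, rng):
    """Sample a Haar-random d x d unitary: QR of a complex Gaussian matrix,
    with column phases fixed by the diagonal of R (Mezzadri's recipe)."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

d = 4  # two qubits
Z1 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)  # Z on the first qubit

# First-moment identity: E_U[U O U^dag] = (Tr O / d) * I.
# Here Tr(Z1) = 0, so the average should converge to the zero matrix.
n_samples = 2000
avg = np.zeros((d, d), dtype=complex)
for _ in range(n_samples):
    U = haar_unitary(d, rng)
    avg += U @ Z1 @ U.conj().T
avg /= n_samples
print(f"max |entry| of averaged U Z1 U^dag: {np.abs(avg).max():.3f}")
```

The maximum entry shrinks toward zero as the sample count grows, consistent with the analytic average; higher moments of Haar averages are what typically require heavier machinery such as the paper's graph-based technique.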

Noise-Induced Barren Plateaus in Variational Quantum Algorithms

no code implementations • 28 Jul 2020 • Samson Wang, Enrico Fontana, M. Cerezo, Kunal Sharma, Akira Sone, Lukasz Cincio, Patrick J. Coles

Specifically, for the local Pauli noise considered, we prove that the gradient vanishes exponentially in the number of qubits $n$ if the depth of the ansatz grows linearly with $n$.
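The mechanism can be caricatured in a single-qubit toy model (a sketch assuming depolarizing noise after each layer, far simpler than the paper's multi-qubit setting): each noise application contracts the Bloch vector by a factor (1 − p), so once the circuit is deep enough, cost-function gradients are suppressed exponentially in depth.

```python
import numpy as np

def noisy_cost(theta, depth, p):
    """Toy model: `depth` layers of Rx(theta), each followed by single-qubit
    depolarizing noise of strength p, starting from |0>; cost is <Z>."""
    r = np.array([0.0, 0.0, 1.0])  # Bloch vector of |0>
    c, s = np.cos(theta), np.sin(theta)
    for _ in range(depth):
        # Rx rotation acts on the (y, z) components of the Bloch vector
        r = np.array([r[0], c * r[1] - s * r[2], s * r[1] + c * r[2]])
        r = (1 - p) * r  # depolarizing noise contracts the Bloch vector
    return r[2]

def grad(theta, depth, p, eps=1e-6):
    """Central finite-difference derivative of the cost w.r.t. theta."""
    return (noisy_cost(theta + eps, depth, p)
            - noisy_cost(theta - eps, depth, p)) / (2 * eps)

for depth in (5, 20, 40, 80):
    print(f"depth={depth:3d}: |dC/dtheta| = {abs(grad(0.3, depth, 0.1)):.3e}")
```

In this toy model the gradient magnitude decays roughly like depth · (1 − p)^depth, so the exponential contraction eventually dominates; the paper's result is the multi-qubit analogue, where linear-in-n depth yields gradients exponentially small in n.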

Variational Quantum Algorithms (VQA)
