no code implementations • 14 Dec 2023 • M. Cerezo, Martin Larocca, Diego García-Martín, N. L. Diaz, Paolo Braccia, Enrico Fontana, Manuel S. Rudolph, Pablo Bermejo, Aroosa Ijaz, Supanut Thanasilp, Eric R. Anschuetz, Zoë Holmes
Considerable effort has recently been devoted to understanding the barren plateau phenomenon.
no code implementations • 17 May 2023 • Diego García-Martín, Martin Larocca, M. Cerezo
It is well known that artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of a large number of neurons per hidden layer.
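This Gaussian-process limit is easy to check empirically: the output of a random wide one-hidden-layer network with iid weights has vanishing excess kurtosis as the width grows. A minimal sketch, where the tanh activation, the width, and the 1/sqrt(width) readout scaling are illustrative assumptions rather than choices from the paper:

```python
import numpy as np

def wide_net_outputs(width, n_samples, x=0.7, seed=0):
    # one-hidden-layer nets with iid N(0, 1) weights, evaluated at a fixed
    # scalar input x; the readout is scaled by 1/sqrt(width) (NNGP scaling)
    rng = np.random.default_rng(seed)
    b = rng.standard_normal((n_samples, width))   # hidden-layer weights
    a = rng.standard_normal((n_samples, width))   # readout weights
    h = np.tanh(b * x)                            # hidden activations
    return (a * h).sum(axis=1) / np.sqrt(width)   # one output per sampled net

z = wide_net_outputs(width=4096, n_samples=20000)
m, v = z.mean(), z.var()
excess_kurtosis = ((z - m) ** 4).mean() / v ** 2 - 3.0
print(excess_kurtosis)  # near zero: consistent with a Gaussian limit
```

At finite width the excess kurtosis is of order 1/width, so the empirical value should be indistinguishable from Gaussian at this width; repeating the experiment with a small width makes the non-Gaussian corrections visible.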
no code implementations • 1 Mar 2023 • Sujay Kazi, Martin Larocca, M. Cerezo
Our results show that if the QNN is generated by one- and two-body $S_n$-equivariant gates, the QNN is semi-universal but not universal.
no code implementations • 10 Feb 2023 • Diego García-Martín, Martin Larocca, M. Cerezo
In particular, it has been proposed that a QNN can be defined as overparametrized if it has enough parameters to explore all available directions in state space.
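This notion can be probed numerically: the number of independent directions in state space a QNN can explore at a point is the rank of the Jacobian of the state with respect to the parameters, and overparametrization corresponds to that rank saturating as parameters are added. A minimal two-qubit sketch; the Ry/CZ ansatz below is a hypothetical illustration, not the paper's model:

```python
import numpy as np

CZ = np.diag([1, 1, 1, -1]).astype(complex)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def state(thetas):
    # |psi(theta)> for layers of (Ry x Ry) followed by CZ, from |00>
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0
    for t0, t1 in thetas.reshape(-1, 2):       # one layer = 2 Ry angles
        psi = CZ @ (np.kron(ry(t0), ry(t1)) @ psi)
    return psi

def tangent_rank(thetas, eps=1e-6):
    # rank of the real Jacobian d|psi>/d(theta): the number of independent
    # state-space directions available at this parameter point
    cols = []
    for k in range(thetas.size):
        d = np.zeros_like(thetas)
        d[k] = eps
        dpsi = (state(thetas + d) - state(thetas - d)) / (2 * eps)
        cols.append(np.concatenate([dpsi.real, dpsi.imag]))
    return np.linalg.matrix_rank(np.array(cols).T, tol=1e-4)

rng = np.random.default_rng(1)
ranks = [tangent_rank(rng.uniform(0, 2 * np.pi, 2 * L)) for L in (1, 2, 4, 8)]
print(ranks)  # rank grows with depth, then saturates once overparametrized
```

Once the rank stops growing as layers are added, extra parameters contribute no new directions: in the sense above, the ansatz is overparametrized.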
no code implementations • 18 Oct 2022 • Louis Schatzki, Martin Larocca, Quynh T. Nguyen, Frederic Sauvage, M. Cerezo
Despite the great promise of quantum machine learning models, there are several challenges one must overcome before unlocking their full potential.
no code implementations • 16 Oct 2022 • Quynh T. Nguyen, Louis Schatzki, Paolo Braccia, Michael Ragone, Patrick J. Coles, Frederic Sauvage, Martin Larocca, M. Cerezo
Most currently used quantum neural network architectures have little-to-no inductive biases, leading to trainability and generalization issues.
no code implementations • 14 Oct 2022 • Michael Ragone, Paolo Braccia, Quynh T. Nguyen, Louis Schatzki, Patrick J. Coles, Frederic Sauvage, Martin Larocca, M. Cerezo
Recent advances in classical machine learning have shown that creating models with inductive biases encoding the symmetries of a problem can greatly improve performance.
no code implementations • 4 May 2022 • Martin Larocca, Frederic Sauvage, Faris M. Sbahi, Guillaume Verdon, Patrick J. Coles, M. Cerezo
We present theoretical results underpinning the design of $\mathfrak{G}$-invariant models, and exemplify their application through several paradigmatic QML classification tasks, including cases where $\mathfrak{G}$ is a continuous Lie group and where it is a discrete symmetry group.
no code implementations • 23 Sep 2021 • Martin Larocca, Nathan Ju, Diego García-Martín, Patrick J. Coles, M. Cerezo
The prospect of achieving quantum advantage with Quantum Neural Networks (QNNs) is exciting.