no code implementations • 13 Feb 2024 • Filippo Girardi, Giacomo De Palma
We prove that, as long as the network is not affected by barren plateaus, the trained network can perfectly fit the training set, and that the function generated by the trained network still converges in distribution to a Gaussian process.
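The paper concerns quantum neural networks, but the analogous classical fact is easy to check empirically: across weight draws, the output of a random *wide* network at a fixed input is approximately Gaussian. The one-hidden-layer architecture and scaling below are illustrative choices, not the paper's construction.

```python
import numpy as np

def random_relu_net_output(x, width, rng):
    """Output of a one-hidden-layer ReLU net with random Gaussian
    weights scaled by 1/sqrt(fan-in)."""
    n = len(x)
    W1 = rng.standard_normal((width, n)) / np.sqrt(n)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    return float(w2 @ np.maximum(W1 @ x, 0.0))

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
# Sample the output at a fixed input over many independent weight draws
samples = np.array([random_relu_net_output(x, 512, rng) for _ in range(2000)])
# The empirical distribution is centered at zero, consistent with
# convergence to a Gaussian process as the width grows
```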
no code implementations • 23 Sep 2021 • Grecia Castelazo, Quynh T. Nguyen, Giacomo De Palma, Dirk Englund, Seth Lloyd, Bobak T. Kiani
Group convolutions and cross-correlations, which are equivariant to the actions of group elements, are commonly used in mathematics to analyze or take advantage of symmetries inherent in a given problem setting.
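A minimal sketch of the classical operation being referred to (not the paper's quantum algorithm): the group convolution over the cyclic group $Z_n$, which reduces to circular convolution and is equivariant to cyclic shifts.

```python
import numpy as np

def cyclic_convolution(f, g):
    """Group convolution over Z_n: (f * g)[x] = sum_y f[y] g[(x - y) mod n]."""
    n = len(f)
    return np.array([sum(f[y] * g[(x - y) % n] for y in range(n))
                     for x in range(n)])

f = np.array([1.0, 2.0, 0.0, -1.0])
g = np.array([0.5, 0.0, 1.0, 0.0])
out = cyclic_convolution(f, g)
# Equivariance: shifting the input shifts the output by the same group element
assert np.allclose(cyclic_convolution(np.roll(f, 1), g), np.roll(out, 1))
```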
2 code implementations • 8 Jan 2021 • Bobak Toussi Kiani, Giacomo De Palma, Milad Marvian, Zi-Wen Liu, Seth Lloyd
Quantifying how far the output of a learning algorithm is from its target is an essential task in machine learning.
1 code implementation • 13 Apr 2020 • Giacomo De Palma, Bobak T. Kiani, Seth Lloyd
We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that, for any $p\ge1$, the $\ell^p$ distance of any given input from the classification boundary scales as the $\ell^p$ norm of the input divided by the square root of the input dimension.
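A hedged sketch of the quantity in question: a first-order estimate $|f(x)|/\|\nabla f(x)\|_2$ of the $\ell^2$ distance from an input to the decision boundary $\{f=0\}$ of a random ReLU network. The architecture and the finite-difference gradient are illustrative choices, not the paper's construction.

```python
import numpy as np

def random_relu_net(dims, rng):
    """Random ReLU network with Gaussian weights scaled by 1/sqrt(fan-in)."""
    Ws = [rng.standard_normal((m, n)) / np.sqrt(n)
          for n, m in zip(dims[:-1], dims[1:])]
    def forward(x):
        h = x
        for W in Ws[:-1]:
            h = np.maximum(W @ h, 0.0)
        return float(Ws[-1] @ h)
    return forward

def boundary_distance(f, x, eps=1e-5):
    """First-order estimate |f(x)| / ||grad f(x)||_2 of the l2 distance
    to the boundary f = 0, with a finite-difference gradient."""
    fx = f(x)
    grad = np.array([(f(x + eps * e) - fx) / eps for e in np.eye(len(x))])
    return abs(fx) / np.linalg.norm(grad)

rng = np.random.default_rng(1)
n = 64
f = random_relu_net([n, 128, 128, 1], rng)
x = rng.standard_normal(n)
d = boundary_distance(f, x)
# The theorem predicts d is on the order of ||x||_2 / sqrt(n)
```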
no code implementations • 3 Nov 2019 • Giacomo De Palma, Dario Trevisan
We propose a new generalization of the Wasserstein distance to quantum states; the Wasserstein distance is a fundamental distance between probability distributions, defined by minimizing a transport cost.
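A sketch of the *classical* notion being generalized: the Wasserstein distance between two discrete distributions as a linear program minimizing the total transport cost $\sum_{ij}\pi_{ij}c_{ij}$ over couplings $\pi$ with the given marginals. The use of `scipy.optimize.linprog` is an illustrative choice.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(p, q, cost):
    """Minimize sum_ij pi_ij * cost_ij over transport plans pi >= 0
    whose row sums equal p and column sums equal q."""
    n, m = cost.shape
    A_eq = []
    for i in range(n):            # row sums of the plan must equal p
        row = np.zeros((n, m)); row[i, :] = 1.0
        A_eq.append(row.ravel())
    for j in range(m):            # column sums of the plan must equal q
        col = np.zeros((n, m)); col[:, j] = 1.0
        A_eq.append(col.ravel())
    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([p, q]), bounds=(0, None))
    return res.fun

points = np.array([0.0, 1.0, 2.0])
cost = np.abs(points[:, None] - points[None, :])   # c_ij = |x_i - x_j|
p = np.array([1.0, 0.0, 0.0])   # all mass at point 0
q = np.array([0.0, 0.0, 1.0])   # all mass at point 2
# The only feasible plan moves unit mass a distance 2, so W1(p, q) = 2
```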
Mathematical Physics • Functional Analysis • Probability • Quantum Physics
1 code implementation • NeurIPS 2019 • Giacomo De Palma, Bobak Toussi Kiani, Seth Lloyd
We prove that the binary classifiers of bit strings generated by random wide deep neural networks with ReLU activation function are biased towards simple functions.
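A hedged empirical sketch of this bias: sample random ReLU networks with bias terms, read off the Boolean function each induces on $\{0,1\}^n$, and count how often each truth table appears. The architecture and sample sizes are illustrative choices, not the paper's setup.

```python
import numpy as np
from itertools import product
from collections import Counter

def random_boolean_classifier(n, width, rng):
    """Random one-hidden-layer ReLU net thresholded at zero, viewed as a
    Boolean classifier of n-bit strings."""
    W1 = rng.standard_normal((width, n)) / np.sqrt(n)
    b1 = rng.standard_normal(width)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    return lambda x: float(w2 @ np.maximum(W1 @ x + b1, 0.0)) >= 0.0

rng = np.random.default_rng(0)
n = 4
inputs = [np.array(bits, dtype=float) for bits in product([0, 1], repeat=n)]
counts = Counter()
for _ in range(500):
    f = random_boolean_classifier(n, 64, rng)
    counts[tuple(f(x) for x in inputs)] += 1
# A bias toward simple functions shows up as a few truth tables (e.g. the
# constant ones) recurring far more often than 500 uniform draws from the
# 2**16 possible Boolean functions would make likely
```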