Search Results for author: Giacomo De Palma

Found 6 papers, 3 papers with code

Trained quantum neural networks are Gaussian processes

no code implementations • 13 Feb 2024 • Filippo Girardi, Giacomo De Palma

We prove that, as long as the network is not affected by barren plateaus, the trained network can perfectly fit the training set and that the probability distribution of the function generated after training still converges in distribution to a Gaussian process.

Gaussian Processes
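
As a loose classical analogue of this statement (an illustrative sketch only, not the paper's quantum setting; the architecture, widths, and initialization scaling below are assumptions), one can sample many wide one-hidden-layer ReLU networks at random initialization and check that their joint outputs at two fixed inputs look approximately Gaussian:

import numpy as np

# Classical analogue sketch: wide random one-hidden-layer ReLU networks
# evaluated at two fixed inputs.  As the width grows, the joint law of the
# two outputs is expected to look approximately Gaussian.
rng = np.random.default_rng(0)
d, width, n_nets = 5, 2048, 5000   # assumed input dim, hidden width, number of sampled networks
x1, x2 = rng.normal(size=d), rng.normal(size=d)

outs = np.empty((n_nets, 2))
for i in range(n_nets):
    W = rng.normal(size=(width, d)) / np.sqrt(d)      # 1/sqrt(fan-in) initialization
    v = rng.normal(size=width) / np.sqrt(width)
    outs[i] = [v @ np.maximum(W @ x1, 0), v @ np.maximum(W @ x2, 0)]

# For a Gaussian process the mean and covariance are the whole story, and the
# excess kurtosis of each output should be close to zero.
print("mean:", outs.mean(axis=0))
print("covariance:\n", np.cov(outs.T))
print("excess kurtosis:", ((outs - outs.mean(0)) ** 4).mean(0) / outs.var(0) ** 2 - 3)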

Quantum algorithms for group convolution, cross-correlation, and equivariant transformations

no code implementations • 23 Sep 2021 • Grecia Castelazo, Quynh T. Nguyen, Giacomo De Palma, Dirk Englund, Seth Lloyd, Bobak T. Kiani

Group convolutions and cross-correlations, which are equivariant to the actions of group elements, are commonly used in mathematics to analyze or take advantage of symmetries inherent in a given problem setting.
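
For reference, a minimal sketch of the classical notion being generalized: a group convolution over the cyclic group $\mathbb{Z}_n$ (i.e. circular convolution), together with a check that it is equivariant to cyclic shifts. The signal length and test data are arbitrary choices, not taken from the paper.

import numpy as np

def cyclic_conv(f, g):
    # Group convolution over Z_n: (f * g)[t] = sum_s f[s] * g[(t - s) mod n].
    n = len(f)
    return np.array([sum(f[s] * g[(t - s) % n] for s in range(n)) for t in range(n)])

rng = np.random.default_rng(1)
n = 8
f, g = rng.normal(size=n), rng.normal(size=n)

# Equivariance: convolving a cyclically shifted signal gives the shifted convolution.
shift = lambda h, k: np.roll(h, k)
print(np.allclose(cyclic_conv(shift(f, 3), g), shift(cyclic_conv(f, g), 3)))   # True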

Learning quantum data with the quantum Earth Mover's distance

2 code implementations • 8 Jan 2021 • Bobak Toussi Kiani, Giacomo De Palma, Milad Marvian, Zi-Wen Liu, Seth Lloyd

Quantifying how far the output of a learning algorithm is from its target is an essential task in machine learning.

Generative Adversarial Network
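
The classical counterpart of the distance studied here, the Earth Mover's (Wasserstein-1) distance between probability distributions, can be computed off the shelf for one-dimensional samples; a small sketch with made-up data (the paper's quantum version generalizes this notion to quantum states):

import numpy as np
from scipy.stats import wasserstein_distance

# Classical Wasserstein-1 ("earth mover's") distance between two 1-D samples.
rng = np.random.default_rng(2)
target = rng.normal(loc=0.0, scale=1.0, size=1000)   # hypothetical target data
model = rng.normal(loc=0.5, scale=1.2, size=1000)    # hypothetical model output
print(wasserstein_distance(target, model))           # how much mass moves, times how far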

Adversarial Robustness Guarantees for Random Deep Neural Networks

1 code implementation • 13 Apr 2020 • Giacomo De Palma, Bobak T. Kiani, Seth Lloyd

We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any $p\ge1$, the $\ell^p$ distance of any given input from the classification boundary scales as one over the square root of the dimension of the input times the $\ell^p$ norm of the input.

Adversarial Robustness • Gaussian Processes
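
A rough numerical sketch of the stated scaling for the $\ell^2$ case (a hypothetical check on a one-hidden-layer random ReLU network with assumed sizes, not the paper's construction): the first-order estimate of the distance from an input $x$ to the boundary $\{f=0\}$ is $|f(x)|/\|\nabla f(x)\|_2$, which can be compared with the predicted scale $\|x\|_2/\sqrt{n}$.

import numpy as np

rng = np.random.default_rng(3)

def boundary_distance_estimate(n, width=512):
    # First-order estimate |f(x)| / ||grad f(x)||_2 of the l2 distance from a
    # random input x to the classification boundary of a random ReLU network.
    W = rng.normal(size=(width, n)) / np.sqrt(n)
    b = rng.normal(size=width)
    v = rng.normal(size=width) / np.sqrt(width)
    x = rng.normal(size=n)
    pre = W @ x + b
    f = v @ np.maximum(pre, 0)
    grad = W.T @ (v * (pre > 0))
    return abs(f) / np.linalg.norm(grad), np.linalg.norm(x) / np.sqrt(n)

for n in (64, 256, 1024, 4096):
    est, pred = np.median([boundary_distance_estimate(n) for _ in range(200)], axis=0)
    print(f"n={n:5d}   estimated boundary distance {est:.3f}   ||x||_2/sqrt(n) = {pred:.3f}")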

Quantum optimal transport with quantum channels

no code implementations • 3 Nov 2019 • Giacomo De Palma, Dario Trevisan

We propose a new generalization to quantum states of the Wasserstein distance, which is a fundamental distance between probability distributions given by the minimization of a transport cost.

Mathematical Physics • Functional Analysis • Probability • Quantum Physics
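
For reference, the classical quantity being generalized is the Monge-Kantorovich (Wasserstein) transport cost between probability measures $\mu$ and $\nu$, with $\Pi(\mu,\nu)$ the set of couplings and $c$ the transport cost function:

$W_c(\mu,\nu) = \inf_{\pi \in \Pi(\mu,\nu)} \int c(x,y)\, \mathrm{d}\pi(x,y).$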

Random deep neural networks are biased towards simple functions

1 code implementation • NeurIPS 2019 • Giacomo De Palma, Bobak Toussi Kiani, Seth Lloyd

We prove that the binary classifiers of bit strings generated by random wide deep neural networks with ReLU activation function are biased towards simple functions.

General Classification • Generalization Bounds
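
One informal way to probe this bias (an illustrative sketch with assumed architecture and sizes, not the paper's experiment): sample random one-hidden-layer ReLU networks on all $n$-bit strings and measure how often strings at Hamming distance one receive different labels; a flip rate well below the 0.5 expected for uniformly random labels is a crude proxy for the bias towards simple functions.

import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n, width, n_nets = 8, 512, 50                          # assumed sizes
X = np.array(list(product([0.0, 1.0], repeat=n)))      # all 2^n bit strings, lexicographic order

def random_labels(X):
    # Binary labels from a random one-hidden-layer ReLU network.
    W = rng.normal(size=(width, n)) / np.sqrt(n)
    b = rng.normal(size=width)
    v = rng.normal(size=width) / np.sqrt(width)
    return (np.maximum(X @ W.T + b, 0) @ v > 0).astype(int)

def flip_rate(y):
    # Fraction of Hamming-distance-1 pairs whose labels differ.
    flips = 0.0
    powers = 1 << np.arange(n)[::-1]                   # maps a bit string to its row index in X
    for j in range(n):
        neighbour = X.copy()
        neighbour[:, j] = 1 - neighbour[:, j]          # flip bit j in every string
        idx = (neighbour @ powers).astype(int)
        flips += (y != y[idx]).mean()
    return flips / n

print(np.mean([flip_rate(random_labels(X)) for _ in range(n_nets)]))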
