Search Results for author: Stefano Peluchetti

Found 13 papers, 2 papers with code

Non-Denoising Forward-Time Diffusions

no code implementations • 22 Dec 2023 • Stefano Peluchetti

The scope of this paper is generative modeling through diffusion processes.

Denoising

Diffusion Bridge Mixture Transports, Schrödinger Bridge Problems and Generative Modeling

1 code implementation • 3 Apr 2023 • Stefano Peluchetti

The dynamic Schrödinger bridge problem seeks a stochastic process that defines a transport between two target probability measures while being closest, in terms of Kullback-Leibler divergence, to a reference process.

valid
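In generic notation (the marginals and reference process are left abstract here, and the symbols are not taken from the paper), the dynamic Schrödinger bridge problem described above can be stated as:

```latex
% Among path measures P with the prescribed endpoint marginals,
% find the one closest in KL divergence to the reference process Q.
\min_{P \,:\, P_0 = \mu_0,\; P_1 = \mu_1} \; \mathrm{KL}(P \,\|\, Q)
```

where $\mu_0$ and $\mu_1$ are the two target probability measures and $Q$ is the reference process.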

Large-width asymptotics for ReLU neural networks with $\alpha$-Stable initializations

no code implementations • 16 Jun 2022 • Stefano Favaro, Sandra Fortini, Stefano Peluchetti

As a difference with respect to the Gaussian setting, our result shows that the choice of the activation function affects the scaling of the NN, that is: to achieve the infinitely wide $\alpha$-Stable process, the ReLU activation requires an additional logarithmic term in the scaling with respect to sub-linear activations.

regression

Deep Stable neural networks: large-width asymptotics and convergence rates

no code implementations • 2 Aug 2021 • Stefano Favaro, Sandra Fortini, Stefano Peluchetti

Then, we establish sup-norm convergence rates of the rescaled deep Stable NN to the Stable SP, under a "joint growth" and a "sequential growth" of the width over the NN's layers.

Bayesian Inference

Learning Stochastic Optimal Policies via Gradient Descent

no code implementations • 7 Jun 2021 • Stefano Massaroli, Michael Poli, Stefano Peluchetti, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric control policies.

Portfolio Optimization
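As a minimal illustration of this idea (a toy example, not the paper's method): a linear feedback policy for a scalar controlled SDE can be improved by direct gradient descent on a Monte Carlo estimate of the control cost, here with common-random-number finite differences standing in for a proper stochastic-gradient estimator. All dynamics, costs, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def rollout_cost(k, rng, n_paths=256, n_steps=50, dt=0.02):
    """Monte Carlo cost of the linear policy u = -k*x on the Euler-discretized
    SDE dx = (x + u) dt + 0.2 dW, with running cost x^2 + 0.1 u^2."""
    x = np.full(n_paths, 1.0)
    cost = np.zeros(n_paths)
    for _ in range(n_steps):
        u = -k * x
        cost += (x**2 + 0.1 * u**2) * dt
        x = x + (x + u) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(cost.mean())

def train_policy(k0=0.0, lr=0.2, iters=200, eps=1e-3, seed=0):
    """Direct gradient descent on the policy parameter; the same noise seed is
    reused for k+eps and k-eps so the finite difference has low variance."""
    k = k0
    for i in range(iters):
        c_plus = rollout_cost(k + eps, np.random.default_rng(seed + i))
        c_minus = rollout_cost(k - eps, np.random.default_rng(seed + i))
        k -= lr * (c_plus - c_minus) / (2 * eps)
    return k
```

Starting from the uncontrolled (and unstable) policy k = 0, the descent settles near the stabilizing feedback gain, and the learned policy achieves a lower cost than the initial one on fresh noise.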

Large-width functional asymptotics for deep Gaussian neural networks

no code implementations • ICLR 2021 • Daniele Bracale, Stefano Favaro, Sandra Fortini, Stefano Peluchetti

In this paper, we consider fully connected feed-forward deep neural networks where weights and biases are independent and identically distributed according to Gaussian distributions.

Gaussian Processes

Learning-augmented count-min sketches via Bayesian nonparametrics

no code implementations • 8 Feb 2021 • Emanuele Dolera, Stefano Favaro, Stefano Peluchetti

Under this more general framework, we apply the arguments of the "Bayesian" proof of the CMS-DP, suitably adapted to the PYP prior, in order to compute the posterior distribution of a point query, given the hashed data.

A Bayesian nonparametric approach to count-min sketch under power-law data streams

no code implementations • 7 Feb 2021 • Emanuele Dolera, Stefano Favaro, Stefano Peluchetti

The count-min sketch (CMS) is a randomized data structure that provides estimates of tokens' frequencies in a large data stream using a compressed representation of the data by random hashing.
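For reference, the classical (non-Bayesian) count-min sketch that this line of work builds on fits in a few lines; the hash construction and table sizing below are illustrative choices, not taken from the paper.

```python
import hashlib

class CountMinSketch:
    """Count-min sketch: d hash rows of width w. A point query returns the
    minimum cell across rows; collisions only inflate counts, so the
    estimate never undercounts the true frequency."""

    def __init__(self, width=2048, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, token, row):
        # Salt the hash with the row number to get d independent-looking hashes.
        h = hashlib.blake2b(f"{row}:{token}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def update(self, token, count=1):
        for row in range(self.depth):
            self.table[row][self._index(token, row)] += count

    def query(self, token):
        return min(self.table[row][self._index(token, row)]
                   for row in range(self.depth))
```

Each cell is bounded by the total stream count, so any estimate lies between the token's true frequency and the stream length.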

Infinite-channel deep stable convolutional neural networks

no code implementations • 7 Feb 2021 • Daniele Bracale, Stefano Favaro, Sandra Fortini, Stefano Peluchetti

The interplay between infinite-width neural networks (NNs) and classes of Gaussian processes (GPs) has been well known since the seminal work of Neal (1996).

Gaussian Processes
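The Gaussian half of that interplay is easy to check empirically. The sketch below (fully connected rather than convolutional, and purely illustrative) samples a one-hidden-layer ReLU network with Neal-style 1/√width scaling and compares the output distribution against its infinite-width Gaussian limit.

```python
import numpy as np

def random_relu_net(x, width, rng):
    """One draw of a one-hidden-layer ReLU network whose i.i.d. Gaussian
    weights are scaled by 1/sqrt(fan-in), as in Neal (1996)."""
    d = x.shape[0]
    w1 = rng.standard_normal((width, d)) / np.sqrt(d)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    return float(w2 @ np.maximum(w1 @ x, 0.0))

rng = np.random.default_rng(0)
x = np.ones(4)  # |x|^2 / d = 1, so each pre-activation is standard normal
samples = np.array([random_relu_net(x, width=1000, rng=rng)
                    for _ in range(20000)])
# Infinite-width limit at this input: a centred Gaussian with variance
# E[relu(z)^2] = 1/2 for z ~ N(0, 1).
print(samples.mean(), samples.var())
```

At width 1000 the empirical mean and variance already sit close to the limiting values 0 and 1/2.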

Doubly infinite residual neural networks: a diffusion process approach

no code implementations • 7 Jul 2020 • Stefano Peluchetti, Stefano Favaro

Our results highlight a limited expressive power of doubly infinite ResNets when the unscaled network's parameters are i.i.d.

Gaussian Processes

Stable behaviour of infinitely wide deep neural networks

1 code implementation • 1 Mar 2020 • Stefano Favaro, Sandra Fortini, Stefano Peluchetti

We consider fully connected feed-forward deep neural networks (NNs) where weights and biases are independent and identically distributed as symmetric centered stable distributions.

Gaussian Processes

An empirical study of pretrained representations for few-shot classification

no code implementations • 3 Oct 2019 • Tiago Ramalho, Thierry Sousbie, Stefano Peluchetti

Recent algorithms with state-of-the-art few-shot classification results begin by computing features of the data with a large pretrained model.

Adversarial Robustness, Classification
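A typical baseline in this setting (illustrative, not necessarily the configuration studied in the paper) classifies query examples by cosine similarity to per-class centroids of frozen pretrained features:

```python
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Few-shot baseline on frozen features: average the support features of
    each class into a centroid, then assign each query to the centroid with
    the highest cosine similarity."""
    classes = np.unique(support_labels)
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    queries = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    return classes[np.argmax(queries @ centroids.T, axis=1)]
```

With good pretrained features the classes form tight clusters, and this parameter-free rule is a surprisingly strong baseline.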

Infinitely deep neural networks as diffusion processes

no code implementations • 27 May 2019 • Stefano Peluchetti, Stefano Favaro

When the parameters are independently and identically distributed (initialized), neural networks exhibit undesirable properties that emerge as the number of layers increases, e.g. a vanishing dependency on the input and a concentration on restrictive families of functions, including constant functions.
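The vanishing input dependency is easy to reproduce in a toy experiment (the activation, width, and variance choices below are illustrative assumptions, not taken from the paper): push two orthogonal inputs through the same deep i.i.d.-initialized tanh network and watch their hidden representations align.

```python
import numpy as np

def propagate_pair(x1, x2, depth, width, rng):
    """Run two inputs through one shared deep tanh network with i.i.d.
    N(0, 1/fan_in) weights and N(0, 0.1) biases; return final hidden states."""
    h1, h2 = x1, x2
    for _ in range(depth):
        w = rng.standard_normal((width, h1.shape[0])) / np.sqrt(h1.shape[0])
        b = np.sqrt(0.1) * rng.standard_normal(width)
        h1, h2 = np.tanh(w @ h1 + b), np.tanh(w @ h2 + b)
    return h1, h2

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
x1 = np.array([1.0, 0.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0, 0.0])  # orthogonal to x1
h1, h2 = propagate_pair(x1, x2, depth=50, width=1000, rng=rng)
# Despite the inputs being orthogonal, the depth-50 representations are
# nearly parallel: the network's output barely depends on its input.
print(cosine(h1, h2))
```

This is the concentration phenomenon in miniature: with these variances the layer map contracts pairwise correlations toward 1, so by depth 50 the network is close to a constant function of its input.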
