Search Results for author: Giulio Franzese

Found 7 papers, 0 papers with code

MINDE: Mutual Information Neural Diffusion Estimation

no code implementations · 13 Oct 2023 · Giulio Franzese, Mustapha Bounoua, Pietro Michiardi

In this work we present a new method for the estimation of Mutual Information (MI) between random variables.
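The listing carries no code for this paper. As generic context only (not the paper's diffusion-based estimator), neural MI estimators are commonly sanity-checked against the closed-form MI of a bivariate Gaussian with correlation rho, I(X;Y) = -1/2 log(1 - rho^2); a minimal sketch of that benchmark quantity:

```python
import math

def gaussian_mi(rho):
    """Closed-form mutual information (in nats) between the two
    components of a bivariate Gaussian with correlation rho."""
    return -0.5 * math.log(1.0 - rho ** 2)

# Independent components share zero information.
print(gaussian_mi(0.0))   # 0.0
# Stronger correlation -> higher mutual information.
print(gaussian_mi(0.9))
```

An estimator evaluated on samples from such a Gaussian can be compared directly against this analytic value.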

Multi-modal Latent Diffusion

no code implementations · 7 Jun 2023 · Mustapha Bounoua, Giulio Franzese, Pietro Michiardi

Multi-modal datasets are ubiquitous in modern applications, and multi-modal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities.

How Much is Enough? A Study on Diffusion Times in Score-based Generative Models

no code implementations · 10 Jun 2022 · Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, Pietro Michiardi

Score-based diffusion models are a class of generative models whose dynamics are described by stochastic differential equations that map noise into data.
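To illustrate the noise-to-data mapping the abstract refers to (a generic sketch, not this paper's contribution), the standard variance-preserving forward SDE, dx = -1/2 beta x dt + sqrt(beta) dW, progressively turns data into approximately Gaussian noise; an Euler-Maruyama simulation of one sample path:

```python
import math
import random

def vp_forward(x0, beta=1.0, t_end=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of the variance-preserving forward SDE
    dx = -0.5 * beta * x dt + sqrt(beta) dW, which drives data toward noise."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        # Drift shrinks the signal; the diffusion term injects Gaussian noise.
        x += -0.5 * beta * x * dt + math.sqrt(beta * dt) * rng.gauss(0.0, 1.0)
    return x
```

The diffusion time t_end is exactly the quantity the paper's title asks about: longer times make the terminal distribution closer to a standard Gaussian, at the cost of a harder reverse process.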

Revisiting the Effects of Stochasticity for Hamiltonian Samplers

no code implementations · 30 Jun 2021 · Giulio Franzese, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi

We revisit the theoretical properties of Hamiltonian stochastic differential equations (SDEs) for Bayesian posterior sampling, and we study the two types of errors that arise from numerical SDE simulation: the discretization error, and the error due to noisy gradient estimates in the context of data subsampling.

Numerical Integration
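The discretization error studied here arises from numerically integrating Hamiltonian dynamics. As a generic illustration (not the paper's analysis), the leapfrog integrator commonly used for Hamiltonian samplers nearly conserves the Hamiltonian, with per-trajectory error of order eps^2 in the step size:

```python
def leapfrog(q, p, grad_u, eps, n):
    """Leapfrog integration of Hamiltonian dynamics with potential U.
    grad_u is the gradient of U; the energy error scales as O(eps^2)."""
    p = p - 0.5 * eps * grad_u(q)          # initial half-step for momentum
    for i in range(n):
        q = q + eps * p                    # full position step
        if i < n - 1:
            p = p - eps * grad_u(q)        # full momentum step
    p = p - 0.5 * eps * grad_u(q)          # final half-step for momentum
    return q, p
```

For a harmonic oscillator, U(q) = q^2 / 2, the trajectory stays close to the exact solution (cos t, -sin t), and the energy 0.5 * (q^2 + p^2) drifts only by O(eps^2).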

Isotropic SGD: a Practical Approach to Bayesian Posterior Sampling

no code implementations · 9 Jun 2020 · Giulio Franzese, Rosa Candela, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi

In this work we define a unified mathematical framework to deepen our understanding of the role of stochastic gradient (SG) noise on the behavior of Markov chain Monte Carlo sampling (SGMCMC) algorithms.
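For context on the SGMCMC family the abstract analyzes (a textbook sketch, not the Isotropic SGD method itself), stochastic gradient Langevin dynamics takes a noisy gradient step on the log-posterior and adds Gaussian noise scaled so that the chain targets the posterior:

```python
import math
import random

def sgld_step(theta, grad_logpost, step_size, rng):
    """One SGLD update: a gradient step on the log-posterior plus injected
    Gaussian noise with variance 2 * step_size."""
    noise = math.sqrt(2.0 * step_size) * rng.gauss(0.0, 1.0)
    return theta + step_size * grad_logpost(theta) + noise

# Sampling a standard normal posterior, whose log-density gradient is -theta.
rng = random.Random(1)
theta, samples = 0.0, []
for _ in range(5000):
    theta = sgld_step(theta, lambda t: -t, 0.05, rng)
    samples.append(theta)
```

The paper's point of departure is that in practice the gradient is itself a noisy minibatch estimate, so the injected noise interacts with the intrinsic SG noise that the abstract's framework characterizes.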

Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD

no code implementations · 21 Oct 2019 · Rosa Candela, Giulio Franzese, Maurizio Filippone, Pietro Michiardi

Large scale machine learning is increasingly relying on distributed optimization, whereby several machines contribute to the training process of a statistical model.

Distributed Optimization
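The sparsification remedy in the title refers to communicating only a small fraction of gradient coordinates between workers. A generic top-k sketch of the idea (the paper's exact scheme is not given in this listing):

```python
def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient vector and zero
    the rest, reducing what each worker must communicate."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    out = [0.0] * len(grad)
    for i in idx:
        out[i] = grad[i]
    return out

# Only the two dominant coordinates survive.
print(topk_sparsify([0.1, -3.0, 2.0, 0.0], 2))
```

Sending fewer coordinates per update shortens communication rounds, which in turn limits how stale an asynchronous worker's gradient can become.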

Deep Information Networks

no code implementations · 6 Mar 2018 · Giulio Franzese, Monica Visintin

We describe a novel classifier with a tree structure, designed using information theory concepts.
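The information-theoretic quantities underlying such tree-structured classifiers can be illustrated with a standard entropy / information-gain computation (a generic sketch, not the paper's specific design):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, left, right):
    """Reduction in label entropy from splitting `labels` into two children."""
    n = len(labels)
    child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - child

# A split that perfectly separates the two classes gains the full 1 bit.
print(information_gain([0, 0, 1, 1], [0, 0], [1, 1]))
```

A tree built on these concepts chooses at each node the split maximizing such a gain.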
