no code implementations • 18 Mar 2025 • Chao Wang, Giulio Franzese, Alessandro Finamore, Pietro Michiardi
First, we introduce RFMI, a novel Mutual Information (MI) estimator for RF models that uses the pre-trained model itself for MI estimation.
no code implementations • 26 Feb 2025 • Alberto Foresti, Giulio Franzese, Pietro Michiardi
In this work, we introduce INFO-SEDD, a novel method for estimating information-theoretic quantities of discrete data, including mutual information and entropy.
1 code implementation • 5 Feb 2025 • Simone Clemente, Zied Ben Houidi, Alexis Huet, Dario Rossi, Giulio Franzese, Pietro Michiardi
Unlike humans, who naturally resist contradictory information, these models indiscriminately accept contradictions, leading to devastating interference that destroys up to 80% of unrelated knowledge even when learning as few as 10-100 contradictory facts.
no code implementations • 4 Oct 2024 • Giulio Franzese, Mattia Martini, Giulio Corallo, Paolo Papotti, Pietro Michiardi
In this work we study how diffusion-based generative models produce high-dimensional data, such as images, by implicitly relying on a manifestation of a low-dimensional set of latent abstractions that guide the generative process.
no code implementations • 31 May 2024 • Chao Wang, Giulio Franzese, Alessandro Finamore, Massimo Gallo, Pietro Michiardi
Diffusion models for Text-to-Image (T2I) conditional generation have recently achieved tremendous success.
1 code implementation • 8 Feb 2024 • Mustapha Bounoua, Giulio Franzese, Pietro Michiardi
The analysis of scientific data and complex multivariate systems requires information quantities that capture relationships among multiple random variables.
1 code implementation • 13 Oct 2023 • Giulio Franzese, Mustapha Bounoua, Pietro Michiardi
In this work we present a new method for the estimation of Mutual Information (MI) between random variables.
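As a point of reference for what such an estimator must recover: for jointly Gaussian variables, MI has the closed form I(X;Y) = -½ log(1 - ρ²). A minimal Python sketch below checks this with an oracle Monte Carlo estimate that uses the known densities; it is an illustration of the target quantity, not the diffusion-based estimator proposed in the paper:

```python
import numpy as np

# For jointly Gaussian (X, Y) with unit variances and correlation rho,
# mutual information has the closed form I(X; Y) = -0.5 * log(1 - rho^2) (nats).
def gaussian_mi(rho):
    return -0.5 * np.log(1.0 - rho ** 2)

# Oracle Monte Carlo estimate: I(X; Y) = E[log p(x, y) - log p(x) - log p(y)],
# evaluated with the known Gaussian densities (for illustration only; practical
# estimators do not have access to these densities).
def mc_mi(rho, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    x, y = xy[:, 0], xy[:, 1]
    log_joint = (-0.5 * (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
                 - 0.5 * np.log(1 - rho**2) - np.log(2 * np.pi))
    log_margs = -0.5 * x**2 - 0.5 * y**2 - np.log(2 * np.pi)
    return np.mean(log_joint - log_margs)

rho = 0.8
print(gaussian_mi(rho), mc_mi(rho))  # the two values should agree closely
```

Closed-form cases like this are the standard benchmarks on which neural MI estimators are validated before being applied to data with unknown densities.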
1 code implementation • 7 Jun 2023 • Mustapha Bounoua, Giulio Franzese, Pietro Michiardi
Multi-modal datasets are ubiquitous in modern applications, and multi-modal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities.
1 code implementation • NeurIPS 2023 • Ba-Hien Tran, Giulio Franzese, Pietro Michiardi, Maurizio Filippone
Generative Models (GMs) have attracted considerable attention due to their tremendous success in various domains, such as computer vision, where they are capable of generating impressively realistic images.
1 code implementation • NeurIPS 2023 • Giulio Franzese, Giulio Corallo, Simone Rossi, Markus Heinonen, Maurizio Filippone, Pietro Michiardi
We introduce Functional Diffusion Processes (FDPs), which generalize score-based diffusion models to infinite-dimensional function spaces.
Ranked #26 on Image Generation on CelebA 64x64
no code implementations • 10 Jun 2022 • Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, Pietro Michiardi
Score-based diffusion models are a class of generative models whose dynamics are described by stochastic differential equations that map noise into data.
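The noise-to-data mapping can be illustrated on a toy case where the score is available in closed form. Below is a hedged one-dimensional sketch (not the paper's model): a variance-exploding forward SDE dx = dW applied to Gaussian data N(mu, sigma^2) gives Gaussian marginals p_t = N(mu, sigma^2 + t), so the reverse-time SDE can be integrated with Euler-Maruyama using the exact score instead of a learned network:

```python
import numpy as np

# Toy reverse-time sampler for the variance-exploding forward SDE dx = dW.
# Data distribution: N(mu, sigma^2); the marginal at time t is N(mu, sigma^2 + t),
# so grad log p_t is known in closed form (real models learn it with a network).
mu, sigma2, T, steps, n = 2.0, 0.5, 4.0, 1000, 50_000
rng = np.random.default_rng(0)

def score(x, t):
    return -(x - mu) / (sigma2 + t)  # exact score of the Gaussian marginal p_t

dt = T / steps
x = mu + np.sqrt(sigma2 + T) * rng.standard_normal(n)  # sample the prior p_T
t = T
for _ in range(steps):
    # Euler-Maruyama step of the reverse SDE: dx = score(x, t) dt + dW-bar
    x = x + score(x, t) * dt + np.sqrt(dt) * rng.standard_normal(n)
    t -= dt

print(x.mean(), x.var())  # should approach mu = 2.0 and sigma^2 = 0.5
```

Running the same integration with a learned, approximate score in place of the exact one is precisely where the discretization and approximation errors studied in this line of work enter.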
no code implementations • 30 Jun 2021 • Giulio Franzese, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi
We revisit the theoretical properties of Hamiltonian stochastic differential equations (SDEs) for Bayesian posterior sampling, and we study the two types of errors that arise from numerical SDE simulation: the discretization error and the error due to noisy gradient estimates in the context of data subsampling.
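Both error sources can be made visible on a toy target. The sketch below uses unadjusted Langevin dynamics on a standard Gaussian (a simpler scheme than the Hamiltonian SDEs studied in the paper, chosen only for illustration), with optional Gaussian noise injected into the gradient to mimic minibatch subsampling:

```python
import numpy as np

# Unadjusted Langevin dynamics targeting N(0, 1):
#   theta <- theta - 0.5 * eps * grad + sqrt(eps) * z,   grad = theta (+ noise).
# Extra Gaussian noise on the gradient mimics data-subsampling error.
def langevin_chain(eps, grad_noise_std, n_steps=200_000, seed=1):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_steps)   # injected Langevin noise
    g = rng.standard_normal(n_steps)   # mimics minibatch gradient noise
    theta = 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        grad = theta + grad_noise_std * g[i]  # noisy gradient of -log p(theta)
        theta = theta - 0.5 * eps * grad + np.sqrt(eps) * z[i]
        samples[i] = theta
    return samples

# With exact gradients the stationary variance is close to 1 (up to an O(eps)
# discretization bias); gradient noise visibly inflates it.
v_exact = langevin_chain(0.01, 0.0).var()
v_noisy = langevin_chain(0.01, 20.0).var()
print(v_exact, v_noisy)
```

The gap between v_exact and the true variance isolates the discretization error, while the gap between v_noisy and v_exact isolates the subsampling error, mirroring the two-term decomposition analyzed in the paper.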
no code implementations • 9 Jun 2020 • Giulio Franzese, Rosa Candela, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi
In this work we define a unified mathematical framework to deepen our understanding of the role of stochastic gradient (SG) noise in the behavior of stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms.
no code implementations • 21 Oct 2019 • Rosa Candela, Giulio Franzese, Maurizio Filippone, Pietro Michiardi
Large scale machine learning is increasingly relying on distributed optimization, whereby several machines contribute to the training process of a statistical model.
no code implementations • 6 Mar 2018 • Giulio Franzese, Monica Visintin
We describe a novel classifier with a tree structure, designed using information theory concepts.