no code implementations • 4 Oct 2024 • Giulio Franzese, Mattia Martini, Giulio Corallo, Paolo Papotti, Pietro Michiardi
In this work we study how diffusion-based generative models produce high-dimensional data, such as an image, by implicitly relying on a manifestation of a low-dimensional set of latent abstractions that guide the generative process.
1 code implementation • 21 Jun 2024 • Raphael Azorin, Zied Ben Houidi, Massimo Gallo, Alessandro Finamore, Pietro Michiardi
At first, rows (or columns) are encoded separately by computing attention between their fields.
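The field-level attention step described above can be sketched in plain NumPy. This is an illustrative toy, not the paper's architecture: the embedding size, weight matrices, and function names below are all invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode_row(fields, Wq, Wk, Wv):
    """Encode one table row by computing attention between its field embeddings."""
    Q, K, V = fields @ Wq, fields @ Wk, fields @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # field-to-field attention
    return scores @ V  # one contextualized vector per field

rng = np.random.default_rng(0)
d = 8
fields = rng.normal(size=(5, d))  # a row with 5 fields, each embedded in R^d
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
encoded = encode_row(fields, Wq, Wk, Wv)
print(encoded.shape)  # (5, 8)
```

Encoding columns works the same way, with fields taken down a column instead of across a row.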
no code implementations • 31 May 2024 • Chao Wang, Giulio Franzese, Alessandro Finamore, Massimo Gallo, Pietro Michiardi
In a nutshell, our method uses self-supervised fine-tuning and relies on point-wise mutual information between prompts and images to define a synthetic training set to induce model alignment.
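Point-wise mutual information (PMI) measures how much more often a prompt and an image co-occur than chance would predict: pmi(x, y) = log p(x, y) - log p(x) - log p(y). A minimal plug-in estimate from co-occurrence counts, purely to illustrate the quantity (the paper's estimator is not shown here):

```python
import math
from collections import Counter

def pmi(pairs):
    """Plug-in PMI estimates from a list of (prompt, image) co-occurrences."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return {(x, y): math.log(c / n) - math.log(px[x] / n) - math.log(py[y] / n)
            for (x, y), c in joint.items()}

# Toy co-occurrence data; labels are hypothetical
scores = pmi([("cat", "img1"), ("cat", "img1"), ("dog", "img2"), ("cat", "img2")])
print(round(scores[("dog", "img2")], 3))  # log 2 ≈ 0.693
```

Pairs with high PMI are better aligned than chance, which is what makes the score usable to select samples for a synthetic training set.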
2 code implementations • 8 Feb 2024 • Mustapha Bounoua, Giulio Franzese, Pietro Michiardi
The analysis of scientific data and complex multivariate systems requires information quantities that capture relationships among multiple random variables.
no code implementations • 19 Jan 2024 • Chao Wang, Alessandro Finamore, Pietro Michiardi, Massimo Gallo, Dario Rossi
Data Augmentation (DA) -- enriching training data by adding synthetic samples -- is a technique widely adopted in Computer Vision (CV) and Natural Language Processing (NLP) tasks to improve model performance.
no code implementations • 21 Oct 2023 • Chao Wang, Alessandro Finamore, Pietro Michiardi, Massimo Gallo, Dario Rossi
Data Augmentation (DA) -- augmenting training data with synthetic samples -- is widely adopted in Computer Vision (CV) to improve model performance.
1 code implementation • 13 Oct 2023 • Giulio Franzese, Mustapha Bounoua, Pietro Michiardi
In this work we present a new method for the estimation of Mutual Information (MI) between random variables.
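For context, the quantity being estimated is I(X; Y) = sum p(x, y) log[p(x, y) / (p(x) p(y))]. A naive plug-in estimator for discrete samples, shown only as a baseline reference point (the paper proposes a different, learned estimator):

```python
import numpy as np

def discrete_mi(x, y):
    """Plug-in MI estimate for paired discrete samples (nats)."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    for a, b in zip(xi, yi):
        joint[a, b] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0  # skip zero cells in the log
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

# Identical binary variables: MI equals the entropy, log 2
x = np.array([0, 0, 1, 1])
print(round(discrete_mi(x, x), 3))  # 0.693
```

Plug-in counting breaks down for continuous, high-dimensional variables, which is precisely the regime learned MI estimators target.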
1 code implementation • 7 Jun 2023 • Mustapha Bounoua, Giulio Franzese, Pietro Michiardi
Multi-modal datasets are ubiquitous in modern applications, and multi-modal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities.
1 code implementation • NeurIPS 2023 • Ba-Hien Tran, Giulio Franzese, Pietro Michiardi, Maurizio Filippone
Generative Models (GMs) have attracted considerable attention due to their tremendous success in various domains, such as computer vision, where they are capable of generating impressively realistic-looking images.
1 code implementation • NeurIPS 2023 • Giulio Franzese, Giulio Corallo, Simone Rossi, Markus Heinonen, Maurizio Filippone, Pietro Michiardi
We introduce Functional Diffusion Processes (FDPs), which generalize score-based diffusion models to infinite-dimensional function spaces.
Ranked #26 on Image Generation on CelebA 64x64
no code implementations • 7 Jan 2023 • Raphael Azorin, Massimo Gallo, Alessandro Finamore, Dario Rossi, Pietro Michiardi
Some tasks may benefit from being learned together while others may be detrimental to one another.
no code implementations • 10 Jun 2022 • Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, Pietro Michiardi
Score-based diffusion models are a class of generative models whose dynamics are described by stochastic differential equations that map noise into data.
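The noise-to-data mapping rests on simulating an SDE of the form dX = f(X, t) dt + g(t) dW. A generic Euler-Maruyama simulation of such an SDE, sketched here with an assumed variance-preserving-style drift (this illustrates SDE discretization in general, not the paper's continuous-time analysis):

```python
import numpy as np

def euler_maruyama(f, g, x0, t0, t1, n_steps, rng):
    """Simulate dX = f(X, t) dt + g(t) dW with the Euler-Maruyama scheme."""
    dt = (t1 - t0) / n_steps
    x, t = np.array(x0, dtype=float), t0
    for _ in range(n_steps):
        x = x + f(x, t) * dt + g(t) * np.sqrt(dt) * rng.normal(size=x.shape)
        t += dt
    return x

# Assumed drift/diffusion: f = -0.5 * beta * x, g = sqrt(beta)
beta = 1.0
rng = np.random.default_rng(0)
x_T = euler_maruyama(lambda x, t: -0.5 * beta * x,
                     lambda t: np.sqrt(beta),
                     x0=np.ones(4), t0=0.0, t1=1.0, n_steps=100, rng=rng)
print(x_T.shape)  # (4,)
```

Generation runs a corresponding reverse-time SDE whose drift involves the learned score function; the discretization scheme is the same kind of loop.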
no code implementations • 13 Apr 2022 • Ugo Lecerf, Christelle Yemdji-Tchassi, Pietro Michiardi
When learning to act in a stochastic, partially observable environment, an intelligent agent should be prepared to anticipate a change in its belief of the environment state, and be capable of adapting its actions on-the-fly to changing conditions.
no code implementations • 11 Apr 2022 • Ugo Lecerf, Christelle Yemdji-Tchassi, Sébastien Aubert, Pietro Michiardi
When learning to behave in a stochastic environment where safety is critical, such as driving a vehicle in traffic, it is natural for human drivers to plan fallback strategies as a backup to use if ever there is an unexpected change in the environment.
no code implementations • 4 Apr 2022 • Julien Audibert, Pietro Michiardi, Frédéric Guyard, Sébastien Marti, Maria A. Zuluaga
In this work, we study the anomaly detection performance of sixteen conventional, machine learning-based, and deep neural network approaches on five real-world open datasets.
no code implementations • 21 Sep 2021 • Lucas Pascal, Pietro Michiardi, Xavier Bost, Benoit Huet, Maria A. Zuluaga
In Multi-Task Learning (MTL), it is a common practice to train multi-task networks by optimizing an objective function, which is a weighted average of the task-specific objective functions.
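The common practice referred to above reduces to a one-liner: the training objective is a weighted average of the per-task losses. A minimal sketch (the weights and losses here are placeholder values):

```python
def multitask_loss(task_losses, weights):
    """Weighted average of task-specific objectives, the standard MTL recipe."""
    assert len(task_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, task_losses)) / sum(weights)

# Two tasks: loss 2.0 weighted 1.0, loss 4.0 weighted 3.0
print(multitask_loss([2.0, 4.0], [1.0, 3.0]))  # 3.5
```

How to pick (or adapt) those weights is exactly the knob that much of the MTL literature, including this line of work, revolves around.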
no code implementations • 30 Jun 2021 • Giulio Franzese, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi
We revisit the theoretical properties of Hamiltonian stochastic differential equations (SDEs) for Bayesian posterior sampling, and we study the two types of errors that arise from numerical SDE simulation: the discretization error and the error due to noisy gradient estimates in the context of data subsampling.
1 code implementation • NeurIPS 2021 • Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V. Bonilla, Maurizio Filippone
We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization.
no code implementations • AABI Symposium 2021 • Dimitrios Milios, Pietro Michiardi, Maurizio Filippone
In this paper, we employ variational arguments to establish a connection between ensemble methods for Neural Networks and Bayesian inference.
no code implementations • 10 Nov 2020 • Gia-Lac Tran, Dimitrios Milios, Pietro Michiardi, Maurizio Filippone
In this work, we address one limitation of sparse GPs, which is due to the challenge in dealing with a large number of inducing variables without imposing a special structure on the inducing inputs.
no code implementations • 19 Oct 2020 • Graziano Mita, Maurizio Filippone, Pietro Michiardi
A large part of the literature on learning disentangled representations focuses on variational autoencoders (VAE).
2 code implementations • KDD 2020 • Julien Audibert, Pietro Michiardi, Frédéric Guyard, Sébastien Marti, Maria A. Zuluaga
Through a feasibility study using Orange's proprietary data we have been able to validate Orange's requirements on scalability, stability, robustness, training speed and high performance.
1 code implementation • 17 Jun 2020 • Lucas Pascal, Pietro Michiardi, Xavier Bost, Benoit Huet, Maria A. Zuluaga
Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance.
no code implementations • 9 Jun 2020 • Giulio Franzese, Rosa Candela, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi
In this work we define a unified mathematical framework to deepen our understanding of the role of stochastic gradient (SG) noise in the behavior of stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms.
no code implementations • 8 Jun 2020 • Dimitrios Milios, Pietro Michiardi, Maurizio Filippone
In this paper, we employ variational arguments to establish a connection between ensemble methods for Neural Networks and Bayesian inference.
2 code implementations • 16 Mar 2020 • Rosa Candela, Pietro Michiardi, Maurizio Filippone, Maria A. Zuluaga
Accurate travel product price forecasting is a highly desired feature that allows customers to make informed decisions about purchases, and companies to build and offer attractive tour packages.
Applications
no code implementations • 15 Nov 2019 • Graziano Mita, Paolo Papotti, Maurizio Filippone, Pietro Michiardi
We present a novel method - LIBRE - to learn an interpretable classifier, which materializes as a set of Boolean rules.
no code implementations • 21 Oct 2019 • Rosa Candela, Giulio Franzese, Maurizio Filippone, Pietro Michiardi
Large scale machine learning is increasingly relying on distributed optimization, whereby several machines contribute to the training process of a statistical model.
no code implementations • 28 Feb 2019 • Rémi Domingues, Pietro Michiardi, Jérémie Barlet, Maurizio Filippone
The identification of anomalies in temporal data is a core component of numerous research areas such as intrusion detection, fault prevention, genomics and fraud detection.
no code implementations • 18 Oct 2018 • Simone Rossi, Pietro Michiardi, Maurizio Filippone
Stochastic variational inference is an established way to carry out approximate Bayesian inference for deep models.
1 code implementation • NeurIPS 2018 • Dimitrios Milios, Raffaello Camoriano, Pietro Michiardi, Lorenzo Rosasco, Maurizio Filippone
In this paper, we study the problem of deriving fast and accurate classification algorithms with uncertainty quantification.
1 code implementation • 26 May 2018 • Gia-Lac Tran, Edwin V. Bonilla, John P. Cunningham, Pietro Michiardi, Maurizio Filippone
The wide adoption of Convolutional Neural Networks (CNNs) in applications where decision-making under uncertainty is fundamental has brought a great deal of attention to the ability of these models to accurately quantify the uncertainty in their predictions.
1 code implementation • 29 Nov 2016 • Francesco Pace, Daniele Venzano, Damiano Carra, Pietro Michiardi
This work addresses the problem of scheduling user-defined analytic applications, which we define as high-level compositions of frameworks, their components, and the logic necessary to carry out work.
Distributed, Parallel, and Cluster Computing
1 code implementation • ICML 2017 • Kurt Cutajar, Edwin V. Bonilla, Pietro Michiardi, Maurizio Filippone
The composition of multiple Gaussian Processes as a Deep Gaussian Process (DGP) enables a deep probabilistic nonparametric approach to flexibly tackle complex machine learning problems with sound quantification of uncertainty.
2 code implementations • 20 Jul 2009 • Matteo Dell'Amico, Pietro Michiardi, Yves Roudier
We present an in-depth analysis of the strength of almost 10,000 passwords from users of an instant messaging server in Italy.
Cryptography and Security