1 code implementation • 14 Feb 2025 • Abdelhakim Benechehab, Vasilii Feofanov, Giuseppe Paolo, Albert Thomas, Maurizio Filippone, Balázs Kégl
This study aims to tackle these critical limitations by introducing adapters: feature-space transformations that facilitate the effective use of pre-trained univariate time series FMs for multivariate tasks.
Multivariate Time Series Forecasting
Representation Learning
1 code implementation • 15 Oct 2024 • Abdelhakim Benechehab, Youssef Attia El Hili, Ambroise Odonnat, Oussama Zekri, Albert Thomas, Giuseppe Paolo, Maurizio Filippone, Ievgen Redko, Balázs Kégl
The emerging zero-shot capabilities of Large Language Models (LLMs) have led to their applications in areas extending well beyond natural language processing tasks.
no code implementations • 3 Jun 2024 • Markus Heinonen, Ba-Hien Tran, Michael Kampffmeyer, Maurizio Filippone
Introducing training-time augmentations is a key technique to enhance generalization and prepare deep neural networks against test-time corruptions.
no code implementations • 5 Feb 2024 • Abdelhakim Benechehab, Albert Thomas, Giuseppe Paolo, Maurizio Filippone, Balázs Kégl
In model-based reinforcement learning, most algorithms rely on simulating trajectories from one-step models of the dynamics learned on data.
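The one-step-model rollout loop this line of work builds on can be sketched as follows; the linear dynamics, random-interaction data, and placeholder policy below are illustrative assumptions of mine, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth dynamics s' = A s + B a (unknown to the agent).
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])
B_true = np.array([[0.0], [0.5]])

# Collect transitions (s, a, s') from random interaction with the system.
S = rng.normal(size=(500, 2))
U = rng.normal(size=(500, 1))
S_next = S @ A_true.T + U @ B_true.T + 0.01 * rng.normal(size=(500, 2))

# Fit a one-step dynamics model by least squares on [s, a] -> s'.
X = np.hstack([S, U])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)

# Simulate a trajectory by iterating the learned one-step model.
s = np.zeros(2)
trajectory = [s]
for _ in range(20):
    a = rng.normal(size=1)          # placeholder random policy
    s = np.concatenate([s, a]) @ W  # one-step prediction feeds the next step
    trajectory.append(s)

print(len(trajectory))  # 21 states, including the initial one
```

Errors of the learned model compound along such simulated trajectories, which is the issue these papers examine.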
no code implementations • 4 Feb 2024 • Edwin V. Bonilla, Pantelis Elinas, He Zhao, Maurizio Filippone, Vassili Kitsios, Terry O'Kane
Estimating the structure of a Bayesian network, in the form of a directed acyclic graph (DAG), from observational data is a statistically and computationally hard problem with essential applications in areas such as causal discovery.
no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
1 code implementation • 16 Nov 2023 • Andrew Zammit-Mangion, Michael D. Kaminski, Ba-Hien Tran, Maurizio Filippone, Noel Cressie
We propose several variants of SBNNs, most of which are able to match the finite-dimensional distribution of the target process at the selected grid better than conventional BNNs of similar complexity.
no code implementations • 9 Oct 2023 • Abdelhakim Benechehab, Giuseppe Paolo, Albert Thomas, Maurizio Filippone, Balázs Kégl
In model-based reinforcement learning (MBRL), most algorithms rely on simulating trajectories from one-step dynamics models learned on data.
1 code implementation • NeurIPS 2023 • Ba-Hien Tran, Giulio Franzese, Pietro Michiardi, Maurizio Filippone
Generative Models (GMs) have attracted considerable attention due to their tremendous success in various domains, such as computer vision, where they can generate impressively realistic images.
no code implementations • 7 Mar 2023 • Davit Gogolashvili, Matteo Zecchin, Motonobu Kanagawa, Marios Kountouris, Maurizio Filippone
Classic results show that the IW correction is needed when the model is parametric and misspecified.
1 code implementation • NeurIPS 2023 • Giulio Franzese, Giulio Corallo, Simone Rossi, Markus Heinonen, Maurizio Filippone, Pietro Michiardi
We introduce Functional Diffusion Processes (FDPs), which generalize score-based diffusion models to infinite-dimensional function spaces.
Ranked #26 on Image Generation on CelebA 64x64
no code implementations • 9 Feb 2023 • Ba-Hien Tran, Babak Shahbaba, Stephan Mandt, Maurizio Filippone
Autoencoders and their variants are among the most widely used models in representation learning and generative modeling.
no code implementations • 18 Oct 2022 • Davit Gogolashvili, Bogdan Kozyrskiy, Maurizio Filippone
We develop a novel framework to accelerate Gaussian process regression (GPR).
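For context, the cubic-cost step being accelerated is the factorization of the kernel matrix in exact GPR. The following is the textbook baseline on made-up toy data, not the paper's accelerated scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

# Toy 1-D regression data.
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

noise = 0.01
K = rbf(X, X) + noise * np.eye(100)
L = np.linalg.cholesky(K)          # O(n^3): the bottleneck to accelerate
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

X_star = np.array([[0.0]])
mean = (rbf(X_star, X) @ alpha)[0]  # posterior predictive mean at x* = 0
print(mean)                         # close to sin(0) = 0
```

Acceleration schemes replace or restructure the Cholesky step so that training scales to larger n.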
no code implementations • 10 Jun 2022 • Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, Pietro Michiardi
Score-based diffusion models are a class of generative models whose dynamics are described by stochastic differential equations that map noise into data.
1 code implementation • 12 Apr 2022 • Jonas Wacker, Maurizio Filippone
A fundamental drawback of kernel-based statistical models is their limited scalability to large data sets, which requires resorting to approximations.
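A standard instance of such an approximation is random Fourier features for the RBF kernel; the sketch below (with arbitrary sizes and lengthscale) shows how a low-dimensional feature map approximates the exact kernel matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale**2))

def random_fourier_features(X, n_features=2000, lengthscale=1.0, rng=rng):
    # Sample frequencies from the spectral density of the RBF kernel.
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(50, 3))
K_exact = rbf_kernel(X, X)
Phi = random_fourier_features(X)
K_approx = Phi @ Phi.T  # inner products of features approximate the kernel

print(np.abs(K_exact - K_approx).max())  # small approximation error
```

Working with Phi instead of the full kernel matrix reduces the cost of downstream linear algebra from cubic in the number of points to linear.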
1 code implementation • 4 Feb 2022 • Jonas Wacker, Ruben Ohana, Maurizio Filippone
Commonly used approaches avoid computing the high-dimensional tensor product explicitly, resulting in a suboptimal dependence of $\mathcal{O}(3^p)$ in the embedding dimension.
1 code implementation • 21 Jan 2022 • Jonas Wacker, Motonobu Kanagawa, Maurizio Filippone
These variance formulas elucidate conditions under which certain approximations (e.g., TensorSRHT) achieve lower variances than others (e.g., Rademacher sketches), and conditions under which the use of complex features leads to lower variances than real features.
no code implementations • 30 Jun 2021 • Giulio Franzese, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi
We revisit the theoretical properties of Hamiltonian stochastic differential equations (SDEs) for Bayesian posterior sampling, and we study the two types of errors that arise from numerical SDE simulation: the discretization error and the error due to noisy gradient estimates in the context of data subsampling.
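To make the discretization error concrete, here is a small illustration on an Ornstein-Uhlenbeck SDE (my choice of toy process, not the Hamiltonian SDEs of the paper): the bias of the Euler-Maruyama scheme in the stationary variance shrinks as the step size decreases.

```python
import numpy as np

rng = np.random.default_rng(0)

# dX = -theta X dt + sigma dW has known stationary variance sigma^2 / (2 theta),
# so the bias of the discretized simulation is easy to measure.
theta, sigma, T = 1.0, 1.0, 50.0

def euler_maruyama_var(dt, n_paths=50000):
    # Simulate many paths to time T and return the empirical variance.
    x = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    return x.var()

exact = sigma**2 / (2 * theta)  # = 0.5
errors = {dt: abs(euler_maruyama_var(dt) - exact) for dt in (0.5, 0.1, 0.02)}
for dt, err in errors.items():
    print(dt, err)  # the error shrinks as dt decreases
```

In the posterior-sampling setting the paper studies, this discretization bias combines with the extra noise from subsampled gradients.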
1 code implementation • NeurIPS 2021 • Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V. Bonilla, Maurizio Filippone
We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization.
no code implementations • 25 Nov 2020 • Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Maurizio Filippone
This poses a challenge because modern neural networks are characterized by a large number of parameters, and the choice of these priors has an uncontrolled effect on the induced functional prior, which is the distribution of the functions obtained by sampling the parameters from their prior distribution.
no code implementations • AABI Symposium 2021 • Dimitrios Milios, Pietro Michiardi, Maurizio Filippone
In this paper, we employ variational arguments to establish a connection between ensemble methods for Neural Networks and Bayesian inference.
no code implementations • AABI Symposium 2021 • Ba-Hien Tran, Dimitrios Milios, Simone Rossi, Maurizio Filippone
The Bayesian treatment of neural networks dictates that a prior distribution is considered over the weight and bias parameters of the network.
no code implementations • 10 Nov 2020 • Gia-Lac Tran, Dimitrios Milios, Pietro Michiardi, Maurizio Filippone
In this work, we address one limitation of sparse GPs, which is due to the challenge in dealing with a large number of inducing variables without imposing a special structure on the inducing inputs.
no code implementations • 19 Oct 2020 • Graziano Mita, Maurizio Filippone, Pietro Michiardi
A large part of the literature on learning disentangled representations focuses on variational autoencoders (VAE).
no code implementations • 9 Jun 2020 • Giulio Franzese, Rosa Candela, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi
In this work we define a unified mathematical framework to deepen our understanding of the role of stochastic gradient (SG) noise on the behavior of Markov chain Monte Carlo sampling (SGMCMC) algorithms.
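As a concrete instance of an SGMCMC algorithm, here is a minimal stochastic gradient Langevin dynamics (SGLD) sampler on a toy Gaussian model; the step size, batch size, and burn-in are arbitrary choices of mine. Note how the minibatch-gradient noise slightly inflates the sampled posterior spread, which is precisely the kind of effect being analyzed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from N(mu_true, 1); with a flat prior, the posterior over mu is
# N(mean(y), 1/N).
N, mu_true = 1000, 2.0
y = mu_true + rng.normal(size=N)

step, batch = 1e-4, 50
mu = 0.0
samples = []
for t in range(5000):
    idx = rng.choice(N, size=batch, replace=False)
    # Stochastic gradient of the log-posterior (rescaled minibatch sum).
    grad = (N / batch) * (y[idx] - mu).sum()
    # Langevin update: half-step on the gradient plus injected Gaussian noise.
    mu += 0.5 * step * grad + np.sqrt(step) * rng.normal()
    if t > 1000:  # discard burn-in
        samples.append(mu)

# Center is near the posterior mean; spread is slightly above 1/sqrt(N)
# because of the extra stochastic-gradient noise.
print(np.mean(samples), np.std(samples))
```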
no code implementations • 8 Jun 2020 • Dimitrios Milios, Pietro Michiardi, Maurizio Filippone
In this paper, we employ variational arguments to establish a connection between ensemble methods for Neural Networks and Bayesian inference.
2 code implementations • 16 Mar 2020 • Rosa Candela, Pietro Michiardi, Maurizio Filippone, Maria A. Zuluaga
Accurate travel product price forecasting is a highly desired feature that allows customers to make informed purchase decisions, and companies to build and offer attractive tour packages.
Applications
no code implementations • 6 Mar 2020 • Simone Rossi, Markus Heinonen, Edwin V. Bonilla, Zheyang Shen, Maurizio Filippone
Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation in Gaussian process (GP) models.
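The inducing-variable idea can be illustrated with the simpler, non-variational Nyström approximation it builds on: a small set of inducing inputs induces a low-rank surrogate for the full kernel matrix. Sizes and the random selection of inducing inputs below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, ls=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

X = rng.normal(size=(200, 2))
Z = X[rng.choice(200, size=30, replace=False)]  # inducing inputs

K_nn = rbf(X, X)
K_nm = rbf(X, Z)
K_mm = rbf(Z, Z) + 1e-8 * np.eye(30)            # jitter for stability

# Nystrom approximation: K ~= K_nm K_mm^{-1} K_mn, rank at most 30.
K_approx = K_nm @ np.linalg.solve(K_mm, K_nm.T)

print(np.abs(K_nn - K_approx).max())  # approximation error with 30 inducing points
```

Variational schemes go further by treating the inducing-point values as variational parameters, but the computational saving comes from the same low-rank structure.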
no code implementations • 29 Nov 2019 • Simone Rossi, Sebastien Marmin, Maurizio Filippone
Variational inference offers scalable and flexible tools to tackle intractable Bayesian inference of modern statistical models like Bayesian neural networks and Gaussian processes.
no code implementations • 15 Nov 2019 • Graziano Mita, Paolo Papotti, Maurizio Filippone, Pietro Michiardi
We present a novel method - LIBRE - to learn an interpretable classifier, which materializes as a set of Boolean rules.
1 code implementation • 22 Oct 2019 • Ruben Ohana, Jonas Wacker, Jonathan Dong, Sébastien Marmin, Florent Krzakala, Maurizio Filippone, Laurent Daudet
Approximating kernel functions with random features (RFs) has been a successful application of random projections for nonparametric estimation.
no code implementations • 21 Oct 2019 • Rosa Candela, Giulio Franzese, Maurizio Filippone, Pietro Michiardi
Large scale machine learning is increasingly relying on distributed optimization, whereby several machines contribute to the training process of a statistical model.
no code implementations • 6 Jun 2019 • Andrew Zammit-Mangion, Tin Lok James Ng, Quan Vu, Maurizio Filippone
Spatial processes with nonstationary and anisotropic covariance structure are often used when modelling, analysing and predicting complex environmental phenomena.
no code implementations • NeurIPS 2020 • Simone Rossi, Sebastien Marmin, Maurizio Filippone
Over-parameterized models, such as DeepNets and ConvNets, form a class of models that are routinely adopted in a wide variety of applications, and for which Bayesian inference is desirable but extremely challenging.
no code implementations • 28 Feb 2019 • Rémi Domingues, Pietro Michiardi, Jérémie Barlet, Maurizio Filippone
The identification of anomalies in temporal data is a core component of numerous research areas such as intrusion detection, fault prevention, genomics and fraud detection.
no code implementations • 29 Oct 2018 • Sébastien Marmin, Maurizio Filippone
Bayesian calibration of black-box computer models offers an established framework to obtain a posterior distribution over model parameters.
no code implementations • 18 Oct 2018 • Simone Rossi, Pietro Michiardi, Maurizio Filippone
Stochastic variational inference is an established way to carry out approximate Bayesian inference for deep models.
1 code implementation • NeurIPS 2018 • Dimitrios Milios, Raffaello Camoriano, Pietro Michiardi, Lorenzo Rosasco, Maurizio Filippone
In this paper, we study the problem of deriving fast and accurate classification algorithms with uncertainty quantification.
1 code implementation • 26 May 2018 • Gia-Lac Tran, Edwin V. Bonilla, John P. Cunningham, Pietro Michiardi, Maurizio Filippone
The wide adoption of Convolutional Neural Networks (CNNs) in applications where decision-making under uncertainty is fundamental has brought a great deal of attention to the ability of these models to accurately quantify the uncertainty in their predictions.
no code implementations • ICML 2018 • Marco Lorenzi, Maurizio Filippone
We introduce a novel generative formulation of deep probabilistic models implementing "soft" constraints on their function dynamics.
1 code implementation • NeurIPS 2019 • Christopher Nemeth, Fredrik Lindsten, Maurizio Filippone, James Hensman
In this paper, we introduce the pseudo-extended MCMC method as a simple approach for improving the mixing of the MCMC sampler for multi-modal posterior distributions.
1 code implementation • 24 Apr 2017 • Jack Fitzsimons, Diego Granziol, Kurt Cutajar, Michael Osborne, Maurizio Filippone, Stephen Roberts
The scalable calculation of matrix determinants has been a bottleneck to the widespread application of many machine learning methods such as determinantal point processes, Gaussian processes, generalised Markov random fields, graph models and many others.
no code implementations • 5 Apr 2017 • Jack Fitzsimons, Kurt Cutajar, Michael Osborne, Stephen Roberts, Maurizio Filippone
The log-determinant of a kernel matrix appears in a variety of machine learning problems, ranging from determinantal point processes and generalized Markov random fields, through to the training of Gaussian processes.
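One standard way to estimate such log-determinants without a full factorization combines the identity log det(K) = tr(log K) with Hutchinson-style stochastic trace estimation; the power-series truncation, probe count, and toy matrix below are my choices, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small SPD matrix standing in for a kernel matrix.
A = rng.normal(size=(100, 100))
K = A @ A.T / 100 + np.eye(100)

exact = np.linalg.slogdet(K)[1]

# Stochastic estimate: log det K = tr(log K), with
#   log K = log(c) I + log(I - M),  M = I - K / c,  ||M|| < 1,
# and tr(log(I - M)) = -sum_k tr(M^k)/k estimated with Rademacher probes.
c = 1.1 * np.linalg.norm(K, 2)   # scale so the series converges
M = np.eye(100) - K / c

n_probes, n_terms = 100, 60
estimate = 100 * np.log(c)
for _ in range(n_probes):
    z = rng.choice([-1.0, 1.0], size=100)  # Rademacher probe vector
    v = z.copy()
    for k in range(1, n_terms + 1):
        v = M @ v                          # v = M^k z, via matrix-vector products
        estimate -= (z @ v) / k / n_probes

print(exact, estimate)  # the two values should be close
```

The point is that the estimate only needs matrix-vector products, which is what makes it scale to large kernel matrices.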
no code implementations • 18 Oct 2016 • Karl Krauth, Edwin V. Bonilla, Kurt Cutajar, Maurizio Filippone
We investigate the capabilities and limitations of Gaussian process models by jointly exploring three complementary directions: (i) scalable and statistically efficient inference; (ii) flexible kernels; and (iii) objective functions for hyperparameter learning alternative to the marginal likelihood.
1 code implementation • ICML 2017 • Kurt Cutajar, Edwin V. Bonilla, Pietro Michiardi, Maurizio Filippone
The composition of multiple Gaussian Processes as a Deep Gaussian Process (DGP) enables a deep probabilistic nonparametric approach to flexibly tackle complex machine learning problems with sound quantification of uncertainty.
no code implementations • 7 Jul 2016 • Yufei Han, Maurizio Filippone
The cost of computing the spectrum of Laplacian matrices hinders the application of spectral clustering to large data sets.
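The costly spectral step in question can be seen in a minimal spectral-clustering sketch; the two-blob data and the sign-based split are illustrative simplifications (standard spectral clustering runs k-means on several eigenvectors).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated clusters in 2-D (a toy stand-in for a large data set).
X = np.vstack([rng.normal(0, 0.3, size=(30, 2)),
               rng.normal(2, 0.3, size=(30, 2))])

# Gaussian affinity matrix and symmetric normalized Laplacian.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
d = W.sum(1)
L = np.eye(60) - W / np.sqrt(np.outer(d, d))

# The expensive step: computing (part of) the Laplacian's spectrum.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                 # eigenvector of the second-smallest eigenvalue
labels = (fiedler > 0).astype(int)   # split the data by its sign

print(labels)  # one constant label per cluster
```

For n points, the dense eigendecomposition costs O(n^3), which is the bottleneck this paper targets.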
1 code implementation • 22 Feb 2016 • Kurt Cutajar, Michael A. Osborne, John P. Cunningham, Maurizio Filippone
Preconditioning is a common approach to alleviating this issue.
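Concretely, preconditioning here means solving kernel linear systems with preconditioned conjugate gradient (PCG). Below is a generic sketch using a simple Jacobi (diagonal) preconditioner on a synthetic system; the paper's actual preconditioners for kernel matrices differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def cg(A, b, M_inv=None, tol=1e-8, max_iter=1000):
    """Conjugate gradient; M_inv is an optional preconditioner approximating A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r if M_inv is not None else r
    p = z.copy()
    iters = 0
    while np.linalg.norm(r) > tol and iters < max_iter:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv @ r_new if M_inv is not None else r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        iters += 1
    return x, iters

# Synthetic SPD system whose ill-conditioning comes from diagonal scaling.
n = 200
scales = 10.0 ** rng.uniform(-1.5, 1.5, size=n)
C = rng.normal(size=(n, n)) / np.sqrt(n)
B = np.eye(n) + 0.1 * (C @ C.T)               # well-conditioned SPD core
A = np.sqrt(scales)[:, None] * B * np.sqrt(scales)[None, :]
b = rng.normal(size=n)

x_plain, it_plain = cg(A, b)
M_inv = np.diag(1.0 / np.diag(A))             # Jacobi (diagonal) preconditioner
x_prec, it_prec = cg(A, b, M_inv)

print(it_plain, it_prec)  # the preconditioned solve needs far fewer iterations
```

A good preconditioner clusters the spectrum of the transformed system, and CG's iteration count drops accordingly.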
no code implementations • NeurIPS 2015 • James Hensman, Alexander G. de G. Matthews, Maurizio Filippone, Zoubin Ghahramani
This paper simultaneously addresses these, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form.
no code implementations • 22 Jan 2015 • Maurizio Filippone, Raphael Engler
In applications of Gaussian processes where quantification of uncertainty is of primary interest, it is necessary to accurately characterize the posterior distribution over covariance parameters.
no code implementations • 28 Nov 2013 • Maurizio Filippone
The results empirically demonstrate that, compared to importance sampling, annealed importance sampling can reduce the variance of the estimate of the marginal likelihood exponentially in the number of data points, at a computational cost that scales only polynomially.
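A minimal sketch of the comparison: annealed importance sampling (AIS) for the normalizing constant of a wide Gaussian target starting from a narrow Gaussian proposal, a setting where plain importance sampling has (here even infinite) weight variance. The temperature schedule, Metropolis kernel, and toy densities are my choices.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 3.0
log_f0 = lambda x: -0.5 * x**2                 # N(0,1) proposal (log density up to a constant)
log_f1 = lambda x: -0.5 * x**2 / sigma**2      # unnormalized wide Gaussian target

betas = np.linspace(0, 1, 51)                  # geometric annealing schedule
n_chains = 2000

x = rng.normal(size=n_chains)                  # exact samples from the proposal
log_w = np.zeros(n_chains)
for b_prev, b in zip(betas[:-1], betas[1:]):
    # Accumulate the incremental importance weights.
    log_w += (b - b_prev) * (log_f1(x) - log_f0(x))
    # One Metropolis step targeting the current intermediate density.
    log_pi = lambda s: (1 - b) * log_f0(s) + b * log_f1(s)
    prop = x + rng.normal(scale=1.0, size=n_chains)
    accept = np.log(rng.uniform(size=n_chains)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

# AIS estimates Z_target / Z_proposal; Z_proposal = sqrt(2 pi).
Z_est = np.exp(log_w).mean() * np.sqrt(2 * np.pi)
print(Z_est, sigma * np.sqrt(2 * np.pi))  # estimate vs. the exact normalizer
```

Each small temperature increment keeps the incremental weights well behaved, which is what tames the variance relative to a single-step importance sampler.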
no code implementations • 2 Oct 2013 • Maurizio Filippone, Mark Girolami
The main challenges that arise when adopting Gaussian Process priors in probabilistic modeling are how to carry out exact Bayesian inference and how to account for uncertainty on model parameters when making model-based predictions on out-of-sample data.