no code implementations • 23 Nov 2023 • Marc Schachtsiek, Simone Rossi, Thomas Hannagan
The more balanced labels increase minority class performance, which in turn allows the model to outperform the previous baseline by 0.6, 1.7, and 2.4 mIoU for budgets of 5%, 10%, and 20%, respectively.
no code implementations • 5 Jul 2023 • Marshall Davey, Charles Puelz, Simone Rossi, Margaret Anne Smith, David R. Wells, Greg Sturgeon, W. Paul Segars, John P. Vavalle, Charles S. Peskin, Boyce E. Griffith
Here we introduce and benchmark a comprehensive mathematical model of cardiac fluid dynamics in the human heart.
1 code implementation • NeurIPS 2023 • Giulio Franzese, Giulio Corallo, Simone Rossi, Markus Heinonen, Maurizio Filippone, Pietro Michiardi
We introduce Functional Diffusion Processes (FDPs), which generalize score-based diffusion models to infinite-dimensional function spaces.
Ranked #21 on Image Generation on CelebA 64x64
no code implementations • 10 Jun 2022 • Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, Pietro Michiardi
Score-based diffusion models are a class of generative models whose dynamics is described by stochastic differential equations that map noise into data.
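To make this noise-to-data mapping concrete, the forward (noising) process can be illustrated by simulating a variance-preserving SDE with Euler-Maruyama steps. This is a generic textbook sketch, not the paper's implementation; the schedule bounds `beta_min` and `beta_max` are illustrative assumptions:

```python
import numpy as np

def forward_vp_sde(x0, n_steps=1000, beta_min=0.1, beta_max=20.0, seed=0):
    """Euler-Maruyama simulation of the variance-preserving forward SDE
    dx = -0.5 * beta(t) * x dt + sqrt(beta(t)) dW, which gradually maps
    data toward an approximately standard Gaussian."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = np.array(x0, dtype=float)
    for i in range(n_steps):
        t = i * dt
        beta_t = beta_min + t * (beta_max - beta_min)  # linear noise schedule
        drift = -0.5 * beta_t * x
        diffusion = np.sqrt(beta_t)
        x = x + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

x0 = np.full(5, 3.0)   # a "data" vector far from zero
xT = forward_vp_sde(x0)
print(xT)              # roughly standard-normal samples at the final time
```

Generative modeling then amounts to learning the score of the noised marginals and running the time-reversed SDE from noise back to data.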
1 code implementation • NeurIPS 2021 • Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V. Bonilla, Maurizio Filippone
We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization.
no code implementations • 25 Nov 2020 • Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Maurizio Filippone
This poses a challenge because modern neural networks are characterized by a large number of parameters, and the choice of these priors has an uncontrolled effect on the induced functional prior, which is the distribution of the functions obtained by sampling the parameters from their prior distribution.
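The induced functional prior described above is easy to visualize with a sketch: draw weight samples from a Gaussian prior over a small network's parameters and record the resulting functions. This is only an illustration of the concept, with hypothetical width and scale choices, not the authors' method:

```python
import numpy as np

def sample_functions_from_weight_prior(x, n_samples=5, width=50, w_std=1.0, seed=0):
    """Draw functions from the prior induced by a Gaussian prior over the
    weights of a one-hidden-layer tanh network:
    f(x) = W2 @ tanh(W1 x + b1) + b2."""
    rng = np.random.default_rng(seed)
    fs = []
    for _ in range(n_samples):
        W1 = rng.normal(0.0, w_std, size=(width, 1))
        b1 = rng.normal(0.0, w_std, size=(width, 1))
        # scale output weights by 1/sqrt(width) so the function variance
        # stays bounded as the layer grows
        W2 = rng.normal(0.0, w_std / np.sqrt(width), size=(1, width))
        b2 = rng.normal(0.0, w_std)
        f = W2 @ np.tanh(W1 @ x[None, :] + b1) + b2
        fs.append(f.ravel())
    return np.stack(fs)

x = np.linspace(-3.0, 3.0, 50)
fs = sample_functions_from_weight_prior(x)
```

Each row of `fs` is one function sampled from the prior; changing `w_std` visibly changes the smoothness and amplitude of these draws, which is exactly the "uncontrolled effect" on the functional prior that the parameter-space prior induces.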
no code implementations • AABI Symposium 2021 • Ba-Hien Tran, Dimitrios Milios, Simone Rossi, Maurizio Filippone
The Bayesian treatment of neural networks dictates that a prior distribution is considered over the weight and bias parameters of the network.
no code implementations • 6 Mar 2020 • Simone Rossi, Markus Heinonen, Edwin V. Bonilla, Zheyang Shen, Maurizio Filippone
Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation in Gaussian process (GP) models.
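A minimal sketch of why inducing variables buy scalability: with M inducing inputs the dominant cost drops from O(N^3) to O(N M^2). The Subset-of-Regressors predictive mean below is a simple stand-in for that family of approximations, not the specific variational scheme studied in the paper; the kernel and noise settings are illustrative:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def sor_predict(X, y, Z, Xstar, noise=0.1):
    """Subset-of-Regressors predictive mean with M inducing inputs Z:
    only an M x M system is solved, instead of an N x N one."""
    Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))   # jitter for stability
    Kxz = rbf(X, Z)
    Ksz = rbf(Xstar, Z)
    A = Kxz.T @ Kxz + noise**2 * Kzz
    return Ksz @ np.linalg.solve(A, Kxz.T @ y)

X = np.linspace(0.0, 6.0, 100)
y = np.sin(X)
Z = np.linspace(0.0, 6.0, 10)      # 10 inducing inputs summarize 100 points
pred = sor_predict(X, y, Z, np.array([1.5]))
```

With well-placed inducing inputs the prediction stays close to the exact GP fit; variational formulations go further by optimizing the inducing locations and a distribution over the inducing variables.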
no code implementations • 29 Nov 2019 • Simone Rossi, Sebastien Marmin, Maurizio Filippone
Variational inference offers scalable and flexible tools to tackle intractable Bayesian inference of modern statistical models like Bayesian neural networks and Gaussian processes.
no code implementations • NeurIPS 2020 • Simone Rossi, Sebastien Marmin, Maurizio Filippone
Over-parameterized models, such as DeepNets and ConvNets, form a class of models that are routinely adopted in a wide variety of applications, and for which Bayesian inference is desirable but extremely challenging.
no code implementations • 18 Oct 2018 • Simone Rossi, Pietro Michiardi, Maurizio Filippone
Stochastic variational inference is an established way to carry out approximate Bayesian inference for deep models.
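The core mechanics can be sketched on a toy model: a Gaussian variational posterior trained by stochastic gradients through the reparameterization trick. This is a generic illustration of stochastic variational inference, with hypothetical learning-rate settings, not the procedure proposed in the paper:

```python
import numpy as np

def svi_gaussian_mean(data, n_iters=2000, lr=0.01, seed=0):
    """SVI for the mean mu of a N(mu, 1) likelihood with a N(0, 1) prior,
    using a N(m, s^2) variational posterior and the reparameterization
    trick mu = m + s * eps, eps ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    m, log_s = 0.0, 0.0
    for _ in range(n_iters):
        eps = rng.standard_normal()
        s = np.exp(log_s)
        mu = m + s * eps
        # gradient of -log p(data|mu) - log p(mu) w.r.t. mu
        dmu = -(data - mu).sum() + mu
        grad_m = dmu                         # chain rule: d mu / d m = 1
        grad_log_s = dmu * s * eps - 1.0     # pathwise term + entropy gradient
        m -= lr * grad_m
        log_s -= lr * grad_log_s
        lr *= 0.999                          # decay to damp gradient noise
    return m, np.exp(log_s)

# For N observations equal to 2.0, the exact posterior is
# N(sum(data) / (N + 1), 1 / (N + 1)); SVI should land close to it.
m, s = svi_gaussian_mean(np.full(20, 2.0))
```

The same recipe scales to deep models because the noisy gradient estimates can also be computed on mini-batches of data, which is what makes the approach practical in that setting.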