1 code implementation • 5 Dec 2024 • Luis A. Ortega, Simón Rodríguez-Santana, Daniel Hernández-Lobato
Recently, there has been increasing interest in post-hoc uncertainty estimation for the predictions of pre-trained deep neural networks (DNNs).
1 code implementation • 25 Nov 2024 • Daniel Fernández-Sánchez, Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
AES is based on the $\alpha$-divergence, which generalizes the KL divergence.
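For reference, one standard form of the $\alpha$-divergence (Amari's parameterization; the exact variant used by AES may differ) is

\[
D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)} \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right),
\]

which recovers $\mathrm{KL}(p \,\|\, q)$ as $\alpha \to 1$ and $\mathrm{KL}(q \,\|\, p)$ as $\alpha \to 0$.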
no code implementations • 27 Oct 2023 • Francisco Javier Sáez-Maldonado, Juan Maroñas, Daniel Hernández-Lobato
In this work, we propose a generalization of TGPs named Deep Transformed Gaussian Processes (DTGPs), which follows the trend of concatenating layers of stochastic processes.
1 code implementation • 24 Feb 2023 • Luis A. Ortega, Simón Rodríguez Santana, Daniel Hernández-Lobato
Specifically, its training cost is independent of the number of training points.
no code implementations • 21 Jul 2022 • Simón Rodríguez Santana, Luis A. Ortega, Daniel Hernández-Lobato, Bryan Zaldívar
Model selection in machine learning (ML) is a crucial part of the Bayesian learning procedure.
1 code implementation • 14 Jun 2022 • Luis A. Ortega, Simón Rodríguez Santana, Daniel Hernández-Lobato
This generalization is similar to that of deep GPs over GPs, but it is more flexible due to the use of IPs as the prior distribution over the latent functions.
no code implementations • 30 May 2022 • Juan Maroñas, Daniel Hernández-Lobato
ETGPs exploit the recently proposed Transformed Gaussian Process (TGP), a stochastic process specified by transforming a Gaussian Process using an invertible transformation.
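A minimal sketch of that construction (the kernel and the particular invertible map below are illustrative choices, not necessarily the paper's): draw a sample path from a GP prior and warp its values with an invertible transformation.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100)[:, None]

# Draw a sample path f ~ GP(0, k); small jitter keeps K positive definite.
K = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))
f = rng.multivariate_normal(np.zeros(len(X)), K)

# Warp it with an invertible map; here a sinh-arcsinh transform
# (an illustrative choice, not necessarily the one used in the paper).
skew, tail = 0.5, 1.2
g = np.sinh(tail * np.arcsinh(f) + skew)  # sample from the transformed process
```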
1 code implementation • 10 Apr 2022 • Bahram Jafrasteh, Daniel Hernández-Lobato, Simón Pedro Lubián-López, Isabel Benavente-Fernández
Sparse GPs can be used to compute a predictive distribution for missing data.
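A minimal sketch of such a predictive distribution, assuming a DTC-style sparse approximation with $M$ inducing points (an illustrative stand-in for the paper's exact model):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def sparse_gp_predict(X, y, Z, Xs, noise=0.1):
    """DTC predictive mean/variance at test inputs Xs, with inducing inputs Z."""
    Kuu = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
    Kuf = rbf(Z, X)
    Ksu = rbf(Xs, Z)
    A = Kuu + Kuf @ Kuf.T / noise**2            # Kuu + noise^-2 Kuf Kfu
    mean = Ksu @ (np.linalg.solve(A, Kuf @ y) / noise**2)
    Kss = rbf(Xs, Xs)
    Qss = Ksu @ np.linalg.solve(Kuu, Ksu.T)
    var = np.diag(Kss - Qss + Ksu @ np.linalg.solve(A, Ksu.T)) + noise**2
    return mean, var

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 15)[:, None]           # inducing inputs
Xmiss = np.array([[0.5], [2.0]])              # inputs whose outputs are missing
mu, var = sparse_gp_predict(X, y, Z, Xmiss)   # predictive distribution there
```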
1 code implementation • 7 Apr 2022 • Daniel Heestermans Svendsen, Daniel Hernández-Lobato, Luca Martino, Valero Laparra, Alvaro Moreno, Gustau Camps-Valls
Radiative transfer models (RTMs) encode the energy transfer through the atmosphere and are used to model and understand the Earth system, as well as to estimate, by inverse modeling, the parameters that describe the status of the Earth from satellite observations.
1 code implementation • 14 Oct 2021 • Simón Rodríguez Santana, Bryan Zaldivar, Daniel Hernández-Lobato
The result is a scalable method for approximate inference with IPs that can tune the prior IP parameters to the data, and that provides accurate non-Gaussian predictive distributions.
no code implementations • 15 Jul 2021 • Bahram Jafrasteh, Carlos Villacampa-Calvo, Daniel Hernández-Lobato
For this, we use a neural network that receives the observed data as input and outputs the inducing-point locations and the parameters of $q$.
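A minimal sketch of the amortization idea, assuming a deep-set style network (the architecture, and restricting the output to inducing-point locations only, are illustrative simplifications):

```python
import torch
import torch.nn as nn

class AmortizedInducingPoints(nn.Module):
    """Map a dataset (N, D) to M inducing-point locations (M, D).

    Deep-set style: encode each point, mean-pool for permutation
    invariance, then decode M points. Purely illustrative; the
    paper's architecture may differ.
    """
    def __init__(self, input_dim, num_inducing, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.decoder = nn.Linear(hidden, num_inducing * input_dim)
        self.num_inducing, self.input_dim = num_inducing, input_dim

    def forward(self, X):
        pooled = self.encoder(X).mean(dim=0)  # permutation-invariant summary
        return self.decoder(pooled).view(self.num_inducing, self.input_dim)

X = torch.randn(500, 3)                       # observed inputs
Z = AmortizedInducingPoints(input_dim=3, num_inducing=20)(X)
```

An additional output head could emit the variational parameters of $q$ in the same way.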
no code implementations • ICLR 2021 • Pablo Morales-Alvarez, Daniel Hernández-Lobato, Rafael Molina, José Miguel Hernández-Lobato
Current approaches for uncertainty estimation in deep learning often produce overconfident results.
1 code implementation • 2 Nov 2020 • Daniel Fernández-Sánchez, Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
MESMOC+ is also competitive with other information-based methods for constrained multi-objective Bayesian optimization, but it is significantly faster.
1 code implementation • 1 Apr 2020 • Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
This article introduces PPESMOC, Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints, an information-based batch method for the simultaneous optimization of multiple expensive-to-evaluate black-box functions under the presence of several constraints.
1 code implementation • 28 Jan 2020 • Carlos Villacampa-Calvo, Bryan Zaldivar, Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
The results obtained show that, although the classification error is similar across methods, the predictive distribution of the proposed methods is better, in terms of the test log-likelihood, than that of a classifier based on GPs that ignores input noise.
1 code implementation • 13 Sep 2019 • Simón Rodríguez Santana, Daniel Hernández-Lobato
Estimating the uncertainty in a model's predictions is critical in many applications, and one way to obtain this information is to follow a Bayesian approach and estimate a posterior distribution over the model parameters.
no code implementations • 28 Jun 2018 • Irene Córdoba, Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato, Concha Bielza, Pedro Larrañaga
We show that the parameters found by a BO method outperform both those found by a random search strategy and those given by the expert recommendation.
1 code implementation • 9 May 2018 • Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
We show that this can lead to problems in the optimization process and describe a more principled approach to account for input variables that are categorical or integer-valued.
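A minimal sketch of that approach as described, with rounding applied inside the covariance function so the surrogate is constant between consecutive integer values (integer variables only here; categorical variables are handled analogously through a one-hot encoding):

```python
import numpy as np

def transform_inputs(X, int_dims):
    """Round integer-valued dimensions before the kernel sees them,
    so the GP surrogate is flat between valid integer configurations."""
    Xt = X.copy()
    Xt[:, int_dims] = np.round(Xt[:, int_dims])
    return Xt

def rbf_mixed(X1, X2, int_dims, ls=1.0):
    A, B = transform_inputs(X1, int_dims), transform_inputs(X2, int_dims)
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

# dim 0 continuous, dim 1 integer-valued
X = np.array([[0.3, 1.9], [0.3, 2.1]])
K = rbf_mixed(X, X, int_dims=[1])  # both rows map to integer value 2, so K = 1
```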
no code implementations • ICML 2017 • Carlos Villacampa-Calvo, Daniel Hernández-Lobato
Furthermore, extra assumptions in the approximate inference process make the memory cost independent of $N$.
1 code implementation • 12 Jun 2017 • Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
We show that this can lead to problems in the optimization process and describe a more principled approach to account for input variables that are integer-valued.
no code implementations • 5 Sep 2016 • Eduardo C. Garrido-Merchán, Daniel Hernández-Lobato
This work presents PESMOC, Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints, an information-based strategy for the simultaneous optimization of multiple expensive-to-evaluate black-box functions under the presence of several constraints.
no code implementations • 12 Feb 2016 • Thang D. Bui, Daniel Hernández-Lobato, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
no code implementations • 17 Nov 2015 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Amar Shah, Ryan P. Adams
The results show that PESMO produces better recommendations with a smaller number of evaluations of the objectives, and that a decoupled evaluation can lead to improvements in performance, particularly when the number of objectives is large.
no code implementations • 11 Nov 2015 • Thang D. Bui, José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Richard E. Turner
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
no code implementations • 10 Nov 2015 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Yingzhen Li, Thang Bui, Richard E. Turner
A method for large-scale Gaussian process classification based on expectation propagation (EP) has recently been proposed.
3 code implementations • 10 Nov 2015 • José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang Bui, Richard E. Turner
Black-box alpha (BB-$\alpha$) is a new approximate inference method based on the minimization of $\alpha$-divergences.
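A commonly cited form of the BB-$\alpha$ energy (sign and scaling conventions vary across presentations) is

\[
\mathcal{L}_\alpha(q) = \mathrm{KL}(q \,\|\, p_0) - \frac{1}{\alpha} \sum_{n=1}^{N} \log \mathbb{E}_{q(\theta)}\!\left[ p(y_n \mid \theta)^{\alpha} \right],
\]

which recovers the negative variational ELBO in the limit $\alpha \to 0$.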
no code implementations • 16 Jul 2015 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato
Variational methods have been recently considered for scaling the training process of Gaussian process classifiers to large datasets.
no code implementations • 16 Sep 2014 • Daniel Hernández-Lobato, Pablo Morales-Mombiela, David Lopez-Paz, Alberto Suárez
The problem of non-linear causal inference is addressed by performing an embedding in an expanded feature space, in which the relation between causes and effects can be assumed to be linear.
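A minimal sketch of the embed-then-fit-linearly idea, with random Fourier features standing in for the expanded feature space (an illustrative choice; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 300)
y = np.tanh(2 * x) + 0.05 * rng.standard_normal(300)  # non-linear cause -> effect

# Random Fourier features approximate an RBF-kernel embedding,
# in which the x -> y relation becomes (approximately) linear.
W = rng.standard_normal((1, 50))
b = rng.uniform(0, 2 * np.pi, 50)
Phi = np.sqrt(2.0 / 50) * np.cos(x[:, None] @ W + b)

beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear fit in feature space
residuals = y - Phi @ beta                      # residuals analyzed downstream
                                                # to decide the causal direction
```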
no code implementations • NeurIPS 2014 • Daniel Hernández-Lobato, Viktoriia Sharmanska, Kristian Kersting, Christoph H. Lampert, Novi Quadrianto
That is, in contrast to the standard GPC setting, the latent function is not just a nuisance but a feature: it becomes a natural measure of confidence about the training data by modulating the slope of the GPC sigmoid likelihood function.
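A tiny numerical illustration of that slope effect (hypothetical values; in the model the scaling is driven by the latent noise process):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

f = 1.0                          # latent function value at some input
for scale in [0.5, 1.0, 4.0]:    # larger slope => steeper sigmoid => more confidence
    print(scale, sigmoid(scale * f))
# 0.5 -> 0.62, 1.0 -> 0.73, 4.0 -> 0.98: same decision, increasing confidence
```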
no code implementations • NeurIPS 2013 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato
Because estimating feature selection dependencies may suffer from over-fitting in the proposed model, additional data from a multi-task learning scenario are considered for induction.
no code implementations • NeurIPS 2013 • José Miguel Hernández-Lobato, James Robert Lloyd, Daniel Hernández-Lobato
The estimation of dependencies between multiple variables is a central problem in the analysis of financial time series.
no code implementations • NeurIPS 2011 • Daniel Hernández-Lobato, José M. Hernández-Lobato, Pierre Dupont
When no noise is injected in the labels, RMGPC still performs as well as or better than the other methods.