no code implementations • 30 May 2023 • Zongyu Guo, Gergely Flamich, Jiajun He, Zhibo Chen, José Miguel Hernández-Lobato
Many common types of data can be represented as functions that map coordinates to signal values, such as pixel locations to RGB values in the case of an image.
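As a minimal sketch of this functional view of data, the toy MLP below maps 2-D pixel coordinates to RGB values; the architecture and all names here are illustrative assumptions, not the model proposed in the paper.

    import torch
    import torch.nn as nn

    # Toy coordinate-to-signal network: (x, y) -> (R, G, B).
    net = nn.Sequential(
        nn.Linear(2, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 3), nn.Sigmoid(),   # RGB values in [0, 1]
    )

    # Evaluate the function on a 32x32 grid of pixel locations.
    ys, xs = torch.meshgrid(torch.linspace(0, 1, 32),
                            torch.linspace(0, 1, 32), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    image = net(coords).reshape(32, 32, 3)  # the "image" is the net queried on a grid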
1 code implementation • 20 Feb 2023 • Riccardo Barbano, Javier Antorán, Johannes Leuschner, José Miguel Hernández-Lobato, Željko Kereta, Bangti Jin
The deep image prior (DIP) is a state-of-the-art unsupervised approach for solving linear inverse problems in imaging.
1 code implementation • 26 Jan 2023 • Vincent Stimper, David Liu, Andrew Campbell, Vincent Berenz, Lukas Ryll, Bernhard Schölkopf, José Miguel Hernández-Lobato
It allows users to build normalizing flow models from a suite of base distributions, flow layers, and neural networks.
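A minimal sketch of this compositional idea, written in plain PyTorch rather than the package's own API (the class and layer names below are assumptions for illustration):

    import torch
    import torch.nn as nn

    class AffineFlowLayer(nn.Module):
        """y = x * exp(s) + t, with learnable elementwise scale and shift."""
        def __init__(self, dim):
            super().__init__()
            self.s = nn.Parameter(torch.zeros(dim))
            self.t = nn.Parameter(torch.zeros(dim))

        def forward(self, x):
            y = x * torch.exp(self.s) + self.t
            log_det = self.s.sum()          # log |det dy/dx|
            return y, log_det

    class Flow(nn.Module):
        """A flow = a base distribution plus a stack of invertible layers."""
        def __init__(self, base, layers):
            super().__init__()
            self.base, self.layers = base, nn.ModuleList(layers)

        def sample(self, n):
            x = self.base.sample((n,))
            for layer in self.layers:
                x, _ = layer(x)
            return x

    base = torch.distributions.Normal(torch.zeros(2), torch.ones(2))
    model = Flow(base, [AffineFlowLayer(2) for _ in range(3)])
    samples = model.sample(16)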
1 code implementation • 10 Oct 2022 • Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato
Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method.
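For intuition, here is the textbook Gaussian posterior of a linear(ised) model with Gaussian noise and prior, the object at the heart of the linearised Laplace method; this is a generic sketch, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.normal(size=(100, 5))      # features, or the Jacobian of a linearised NN
    y = J @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
    noise_var, prior_prec = 0.01, 1.0

    # Posterior precision and mean: Lambda = J^T J / sigma^2 + lambda * I
    Lam = J.T @ J / noise_var + prior_prec * np.eye(5)
    Sigma = np.linalg.inv(Lam)                 # posterior covariance
    mean = Sigma @ J.T @ y / noise_var         # posterior mean

    # Predictive variance at a new input x: x^T Sigma x + sigma^2
    x = rng.normal(size=5)
    pred_var = x @ Sigma @ x + noise_var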
2 code implementations • 3 Aug 2022 • Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, Bernhard Schölkopf, José Miguel Hernández-Lobato
Normalizing flows are tractable density models that can approximate complicated target distributions, e.g., Boltzmann distributions of physical systems.
1 code implementation • 11 Jul 2022 • Riccardo Barbano, Johannes Leuschner, Javier Antorán, Bangti Jin, José Miguel Hernández-Lobato
We investigate adaptive design based on a single sparse pilot scan for generating effective scanning strategies for computed tomography reconstruction.
no code implementations • 17 Jun 2022 • Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato
The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community.
1 code implementation • 5 May 2022 • Wenlin Chen, Austin Tripp, José Miguel Hernández-Lobato
We propose Adaptive Deep Kernel Fitting with Implicit Function Theorem (ADKF-IFT), a novel framework for learning deep kernel Gaussian processes (GPs) by interpolating between meta-learning and conventional deep kernel learning.
2 code implementations • 28 Feb 2022 • Javier Antorán, Riccardo Barbano, Johannes Leuschner, José Miguel Hernández-Lobato, Bangti Jin
Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment.
1 code implementation • 9 Feb 2022 • Ignacio Peis, Chao Ma, José Miguel Hernández-Lobato
Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing data imputation and supervised learning with missing features.
no code implementations • NeurIPS Workshop ICBINB 2021 • Chelsea Murray, James U. Allingham, Javier Antorán, José Miguel Hernández-Lobato
Farquhar et al. [2021] show that correcting for active learning bias with underparameterised models leads to improved downstream performance.
no code implementations • 13 Dec 2021 • Chelsea Murray, James U. Allingham, Javier Antorán, José Miguel Hernández-Lobato
In active learning, the size and complexity of the training dataset changes over time.
no code implementations • NeurIPS 2021 • Chao Ma, José Miguel Hernández-Lobato
In this paper, we propose a new solution to this problem called Functional Variational Inference (FVI).
1 code implementation • Approximate Inference (AABI) Symposium 2022 • Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, José Miguel Hernández-Lobato
Normalizing flows are flexible, parameterized distributions that can be used to approximate expectations from intractable distributions via importance sampling.
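To make the role of importance sampling concrete, here is a generic self-normalised importance sampling estimate with a tractable proposal; the Gaussian proposal stands in for a trained flow, and all specifics are illustrative rather than the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_p_unnorm(x):               # unnormalised target, e.g. a Boltzmann factor
        return -0.5 * ((x - 2.0) ** 2)

    # Proposal: samples and log-density must both be available, as for a flow.
    x = rng.normal(loc=0.0, scale=2.0, size=10_000)
    log_q = -0.5 * ((x / 2.0) ** 2) - np.log(2.0 * np.sqrt(2 * np.pi))

    log_w = log_p_unnorm(x) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                       # self-normalised importance weights

    estimate = np.sum(w * x)           # E_p[x], close to 2.0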
no code implementations • Approximate Inference (AABI) Symposium 2022 • Riccardo Barbano, Javier Antoran, José Miguel Hernández-Lobato, Bangti Jin
The deep image prior regularises under-specified image reconstruction problems by reparametrising the target image as the output of a CNN.
no code implementations • Approximate Inference (AABI) Symposium 2022 • Javier Antoran, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato
We show that for neural networks (NNs) with normalisation layers, i.e., batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.
1 code implementation • 29 Oct 2021 • Vincent Stimper, Bernhard Schölkopf, José Miguel Hernández-Lobato
Normalizing flows are a popular class of models for approximating probability distributions.
Ranked #42 on Image Generation on CIFAR-10 (bits/dimension metric)
1 code implementation • 29 Oct 2021 • Miguel García-Ortegón, Gregor N. C. Simm, Austin J. Tripp, José Miguel Hernández-Lobato, Andreas Bender, Sergio Bacallado
The field of machine learning for drug discovery is witnessing an explosion of novel methods.
1 code implementation • ICLR 2022 • Ross M. Clarke, Elre T. Oldewage, José Miguel Hernández-Lobato
Machine learning training methods depend heavily and intricately on hyperparameters, motivating automated strategies for their optimisation.
no code implementations • 12 Oct 2021 • Biwei Huang, Chaochao Lu, Liu Leqi, José Miguel Hernández-Lobato, Clark Glymour, Bernhard Schölkopf, Kun Zhang
Perceived signals in real-world scenarios are usually high-dimensional and noisy. Finding and using a representation that contains the essential and sufficient information required by downstream decision-making tasks helps improve computational efficiency and generalization ability in those tasks.
no code implementations • ICLR 2022 • Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf
Extensive experiments on both synthetic and real-world datasets show that our approach outperforms a variety of baseline methods.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Austin Tripp, Gregor N. C. Simm, José Miguel Hernández-Lobato
De novo molecular design is a thriving research area in machine learning (ML) that lacks ubiquitous, high-quality, standardized benchmark tasks.
1 code implementation • NeurIPS 2021 • Pascal Notin, José Miguel Hernández-Lobato, Yarin Gal
Optimization in the latent space of variational autoencoders is a promising approach to generate high-dimensional discrete objects that maximize an expensive black-box property (e.g., drug-likeness in molecular generation, function approximation with arithmetic expressions).
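A minimal sketch of the underlying latent-space optimisation loop: the property predictor below is a stand-in (the paper's contribution concerns the uncertainty of such steps), and decode is a hypothetical decoder.

    import torch

    latent_dim = 8
    predictor = torch.nn.Sequential(torch.nn.Linear(latent_dim, 32),
                                    torch.nn.Tanh(),
                                    torch.nn.Linear(32, 1))

    # Gradient ascent on the predicted property over the latent code z.
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(100):
        opt.zero_grad()
        loss = -predictor(z).sum()     # maximise the surrogate property
        loss.backward()
        opt.step()
    # z now (locally) maximises the surrogate; decode(z) would yield the object.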
no code implementations • 12 Apr 2021 • Angus Lamb, Evgeny Saveliev, Yingzhen Li, Sebastian Tschiatschek, Camilla Longden, Simon Woodhead, José Miguel Hernández-Lobato, Richard E. Turner, Pashmina Cameron, Cheng Zhang
While deep learning has obtained state-of-the-art results in many applications, the adaptation of neural network architectures to incorporate new output features remains a challenge, as neural networks are commonly trained to produce a fixed output dimension.
1 code implementation • 5 Feb 2021 • Wenbo Gong, Kaibo Zhang, Yingzhen Li, José Miguel Hernández-Lobato
First, we provide theoretical results stating that the requirement of using optimal slicing directions in the kernelized version of SSD can be relaxed, validating the resulting discrepancy with finite random slicing directions.
no code implementations • ICLR 2021 • Pablo Morales-Alvarez, Daniel Hernández-Lobato, Rafael Molina, José Miguel Hernández-Lobato
Current approaches for uncertainty estimation in deep learning often produce overconfident results.
no code implementations • 1 Jan 2021 • Andrew Campbell, Wenlong Chen, Vincent Stimper, José Miguel Hernández-Lobato, Yichuan Zhang
Existing approaches for automating this task either optimise a proxy for mixing speed or consider the HMC chain as an implicit variational distribution and optimise a tractable lower bound that is too loose to be useful in practice.
no code implementations • 1 Jan 2021 • Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf
As an alternative, we propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers).
1 code implementation • NeurIPS 2020 • John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin H. S. Segler, José Miguel Hernández-Lobato
When designing new molecules with particular properties, it is not only important what to make but crucially how to make it.
no code implementations • 16 Dec 2020 • Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, Bernhard Schölkopf
We propose counterfactual RL algorithms to learn both population-level and individual-level policies.
no code implementations • 2 Dec 2020 • Weijie He, Xiaohao Mao, Chao Ma, Yu Huang, José Miguel Hernández-Lobato, Ting Chen
To address the challenge, we propose a non-RL Bipartite Scalable framework for Online Disease diAgnosis, called BSODA.
1 code implementation • ICLR 2021 • Gregor N. C. Simm, Robert Pinsler, Gábor Csányi, José Miguel Hernández-Lobato
Automating molecular design using deep reinforcement learning (RL) has the potential to greatly accelerate the search for novel materials.
2 code implementations • 28 Oct 2020 • Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato
In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.
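A rough sketch of this two-step recipe, with a randomly chosen subnetwork and the linear-model Laplace formula standing in for the full computation; everything here is illustrative, not the paper's code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_weights, n_data = 50, 200
    J = rng.normal(size=(n_data, n_weights))  # Jacobian of NN outputs at the MAP
    noise_var, prior_prec = 0.1, 1.0

    # Step 1 (assumed done): all weights are fixed at their MAP estimate.
    # Step 2: infer a full-covariance Gaussian over a small subnetwork only.
    subnet = np.sort(rng.choice(n_weights, size=5, replace=False))
    J_s = J[:, subnet]

    Lam_s = J_s.T @ J_s / noise_var + prior_prec * np.eye(len(subnet))
    Sigma_s = np.linalg.inv(Lam_s)            # 5x5: cheap even for large networks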
1 code implementation • NeurIPS 2020 • Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato
Variational Autoencoders (VAEs) have seen widespread use in learned image compression.
no code implementations • Approximate Inference (AABI) Symposium 2021 • Erik Daxberger, Eric Nalisnick, James Allingham, Javier Antoran, José Miguel Hernández-Lobato
In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork.
no code implementations • 23 Jul 2020 • Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E. Turner, Richard G. Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang
In this competition, participants will focus on the students' answer records to these multiple-choice diagnostic questions, with the aim of 1) accurately predicting which answers the students provide; 2) accurately predicting which questions have high quality; and 3) determining a personalized sequence of questions for each student that best predicts the student's answers.
no code implementations • 16 Jul 2020 • Luke Harries, Rebekah Storan Clarke, Timothy Chapman, Swamy V. P. L. N. Nallamalli, Levent Ozgur, Shuktika Jain, Alex Leung, Steve Lim, Aaron Dietrich, José Miguel Hernández-Lobato, Tom Ellis, Cheng Zhang, Kamil Ciosek
Efficient software testing is essential for productive software development and reliable user experiences.
1 code implementation • ICLR 2021 • Wenbo Gong, Yingzhen Li, José Miguel Hernández-Lobato
Kernelized Stein discrepancy (KSD), though being extensively used in goodness-of-fit tests and model learning, suffers from the curse-of-dimensionality.
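For reference, a small NumPy sketch of the (unsliced) KSD U-statistic for a 1-D standard normal with an RBF kernel; these are the standard formulas, while the paper's contribution is the sliced variant that tackles high dimensions.

    import numpy as np

    def ksd(x, h=1.0):
        s = -x                                  # score of N(0,1): d/dx log p(x)
        d = x[:, None] - x[None, :]
        k = np.exp(-d**2 / (2 * h**2))
        dkx = -d / h**2 * k                     # dk/dx_i
        dky = d / h**2 * k                      # dk/dx_j
        dkxy = (1 / h**2 - d**2 / h**4) * k     # d^2 k / dx_i dx_j
        u = (s[:, None] * s[None, :] * k
             + s[:, None] * dky + s[None, :] * dkx + dkxy)
        np.fill_diagonal(u, 0.0)                # U-statistic: drop i == j terms
        n = len(x)
        return u.sum() / (n * (n - 1))

    rng = np.random.default_rng(0)
    print(ksd(rng.normal(size=500)))            # near 0 for samples from N(0,1)
    print(ksd(rng.normal(loc=1.0, size=500)))   # larger for a shifted sample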
2 code implementations • NeurIPS 2020 • Chao Ma, Sebastian Tschiatschek, José Miguel Hernández-Lobato, Richard Turner, Cheng Zhang
Deep generative models often perform poorly in real-world applications due to the heterogeneity of natural data sets.
no code implementations • 18 Jun 2020 • Eric Nalisnick, Jonathan Gordon, José Miguel Hernández-Lobato
For this reason, we propose predictive complexity priors: functional priors defined by comparing the model's predictions to those of a reference model.
1 code implementation • NeurIPS 2020 • Austin Tripp, Erik Daxberger, José Miguel Hernández-Lobato
We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.
Ranked #1 on Molecular Graph Generation on ZINC
1 code implementation • NeurIPS 2020 • Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited.
1 code implementation • ICLR 2021 • Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, José Miguel Hernández-Lobato
Both uncertainty estimation and interpretability are important factors for trustworthy machine learning systems.
1 code implementation • 15 May 2020 • Alonso Marco, Alexander von Rohr, Dominik Baumann, José Miguel Hernández-Lobato, Sebastian Trimpe
When learning to ride a bike, a child falls down a number of times before achieving the first success.
1 code implementation • ICML 2020 • Gregor N. C. Simm, Robert Pinsler, José Miguel Hernández-Lobato
Automating molecular design using deep reinforcement learning (RL) holds the promise of accelerating the discovery of new chemical compounds.
1 code implementation • 6 Feb 2020 • Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato
One-shot neural architecture search allows joint learning of weights and network architecture, reducing computational cost.
no code implementations • 11 Dec 2019 • Erik Daxberger, José Miguel Hernández-Lobato
Despite their successes, deep neural networks may make unreliable predictions when faced with test data drawn from a distribution different to that of the training data, constituting a major problem for AI safety.
1 code implementation • NeurIPS 2019 • Wenbo Gong, Sebastian Tschiatschek, Sebastian Nowozin, Richard E. Turner, José Miguel Hernández-Lobato, Cheng Zhang
In this paper, we address the ice-start problem, i.e., the challenge of deploying machine learning models when little or no training data is initially available and acquiring each feature element of data is associated with costs.
no code implementations • 25 Sep 2019 • Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato
Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks.
1 code implementation • ICML 2020 • Gregor N. C. Simm, José Miguel Hernández-Lobato
Great computational effort is invested in generating equilibrium states for molecular systems using, for example, Markov chain Monte Carlo.
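For context, this is the standard Metropolis-Hastings loop that such methods aim to sidestep; the toy quadratic energy below is a textbook stand-in, not the paper's molecular system.

    import numpy as np

    def energy(x):
        return 0.5 * np.sum(x**2)              # toy energy; p(x) ∝ exp(-E(x))

    rng = np.random.default_rng(0)
    x = np.zeros(3)
    samples = []
    for _ in range(10_000):
        proposal = x + 0.5 * rng.normal(size=3)
        # Accept with probability min(1, exp(-(E(proposal) - E(x)))).
        if np.log(rng.uniform()) < energy(x) - energy(proposal):
            x = proposal
        samples.append(x.copy())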
no code implementations • 25 Sep 2019 • Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato
Standard compression algorithms work by mapping an image to discrete code using an encoder from which the original image can be reconstructed through a decoder.
1 code implementation • 13 Aug 2019 • Wenbo Gong, Sebastian Tschiatschek, Richard Turner, Sebastian Nowozin, José Miguel Hernández-Lobato, Cheng Zhang
In this paper we introduce the ice-start problem, i.e., the challenge of deploying machine learning models when little or no training data is initially available and acquiring each feature element of data is associated with costs.
1 code implementation • NeurIPS 2019 • Robert Pinsler, Jonathan Gordon, Eric Nalisnick, José Miguel Hernández-Lobato
Leveraging the wealth of unlabeled data produced in recent years provides great potential for improving supervised models.
no code implementations • 27 Jun 2019 • Andrew Y. K. Foong, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner
We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks.
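For reference, this is the fully factorised Gaussian family that MFVI uses, written with the reparameterisation trick; a generic sketch, not the paper's experiments.

    import torch

    n_weights = 10
    mu = torch.zeros(n_weights, requires_grad=True)
    rho = torch.zeros(n_weights, requires_grad=True)  # sigma = softplus(rho)

    def sample_weights():
        sigma = torch.nn.functional.softplus(rho)
        eps = torch.randn(n_weights)
        return mu + sigma * eps        # differentiable w.r.t. mu and rho

    w = sample_weights()
    # Every weight is independent under q(w) = N(mu, diag(sigma^2)): no posterior
    # correlations, which underlies the expressiveness limitation analysed here.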
1 code implementation • NeurIPS 2019 • John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin H. S. Segler, José Miguel Hernández-Lobato
Deep generative models are able to suggest new organic molecules by generating strings, trees, and graphs representing their structure.
no code implementations • 23 May 2019 • Omar Mahmood, José Miguel Hernández-Lobato
Carrying out global optimisation is difficult as optimisers are likely to follow gradients into regions of the latent space that the model has not been exposed to during training; samples generated from these regions are likely to be too dissimilar to the training data to be useful.
2 code implementations • 7 May 2019 • Hiske Overweg, Anna-Lena Popkes, Ari Ercole, Yingzhen Li, José Miguel Hernández-Lobato, Yordan Zaykov, Cheng Zhang
However, flexible tools such as artificial neural networks (ANNs) suffer from a lack of interpretability limiting their acceptability to clinicians.
no code implementations • ICLR Workshop DeepGenStruct 2019 • John Bradshaw, Matt J. Kusner, Brooks Paige, Marwin H. S. Segler, José Miguel Hernández-Lobato
We therefore propose a new molecule generation model, mirroring a more realistic real-world process, where reactants are selected and combined to form more complex molecules.
1 code implementation • 26 Dec 2018 • Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato
Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data.
2 code implementations • NeurIPS 2019 • David Janz, Jiri Hron, Przemysław Mazur, Katja Hofmann, José Miguel Hernández-Lobato, Sebastian Tschiatschek
Posterior sampling for reinforcement learning (PSRL) is an effective method for balancing exploration and exploitation in reinforcement learning.
1 code implementation • 9 Oct 2018 • Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth
We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e., dropout).
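A small sketch of the two multiplicative-noise regimes mentioned, using standard dropout-style constructions; illustrative only, not the paper's framework.

    import torch

    h = torch.randn(4, 16)                     # a batch of activations
    p = 0.5

    # Bernoulli multiplicative noise: classic (inverted) dropout.
    bern_mask = torch.bernoulli(torch.full_like(h, 1 - p)) / (1 - p)
    h_dropout = h * bern_mask

    # Continuous (Gaussian) multiplicative noise with matched variance p/(1-p).
    gauss_noise = 1 + (p / (1 - p)) ** 0.5 * torch.randn_like(h)
    h_gaussian = h * gauss_noise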
2 code implementations • ICLR 2019 • Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E. Turner, José Miguel Hernández-Lobato, Alexander L. Gaunt
We provide two innovations that aim to turn VB into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel Empirical Bayes procedure for automatically selecting prior variances.
2 code implementations • ICLR 2019 • Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato
While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.
1 code implementation • ICLR 2019 • Chao Ma, Sebastian Tschiatschek, Konstantina Palla, José Miguel Hernández-Lobato, Sebastian Nowozin, Cheng Zhang
Many real-life decision-making situations allow further relevant information to be acquired at a specific cost, for example, in assessing the health status of a patient we may decide to take additional measurements such as diagnostic tests or imaging scans before making a final assessment.
no code implementations • 27 Sep 2018 • Yichuan Zhang, José Miguel Hernández-Lobato, Zoubin Ghahramani
Training probabilistic models with neural network components is intractable in most cases and requires the use of approximations such as Markov chain Monte Carlo (MCMC), which is not scalable and requires significant hyper-parameter tuning, or mean-field variational inference (VI), which is biased.
2 code implementations • NeurIPS 2018 • Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes
The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution.
1 code implementation • ICLR 2019 • Wenbo Gong, Yingzhen Li, José Miguel Hernández-Lobato
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling.
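For reference, the canonical SG-MCMC update is stochastic gradient Langevin dynamics; this is a textbook sketch on a toy posterior, not the samplers proposed in the paper.

    import numpy as np

    # SGLD: theta <- theta - eps * grad U(theta) + sqrt(2 * eps) * N(0, I)
    rng = np.random.default_rng(0)
    theta, eps = np.zeros(2), 1e-2

    def grad_U(theta):
        return theta                   # toy potential U = ||theta||^2 / 2

    samples = []
    for _ in range(5_000):
        theta = theta - eps * grad_U(theta) + np.sqrt(2 * eps) * rng.normal(size=2)
        samples.append(theta.copy())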
1 code implementation • 6 Jun 2018 • Chao Ma, Yingzhen Li, José Miguel Hernández-Lobato
We introduce implicit processes (IPs), stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables.
no code implementations • 25 May 2018 • Yichuan Zhang, José Miguel Hernández-Lobato
In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation.
no code implementations • ICLR 2019 • John Bradshaw, Matt J. Kusner, Brooks Paige, Marwin H. S. Segler, José Miguel Hernández-Lobato
Chemical reactions can be described as the stepwise redistribution of electrons in molecules.
no code implementations • 12 Feb 2018 • Moritz August, José Miguel Hernández-Lobato
In this work we introduce the application of black-box quantum control as an interesting reinforcement learning problem to the machine learning community.
no code implementations • 9 Jan 2018 • Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes
Deep Gaussian Processes (DGPs) are hierarchical generalizations of Gaussian Processes (GPs) that have proven to work effectively on multiple supervised regression tasks.
no code implementations • 10 Dec 2017 • Stefan Depeweg, José Miguel Hernández-Lobato, Steffen Udluft, Thomas Runkler
We derive a novel sensitivity analysis of input variables for predictive epistemic and aleatoric uncertainty.
1 code implementation • ICLR 2018 • David Janz, Jos van der Westhuizen, Brooks Paige, Matt J. Kusner, José Miguel Hernández-Lobato
This validator provides insight as to how individual sequence elements influence the validity of the overall sequence, and can be used to constrain sequence-based models to generate valid sequences -- and thus faithfully model discrete objects.
no code implementations • ICML 2018 • Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft
Bayesian neural networks with latent variables are scalable and flexible probabilistic models: They account for uncertainty in the estimation of the network weights and, by making use of latent variables, can capture complex noise patterns in the data.
1 code implementation • 16 Sep 2017 • Ryan-Rhys Griffiths, José Miguel Hernández-Lobato
Automatic Chemical Design is a framework for generating novel molecules with optimized properties.
no code implementations • 15 Aug 2017 • David Janz, Jos van der Westhuizen, José Miguel Hernández-Lobato
As a step towards solving this problem, we propose to learn a deep recurrent validator model.
no code implementations • 29 Jun 2017 • Jonathan Gordon, José Miguel Hernández-Lobato
However, these techniques a) cannot account for model uncertainty in the estimation of the model's discriminative component and b) lack flexibility to capture complex stochastic patterns in the label generation process.
no code implementations • 26 Jun 2017 • Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft
Bayesian neural networks (BNNs) with latent variables are probabilistic models which can automatically identify complex stochastic patterns in the data.
no code implementations • ICML 2017 • José Miguel Hernández-Lobato, James Requeima, Edward O. Pyzer-Knapp, Alán Aspuru-Guzik
These results show that PDTS is a successful solution for large-scale parallel BO.
3 code implementations • ICML 2017 • Matt J. Kusner, Brooks Paige, José Miguel Hernández-Lobato
Crucially, state-of-the-art methods often produce outputs that are not valid.
no code implementations • 12 Nov 2016 • Matt J. Kusner, José Miguel Hernández-Lobato
Generative Adversarial Networks (GAN) have limitations when the goal is to generate sequences of discrete elements.
no code implementations • ICML 2017 • Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck
This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity.
10 code implementations • 7 Oct 2016 • Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, Alán Aspuru-Guzik
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation.
2 code implementations • 23 May 2016 • Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft
We present an algorithm for model-based reinforcement learning that combines Bayesian neural networks (BNNs) with random roll-outs and stochastic optimization for policy learning.
no code implementations • 12 Feb 2016 • Thang D. Bui, Daniel Hernández-Lobato, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
1 code implementation • 30 Nov 2015 • José Miguel Hernández-Lobato, Michael A. Gelbart, Ryan P. Adams, Matthew W. Hoffman, Zoubin Ghahramani
Of particular interest to us is to efficiently solve problems with decoupled constraints, in which subsets of the objective and constraint functions may be evaluated independently.
no code implementations • 17 Nov 2015 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Amar Shah, Ryan P. Adams
The results show that PESMO produces better recommendations with a smaller number of evaluations of the objectives, and that a decoupled evaluation can lead to improvements in performance, particularly when the number of objectives is large.
no code implementations • 11 Nov 2015 • Thang D. Bui, José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Richard E. Turner
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
3 code implementations • 10 Nov 2015 • José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang Bui, Richard E. Turner
Black-box alpha (BB-$\alpha$) is a new approximate inference method based on the minimization of $\alpha$-divergences.
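For reference, the $\alpha$-divergence family in question is the standard one below (a textbook definition, not a result of the paper), with $\mathrm{KL}(q\,\|\,p)$ and $\mathrm{KL}(p\,\|\,q)$ recovered in the limits $\alpha \to 0$ and $\alpha \to 1$:

    D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)} \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right)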
no code implementations • 10 Nov 2015 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Yingzhen Li, Thang Bui, Richard E. Turner
A method for large scale Gaussian process classification has been recently proposed based on expectation propagation (EP).
no code implementations • 16 Jul 2015 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato
Variational methods have been recently considered for scaling the training process of Gaussian process classifiers to large datasets.
2 code implementations • 18 Feb 2015 • José Miguel Hernández-Lobato, Ryan P. Adams
In principle, the Bayesian approach to learning neural networks does not have these problems.
1 code implementation • 18 Feb 2015 • José Miguel Hernández-Lobato, Michael A. Gelbart, Matthew W. Hoffman, Ryan P. Adams, Zoubin Ghahramani
Unknown constraints arise in many types of expensive black-box optimization problems.
1 code implementation • NeurIPS 2014 • José Miguel Hernández-Lobato, Matthew W. Hoffman, Zoubin Ghahramani
We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES).
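PES maximises the mutual information between the next observation and the location of the global maximiser $\mathbf{x}_*$; as derived in the paper, the acquisition can be rewritten in terms of predictive entropies:

    \alpha_n(\mathbf{x}) = H\!\left[ p(y \mid \mathcal{D}_n, \mathbf{x}) \right] - \mathbb{E}_{p(\mathbf{x}_* \mid \mathcal{D}_n)}\!\left[ H\!\left[ p(y \mid \mathcal{D}_n, \mathbf{x}, \mathbf{x}_*) \right] \right]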
no code implementations • NeurIPS 2013 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato
Because estimating feature selection dependencies in the proposed model may suffer from over-fitting, additional data from a multi-task learning scenario are considered for induction.
no code implementations • NeurIPS 2013 • José Miguel Hernández-Lobato, James Robert Lloyd, Daniel Hernández-Lobato
The estimation of dependencies between multiple variables is a central problem in the analysis of financial time series.
no code implementations • 18 May 2013 • Yue Wu, José Miguel Hernández-Lobato, Zoubin Ghahramani
The accurate prediction of time-changing covariances is an important problem in the modeling of multivariate financial data.