
no code implementations • 6 Feb 2024 • He Zhao, Edwin V. Bonilla

We study the problem of automatically discovering Granger causal relations from observational multivariate time-series data.
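For background, the classical linear notion of Granger causality can be tested by comparing two least-squares autoregressions: if adding the past of x significantly reduces the residual error when predicting y, x is said to Granger-cause y. A minimal NumPy sketch of that textbook baseline (illustrative only, not the discovery method this paper develops):

```python
import numpy as np

def granger_f_stat(x, y, lag=2):
    """F-statistic for 'x Granger-causes y' via two least-squares fits.

    Restricted model: y_t from its own past. Full model: y_t from the
    past of both y and x. A large statistic suggests x's past helps.
    """
    T = len(y)
    # Design matrices: intercept + lagged y (restricted), plus lagged x (full).
    Zr = np.column_stack([np.ones(T - lag)] +
                         [y[lag - k - 1:T - k - 1] for k in range(lag)])
    Zf = np.column_stack([Zr] +
                         [x[lag - k - 1:T - k - 1] for k in range(lag)])
    yt = y[lag:]
    rss = lambda Z: np.sum((yt - Z @ np.linalg.lstsq(Z, yt, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Zr), rss(Zf)
    df = T - lag - Zf.shape[1]
    return (rss_r - rss_f) / lag / (rss_f / df)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):          # y is driven by lagged x plus noise
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f_stat(x, y))      # large: x's past predicts y
print(granger_f_stat(y, x))      # small: y's past does not predict x
```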

no code implementations • 4 Feb 2024 • Edwin V. Bonilla, Pantelis Elinas, He Zhao, Maurizio Filippone, Vassili Kitsios, Terry O'Kane

Estimating the structure of a Bayesian network, in the form of a directed acyclic graph (DAG), from observational data is a statistically and computationally hard problem with essential applications in areas such as causal discovery.

1 code implementation • 24 Oct 2023 • Ryan Thompson, Edwin V. Bonilla, Robert Kohn

Estimating the structure of directed acyclic graphs (DAGs) from observational data remains a significant challenge in machine learning.
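A common way to make this combinatorial problem continuous is the NOTEARS-style acyclicity function h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG; structure learning then becomes smooth optimization under h(W) = 0. A minimal sketch of that constraint (standard background, not necessarily this paper's formulation):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS-style constraint: h(W) = tr(exp(W * W)) - d.

    The matrix exponential sums weighted cycle counts of every length,
    so h(W) == 0 exactly when W has no directed cycles.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # elementwise square keeps terms >= 0

dag = np.array([[0., 1., 2.],
                [0., 0., 3.],
                [0., 0., 0.]])         # strictly upper triangular: a DAG
cyclic = dag.copy()
cyclic[2, 0] = 1.0                     # edge 2 -> 0 closes the cycle 0 -> 2 -> 0
print(acyclicity(dag), acyclicity(cyclic))
```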

no code implementations • 29 May 2023 • Tom Blau, Iadine Chades, Amir Dezfouli, Daniel Steinberg, Edwin V. Bonilla

We propose the use of an alternative estimator based on the cross-entropy of the joint model distribution and a flexible proposal distribution.

1 code implementation • 20 Feb 2023 • Xuhui Fan, Edwin V. Bonilla, Terence J. O'Kane, Scott A. Sisson

However, inference in GPSSMs is computationally and statistically challenging due to the large number of latent variables in the model and the strong temporal dependencies between them.

no code implementations • 1 Nov 2022 • Adrian N. Bishop, Edwin V. Bonilla

We consider the Bayesian optimal filtering problem, i.e., estimating some conditional statistics of a latent time-series signal from an observation sequence.
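In the linear-Gaussian special case the optimal filter is available in closed form as the Kalman filter. A minimal scalar sketch of that classical baseline (the paper addresses the general problem; the model parameters below are arbitrary):

```python
import numpy as np

def kalman_filter(ys, a, q, c, r, m0, p0):
    """Exact Bayesian filtering for the scalar linear-Gaussian model
    x_t = a*x_{t-1} + N(0, q),  y_t = c*x_t + N(0, r); returns the
    mean and variance of p(x_t | y_1..t) at every step."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        m, p = a * m, a * a * p + q                   # predict
        k = p * c / (c * c * p + r)                   # Kalman gain
        m, p = m + k * (y - c * m), (1 - k * c) * p   # update
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

# Simulate a latent AR(1) signal and noisy observations, then filter.
rng = np.random.default_rng(1)
a, q, c, r = 0.9, 0.1, 1.0, 0.5
x, xs, ys = 0.0, [], []
for _ in range(200):
    x = a * x + rng.normal(scale=q ** 0.5)
    xs.append(x)
    ys.append(c * x + rng.normal(scale=r ** 0.5))
means, variances = kalman_filter(ys, a, q, c, r, 0.0, 1.0)
```

The filtered means should track the latent signal more closely than the raw observations do, since the filter pools information across time.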

no code implementations • 25 Feb 2022 • Pantelis Elinas, Edwin V. Bonilla

Learning useful node and graph representations with graph neural networks (GNNs) is a challenging task.

1 code implementation • 2 Feb 2022 • Tom Blau, Edwin V. Bonilla, Iadine Chades, Amir Dezfouli

Bayesian approaches developed to solve the optimal design of sequential experiments are mathematically elegant but computationally challenging.

no code implementations • 4 Jul 2021 • Weiming Zhi, Tin Lai, Lionel Ott, Edwin V. Bonilla, Fabio Ramos

Advances in differentiable numerical integrators have enabled the use of gradient descent techniques to learn ordinary differential equations (ODEs).
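The idea can be illustrated with a forward-Euler solver for dx/dt = −θx that propagates the sensitivity dx/dθ alongside the state (forward-mode differentiation by hand), so θ can be fit by gradient descent through the integrator. A hand-rolled toy, not the paper's setup:

```python
def integrate_with_grad(theta, x0=2.0, h=0.01, steps=100):
    """Forward-Euler solve of dx/dt = -theta * x, carrying the
    sensitivity s = dx/dtheta alongside the state, so the solver
    output is differentiable in theta."""
    x, s = x0, 0.0
    for _ in range(steps):
        dx = -theta * x
        ds = -theta * s - x      # derivative of the vector field w.r.t. theta
        x, s = x + h * dx, s + h * ds
    return x, s

# Fit theta so the trajectory endpoint matches one generated with theta = 1.5.
target, _ = integrate_with_grad(1.5)
theta = 0.5
for _ in range(200):
    x, s = integrate_with_grad(theta)
    theta -= 1.0 * 2.0 * (x - target) * s   # gradient step on (x - target)^2
print(theta)   # converges towards the generating value 1.5
```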

1 code implementation • NeurIPS 2021 • Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V. Bonilla, Maurizio Filippone

We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization.

no code implementations • 10 May 2021 • Maud Lemercier, Cristopher Salvi, Thomas Cass, Edwin V. Bonilla, Theodoros Damoulas, Terry Lyons

Making predictions and quantifying their uncertainty when the input data is sequential is a fundamental learning challenge that has recently attracted increasing attention.

1 code implementation • 17 Feb 2021 • Louis C. Tiao, Aaron Klein, Matthias Seeger, Edwin V. Bonilla, Cedric Archambeau, Fabio Ramos

Bayesian optimization (BO) is among the most effective and widely used black-box optimization methods.
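A minimal BO loop, with a zero-mean RBF Gaussian-process surrogate and expected-improvement acquisition on a toy 1-D objective (all modeling choices here are illustrative, not this paper's method):

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.15):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Zero-mean GP posterior mean/std at test points Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)        # prior variance is 1
    return Ks.T @ alpha, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI acquisition for maximization."""
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: -(x - 0.3) ** 2                 # toy black-box objective
grid = np.linspace(0.0, 1.0, 201)
X, y = np.array([0.0, 1.0]), f(np.array([0.0, 1.0]))
for _ in range(8):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print(X[np.argmax(y)], y.max())               # best location and value found
```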

no code implementations • 10 Jun 2020 • Maud Lemercier, Cristopher Salvi, Theodoros Damoulas, Edwin V. Bonilla, Terry Lyons

In this paper, we develop a rigorous mathematical framework for distribution regression where inputs are complex data streams.

no code implementations • 6 Mar 2020 • Simone Rossi, Markus Heinonen, Edwin V. Bonilla, Zheyang Shen, Maurizio Filippone

Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation in Gaussian process (GP) models.
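The computational payoff of inducing variables comes from a low-rank (Nyström) structure: with m inducing inputs Z, K ≈ K_nm K_mm⁻¹ K_mn, reducing costs from O(n³) to O(nm²). A small sketch of how the approximation error falls with m (illustrative background, not this paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 400))
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)  # RBF kernel
K = k(X, X)

def nystrom_error(m):
    """Relative error of the Nyström approximation K ~ K_nm K_mm^-1 K_mn
    built from m evenly spaced inducing inputs Z."""
    Z = np.linspace(X.min(), X.max(), m)
    Knm, Kmm = k(X, Z), k(Z, Z) + 1e-8 * np.eye(m)  # jitter for stability
    K_approx = Knm @ np.linalg.solve(Kmm, Knm.T)
    return np.linalg.norm(K - K_approx) / np.linalg.norm(K)

print([nystrom_error(m) for m in (5, 10, 20, 40)])  # error shrinks with m
```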

1 code implementation • NeurIPS 2020 • Rui Zhang, Christian J. Walder, Edwin V. Bonilla, Marian-Andrei Rizoiu, Lexing Xie

We show that QP matches quantile functions rather than moments as in EP and has the same mean update but a smaller variance update than EP, thereby alleviating EP's tendency to over-estimate posterior variances.

1 code implementation • NeurIPS 2019 • Virginia Aglietti, Edwin V. Bonilla, Theodoros Damoulas, Sally Cripps

We propose a scalable framework for inference in an inhomogeneous Poisson process modeled by a continuous sigmoidal Cox process that assumes the corresponding intensity function is given by a Gaussian process (GP) prior transformed with a scaled logistic sigmoid function.
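Because the scaled sigmoid bounds the intensity by λ*, such a process can be simulated by thinning a homogeneous Poisson process. A sketch with a fixed function standing in for the GP draw (the sampler and f below are illustrative, not the paper's inference scheme):

```python
import numpy as np

def sample_sigmoidal_cox(f, lam_star, T, rng):
    """Sample points on [0, T] with intensity lam(x) = lam_star * sigmoid(f(x))
    by thinning: propose from a rate-lam_star homogeneous Poisson process and
    keep each proposal with probability sigmoid(f(x))."""
    n = rng.poisson(lam_star * T)
    proposals = rng.uniform(0, T, n)
    keep = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-f(proposals)))
    return np.sort(proposals[keep])

rng = np.random.default_rng(0)
f = lambda x: 3 * np.sin(x)          # stand-in for a GP sample path
pts = sample_sigmoidal_cox(f, lam_star=50.0, T=2 * np.pi, rng=rng)
# Points concentrate where f (hence the intensity) is large.
```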

1 code implementation • NeurIPS 2020 • Pantelis Elinas, Edwin V. Bonilla, Louis Tiao

We propose a framework that lifts the capabilities of graph convolutional networks (GCNs) to scenarios where no input graph is given and increases their robustness to adversarial attacks.

no code implementations • 10 Mar 2019 • Astrid Dahl, Edwin V. Bonilla

We consider multi-task regression models where observations are assumed to be a linear combination of several latent node and weight functions, all drawn from Gaussian process (GP) priors that allow nonzero covariance between grouped latent functions.

no code implementations • 7 Jun 2018 • Astrid Dahl, Edwin V. Bonilla

We consider multi-task regression models where the observations are assumed to be a linear combination of several latent node functions and weight functions, which are both drawn from Gaussian process priors.

no code implementations • 5 Jun 2018 • Louis C. Tiao, Edwin V. Bonilla, Fabio Ramos

We formalize the problem of learning interdomain correspondences in the absence of paired data as Bayesian inference in a latent variable model (LVM), where one seeks the underlying hidden representations of entities from one domain as entities from the other domain.

1 code implementation • 26 May 2018 • Gia-Lac Tran, Edwin V. Bonilla, John P. Cunningham, Pietro Michiardi, Maurizio Filippone

The wide adoption of Convolutional Neural Networks (CNNs) in applications where decision-making under uncertainty is fundamental has brought a great deal of attention to the ability of these models to accurately quantify the uncertainty in their predictions.

no code implementations • 27 Feb 2017 • Amir Dezfouli, Edwin V. Bonilla, Richard Nock

We propose a network structure discovery model for continuous observations that generalizes linear causal models by incorporating a Gaussian process (GP) prior on a network-independent component, and random sparsity and weight matrices as the network-dependent parameters.

no code implementations • 18 Oct 2016 • Karl Krauth, Edwin V. Bonilla, Kurt Cutajar, Maurizio Filippone

We investigate the capabilities and limitations of Gaussian process models by jointly exploring three complementary directions: (i) scalable and statistically efficient inference; (ii) flexible kernels; and (iii) objective functions for hyperparameter learning alternative to the marginal likelihood.

1 code implementation • ICML 2017 • Kurt Cutajar, Edwin V. Bonilla, Pietro Michiardi, Maurizio Filippone

The composition of multiple Gaussian Processes as a Deep Gaussian Process (DGP) enables a deep probabilistic nonparametric approach to flexibly tackle complex machine learning problems with sound quantification of uncertainty.

no code implementations • 14 Sep 2016 • Pietro Galliani, Amir Dezfouli, Edwin V. Bonilla, Novi Quadrianto

We develop an automated variational inference method for Bayesian structured prediction problems with Gaussian process (GP) priors and linear-chain likelihoods.

1 code implementation • 2 Sep 2016 • Edwin V. Bonilla, Karl Krauth, Amir Dezfouli

We evaluate our approach quantitatively and qualitatively with experiments on small datasets, medium-scale datasets and large datasets, showing its competitiveness under different likelihood models and sparsity levels.

no code implementations • NeurIPS 2015 • Amir Dezfouli, Edwin V. Bonilla

We propose a sparse method for scalable automated variational inference (AVI) in a large class of models with Gaussian process (GP) priors, multiple latent functions, multiple outputs and non-linear likelihoods.

no code implementations • NeurIPS 2014 • Trung V. Nguyen, Edwin V. Bonilla

Using a mixture of Gaussians as the variational distribution, we show that (i) the variational objective and its gradients can be approximated efficiently via sampling from univariate Gaussian distributions and (ii) the gradients of the GP hyperparameters can be obtained analytically regardless of the model likelihood.
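Point (i) can be illustrated on a toy case: with a Gaussian likelihood the expected log-likelihood under a univariate Gaussian has a closed form, so a Monte Carlo estimate built only from standard-normal samples can be checked against it (the values below are arbitrary; this is a sketch of the sampling idea, not the paper's objective):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, y = 0.5, 0.8, 1.2
log_lik = lambda f: -0.5 * (y - f) ** 2   # Gaussian log-likelihood, up to a constant

# Monte Carlo estimate of E_{f ~ N(mu, sigma^2)}[log p(y | f)] using only
# univariate standard-normal draws, via the substitution f = mu + sigma * eps.
eps = rng.standard_normal(200_000)
mc = log_lik(mu + sigma * eps).mean()

# Closed form for this likelihood, used here to check the estimate.
exact = -0.5 * ((y - mu) ** 2 + sigma ** 2)
print(mc, exact)
```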

no code implementations • NeurIPS 2014 • Daniel M. Steinberg, Edwin V. Bonilla

We present two new methods for inference in Gaussian process (GP) models with general nonlinear likelihoods.

no code implementations • NeurIPS 2011 • David Newman, Edwin V. Bonilla, Wray Buntine

To overcome this, we propose two methods to regularize the learning of topic models.

no code implementations • NeurIPS 2010 • Shengbo Guo, Scott Sanner, Edwin V. Bonilla

Bayesian approaches to preference elicitation (PE) are particularly attractive due to their ability to explicitly model uncertainty in users' latent utility functions.
