Search Results for author: Matthias Seeger

Found 20 papers, 4 papers with code

Meta-Forecasting by combining Global Deep Representations with Local Adaptation

no code implementations • 5 Nov 2021 • Riccardo Grazzi, Valentin Flunkert, David Salinas, Tim Januschowski, Matthias Seeger, Cedric Archambeau

While classical time series forecasting considers individual time series in isolation, recent advances based on deep learning have shown that jointly learning from a large pool of related time series can boost forecasting accuracy.

Meta-Learning • Time Series • +1

A Nonmyopic Approach to Cost-Constrained Bayesian Optimization

1 code implementation • 10 Jun 2021 • Eric Hans Lee, David Eriksson, Valerio Perrone, Matthias Seeger

Bayesian optimization (BO) is a popular method for optimizing expensive-to-evaluate black-box functions.

Hyperparameter Optimization
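
Since several entries on this page build on the same loop, here is a minimal sketch of generic GP-based Bayesian optimization with an expected-improvement acquisition over a finite candidate set. `f` and `candidates` are placeholders; the paper's actual contribution (a nonmyopic, cost-constrained evaluation policy) is not shown.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def bo_minimize(f, candidates, n_init=5, n_iter=20, seed=0):
    """Generic BO loop: fit a GP to all evaluations so far, then pick the
    candidate that maximizes expected improvement (EI). Minimization."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), size=n_init, replace=False)
    X, y = list(candidates[idx]), [f(x) for x in candidates[idx]]
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), y)
        mu, sd = gp.predict(candidates, return_std=True)
        z = (min(y) - mu) / np.maximum(sd, 1e-12)
        ei = sd * (z * norm.cdf(z) + norm.pdf(z))
        x_next = candidates[int(np.argmax(ei))]
        X.append(x_next)
        y.append(f(x_next))
    return X[int(np.argmin(y))], min(y)
```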

Overfitting in Bayesian Optimization: an empirical study and early-stopping solution

no code implementations • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

In practice, however, an improvement of the validation metric may not translate into better predictive performance on a test set, especially when tuning models trained on small datasets (a simplified stopping rule is sketched below).

Hyperparameter Optimization
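
A hypothetical illustration of the early-stopping idea: terminate the tuning loop once the incumbent validation score stops improving by more than the estimated evaluation noise. This simplified rule is an assumption for illustration, not the paper's actual criterion.

```python
def should_stop(incumbents, noise_std, patience=5):
    """Stop BO if the best validation score seen so far (minimization)
    has not improved by more than the estimated evaluation noise over
    the last `patience` iterations. Simplified illustration only."""
    if len(incumbents) <= patience:
        return False
    return incumbents[-patience - 1] - incumbents[-1] <= noise_std
```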

BORE: Bayesian Optimization by Density-Ratio Estimation

1 code implementation • 17 Feb 2021 • Louis C. Tiao, Aaron Klein, Matthias Seeger, Edwin V. Bonilla, Cedric Archambeau, Fabio Ramos

Bayesian optimization (BO) is among the most effective and widely used black-box optimization methods (the density-ratio reformulation behind BORE is sketched below).

Density Ratio Estimation
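
The reformulation behind BORE: the expected-improvement acquisition can be expressed through a density ratio, which a probabilistic classifier can estimate directly via class-probability estimation. A minimal sketch using a random forest classifier (the choice of model class here is purely illustrative; the paper evaluates several):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bore_acquisition(X, y, gamma=0.25):
    """Label the best gamma-fraction of observed points as class 1 and
    the rest as class 0; the classifier's predicted probability of
    class 1 then serves as the acquisition function (minimization)."""
    tau = np.quantile(y, gamma)
    z = (np.asarray(y) <= tau).astype(int)
    clf = RandomForestClassifier(n_estimators=100).fit(X, z)
    return lambda X_new: clf.predict_proba(X_new)[:, 1]
```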

Model-based Asynchronous Hyperparameter and Neural Architecture Search

2 code implementations • 24 Mar 2020 • Aaron Klein, Louis C. Tiao, Thibaut Lienart, Cedric Archambeau, Matthias Seeger

We introduce a model-based asynchronous multi-fidelity method for hyperparameter and neural architecture search that combines the strengths of asynchronous Hyperband and Gaussian process-based Bayesian optimization.

Hyperparameter Optimization • Neural Architecture Search
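
The asynchronous-Hyperband side of the method reduces to a simple promotion rule (asynchronous successive halving). A sketch assuming a minimized metric; the model-based side, which proposes new configurations from a joint multi-fidelity GP surrogate rather than at random, is omitted:

```python
import numpy as np

def on_result(rungs, score, rung, eta=3):
    """Record `score` at resource level `rung`; promote the trial to the
    next rung if it ranks in the top 1/eta of all scores seen at this
    rung so far, otherwise stop it (minimization)."""
    rungs.setdefault(rung, []).append(score)
    if score <= np.quantile(rungs[rung], 1.0 / eta):
        return "promote"
    return "stop"
```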

Cost-aware Bayesian Optimization

no code implementations • 22 Mar 2020 • Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger

Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible.

Global Optimization
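
A standard baseline in cost-aware BO, and the natural starting point for the acquisition functions studied here, is expected improvement per unit cost; `mu`, `sigma`, and `cost` are assumed to come from surrogate and cost models:

```python
import numpy as np
from scipy.stats import norm

def ei_per_unit_cost(mu, sigma, best, cost):
    """Expected improvement per unit cost (minimization): favours points
    that promise improvement AND are cheap to evaluate."""
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = sigma * (z * norm.cdf(z) + norm.pdf(z))
    return ei / np.maximum(cost, 1e-12)
```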

Constrained Bayesian Optimization with Max-Value Entropy Search

no code implementations • 15 Oct 2019 • Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger

We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing this formulation.

Hyperparameter Optimization

Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning

no code implementations • NeurIPS 2019 • Valerio Perrone, Huibin Shen, Matthias Seeger, Cedric Archambeau, Rodolphe Jenatton

Despite its simplicity, we show that our approach considerably boosts BO by reducing the size of the search space, thus accelerating the optimization of a variety of black-box optimization problems.

Hyperparameter Optimization • Transfer Learning
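
One simple instantiation of the idea is to restrict the new task's search space to the tightest box containing the best configurations found on related tasks; a sketch (the input layout is an assumption for illustration):

```python
import numpy as np

def transferred_search_space(best_configs):
    """Given the best hyperparameter vector from each of T related tasks
    (rows of a T x d array), return the tightest axis-aligned box that
    contains them all, to be used as the BO search space on a new task."""
    best = np.asarray(best_configs, dtype=float)
    return best.min(axis=0), best.max(axis=0)
```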

Auto-Differentiating Linear Algebra

no code implementations • 24 Oct 2017 • Matthias Seeger, Asmus Hetzel, Zhenwen Dai, Eric Meissner, Neil D. Lawrence

Development systems for deep learning (DL), such as Theano, Torch, TensorFlow, or MXNet, are easy-to-use tools for creating complex neural network models.

Active Learning • Gaussian Processes
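
The paper's point is that adding differentiable linear-algebra operators (Cholesky factorization, triangular solves, and the like) to such frameworks makes models like GPs end-to-end trainable. A sketch of the pattern, written in PyTorch purely for illustration (the paper's implementation targets MXNet):

```python
import math
import torch

def gp_neg_log_marginal_likelihood(X, y, log_lengthscale, log_noise):
    """Negative log marginal likelihood of a zero-mean GP with an RBF
    kernel; gradients flow through cholesky and cholesky_solve, the
    kind of linear-algebra autodiff the paper is about."""
    ell, noise = log_lengthscale.exp(), log_noise.exp()
    d2 = torch.cdist(X / ell, X / ell) ** 2
    K = torch.exp(-0.5 * d2) + noise * torch.eye(X.shape[0])
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L)
    return (0.5 * y @ alpha.squeeze(-1)
            + L.diagonal().log().sum()
            + 0.5 * X.shape[0] * math.log(2 * math.pi))
```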

Bayesian Optimization with Tree-structured Dependencies

no code implementations • ICML 2017 • Rodolphe Jenatton, Cedric Archambeau, Javier González, Matthias Seeger

The benefit of leveraging this structure is twofold: we explore the search space more efficiently and posterior inference scales more favorably with the number of observations than Gaussian Process-based approaches published in the literature.

Gaussian Processes
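
To make the tree structure concrete, a hypothetical search space in which the active hyperparameters depend on choices made higher in the tree (all names and ranges are invented for illustration):

```python
# Hyperparameters below a branch are only active when that branch is taken,
# the kind of conditional dependency structure the paper exploits.
search_space = {
    "optimizer": ["sgd", "adam"],
    "sgd": {"momentum": (0.0, 0.99)},     # active only if optimizer == "sgd"
    "adam": {"beta2": (0.9, 0.9999)},     # active only if optimizer == "adam"
    "learning_rate": (1e-5, 1e-1),        # active on every path
}
```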

Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing

no code implementations • NeurIPS 2009 • Matthias Seeger

We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices by performing convex variational inference on a large-scale non-Gaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any factorization constraints.

Variational Inference

Local Gaussian Process Regression for Real Time Online Model Learning

no code implementations • NeurIPS 2008 • Duy Nguyen-Tuong, Jan R. Peters, Matthias Seeger

Inspired by local learning, we propose a method to speed up standard Gaussian Process regression (GPR) with local GP models (LGP).

GPR
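
A crude sketch of the local-GP idea, with k-means standing in for the paper's online, kernel-distance-based partitioning, and a single nearest model used per query (the paper blends the predictions of several nearby models):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

class LocalGP:
    """Fit one small GP per cluster of the training data and answer each
    query with the GP whose cluster centre is nearest."""
    def __init__(self, n_models=10):
        self.n_models = n_models

    def fit(self, X, y):
        self.km = KMeans(n_clusters=self.n_models, n_init=10).fit(X)
        self.gps = [
            GaussianProcessRegressor().fit(X[self.km.labels_ == i],
                                           y[self.km.labels_ == i])
            for i in range(self.n_models)
        ]
        return self

    def predict(self, Xq):
        idx = self.km.predict(Xq)
        return np.array([self.gps[i].predict(x[None, :])[0]
                         for i, x in zip(idx, Xq)])
```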

Bayesian Experimental Design of Magnetic Resonance Imaging Sequences

no code implementations • NeurIPS 2008 • Hannes Nickisch, Rolf Pohmann, Bernhard Schölkopf, Matthias Seeger

We propose a novel scalable variational inference algorithm, and show how powerful methods of numerical mathematics can be modified to compute primitives in our framework.

Bayesian Inference • Experimental Design • +1
