no code implementations • 9 Jan 2025 • Emilia Magnani, Ernesto de Vito, Philipp Hennig, Lorenzo Rosasco
We consider the problem of learning convolution operators associated to compact Abelian groups.
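To make the setting concrete, here is a minimal, self-contained illustration (not the paper's method): on the cyclic group $\mathbb{Z}_n$, a compact Abelian group, every convolution operator is diagonalized by the discrete Fourier transform, which is the structure that makes such operators identifiable from spectra. The filter and signal below are synthetic.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
h = rng.standard_normal(n)  # convolution filter on Z_n (illustrative)
x = rng.standard_normal(n)  # input signal on Z_n

# Direct circular convolution: (h * x)[k] = sum_j h[j] x[(k - j) mod n]
direct = np.array(
    [sum(h[j] * x[(k - j) % n] for j in range(n)) for k in range(n)]
)

# Fourier-domain computation: pointwise multiplication of spectra
via_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

assert np.allclose(direct, via_fft)
```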
1 code implementation • 19 Dec 2024 • Jonathan Schmidt, Luca Schmidt, Felix Strnad, Nicole Ludwig, Philipp Hennig
We present a novel generative framework that uses a score-based diffusion model trained on high-resolution reanalysis data to capture the statistical properties of local weather dynamics.
no code implementations • 1 Nov 2024 • Jonathan Wenger, Kaiwen Wu, Philipp Hennig, Jacob R. Gardner, Geoff Pleiss, John P. Cunningham
Model selection in Gaussian processes scales prohibitively with the size of the training dataset, both in time and memory.
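For background (a standard GP identity, not specific to this paper): the scaling comes from the log marginal likelihood $\log p(\mathbf{y} \mid X, \theta) = -\tfrac{1}{2}\mathbf{y}^\top K_\theta^{-1}\mathbf{y} - \tfrac{1}{2}\log\det K_\theta - \tfrac{N}{2}\log 2\pi$, whose linear solve and log-determinant with the $N \times N$ kernel matrix $K_\theta$ cost $\mathcal{O}(N^3)$ time and $\mathcal{O}(N^2)$ memory.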
no code implementations • 18 Oct 2024 • Lukas Tatzel, Bálint Mucsányi, Osane Hackel, Philipp Hennig
Quadratic approximations form a fundamental building block of machine learning methods.
no code implementations • 9 Oct 2024 • Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
Efficiently learning a sequence of related tasks, such as in continual learning, poses a significant challenge for neural nets due to the delicate trade-off between catastrophic forgetting and loss of plasticity.
no code implementations • 18 Jul 2024 • Tristan Cinquin, Marvin Pförtner, Vincent Fortuin, Philipp Hennig, Robert Bamler
As a remedy, we directly place a prior on function space.
1 code implementation • 4 Jul 2024 • Julia Grosse, Ruotian Wu, Ahmad Rashid, Philipp Hennig, Pascal Poupart, Agustinus Kristiadi
These beliefs are useful for defining a sample-based, non-myopic acquisition function that allows for a more data-efficient exploration scheme than standard search algorithms on LLMs.
1 code implementation • 7 Jun 2024 • Tim Weiland, Marvin Pförtner, Philipp Hennig
Modeling real-world problems with partial differential equations (PDEs) is a prominent topic in scientific machine learning.
no code implementations • 7 Jun 2024 • Emilia Magnani, Marvin Pförtner, Tobias Weber, Philipp Hennig
We introduce LUNO, a novel framework for approximate Bayesian uncertainty quantification in trained neural operators.
no code implementations • 5 Jun 2024 • Hrittik Roy, Marco Miani, Carl Henrik Ek, Philipp Hennig, Marvin Pförtner, Lukas Tatzel, Søren Hauberg
Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e., BNNs assign different posterior densities to different parametrizations of identical functions.
1 code implementation • 31 May 2024 • Martina Contisciani, Marius Hobbhahn, Eleanor A. Power, Philipp Hennig, Caterina De Bacco
In this paper, we develop a probabilistic generative model to perform inference in multilayer networks with arbitrary types of information.
2 code implementations • 14 May 2024 • Marvin Pförtner, Jonathan Wenger, Jon Cockayne, Philipp Hennig
In this work, we propose a probabilistic numerical method for inference in high-dimensional Gauss-Markov models which mitigates these scaling issues.
2 code implementations • 19 Feb 2024 • Jonas Beck, Nathanael Bosch, Michael Deistler, Kyra L. Kadhim, Jakob H. Macke, Philipp Hennig, Philipp Berens
Ordinary differential equations (ODEs) are widely used to describe dynamical systems in science, but identifying parameters that explain experimental measurements is challenging.
no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
no code implementations • 22 Dec 2023 • Nathaël Da Costa, Marvin Pförtner, Lancelot Da Costa, Philipp Hennig
While applications of GPs are myriad, a comprehensive understanding of GP sample paths, i.e., the function spaces over which they define a probability measure, is lacking.
no code implementations • NeurIPS 2023 • Runa Eschenhagen, Alexander Immer, Richard E. Turner, Frank Schneider, Philipp Hennig
In this work, we identify two different settings of linear weight-sharing layers which motivate two flavours of K-FAC -- $\textit{expand}$ and $\textit{reduce}$.
no code implementations • 31 Oct 2023 • Lukas Tatzel, Jonathan Wenger, Frank Schneider, Philipp Hennig
Bayesian Generalized Linear Models (GLMs) define a flexible probabilistic framework to model categorical, ordinal and continuous data, and are widely used in practice.
1 code implementation • 2 Oct 2023 • Nathanael Bosch, Adrien Corenflos, Fatemeh Yaghoobi, Filip Tronarp, Philipp Hennig, Simo Särkkä
Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation.
2 code implementations • NeurIPS 2023 • Jonathan Schmidt, Philipp Hennig, Jörg Nick, Filip Tronarp
In this paper, we propose a novel approximate Gaussian filtering and smoothing method which propagates low-rank approximations of the covariance matrices.
3 code implementations • 12 Jun 2023 • George E. Dahl, Frank Schneider, Zachary Nado, Naman Agarwal, Chandramouli Shama Sastry, Philipp Hennig, Sourabh Medapati, Runa Eschenhagen, Priya Kasimbeg, Daniel Suo, Juhan Bae, Justin Gilmer, Abel L. Peirson, Bilal Khan, Rohan Anil, Mike Rabbat, Shankar Krishnan, Daniel Snider, Ehsan Amid, Kongtao Chen, Chris J. Maddison, Rakshith Vasudev, Michal Badura, Ankush Garg, Peter Mattson
In order to address these challenges, we introduce a new, competitive, time-to-result benchmark using multiple workloads running on fixed hardware, the AlgoPerf: Training Algorithms benchmark.
1 code implementation • NeurIPS 2023 • Nathanael Bosch, Philipp Hennig, Filip Tronarp
However, like standard solvers, they suffer performance penalties for certain stiff systems, where small steps are required not for reasons of numerical accuracy but for the sake of stability.
no code implementations • 22 May 2023 • Katharina Ott, Michael Tiemann, Philipp Hennig
As a first contribution, we show that basic and lightweight Bayesian deep learning techniques like the Laplace approximation can be applied to neural ODEs to yield structured and meaningful uncertainty quantification.
1 code implementation • 22 May 2023 • Katharina Ott, Michael Tiemann, Philipp Hennig, François-Xavier Briol
Bayesian probabilistic numerical methods for numerical integration offer significant advantages over their non-Bayesian counterparts: they can encode prior information about the integrand, and can quantify uncertainty over estimates of an integral.
1 code implementation • 23 Dec 2022 • Marvin Pförtner, Ingo Steinwart, Philipp Hennig, Jonathan Wenger
Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements.
no code implementations • 2 Sep 2022 • Julia Grosse, Cheng Zhang, Philipp Hennig
Bayesian optimization is a popular formalism for global optimization, but its computational costs limit it to expensive-to-evaluate functions.
no code implementations • 2 Aug 2022 • Emilia Magnani, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco, Philipp Hennig
Neural operators are a type of deep architecture that learns to solve (i.e., learns the nonlinear solution operator of) partial differential equations (PDEs).
1 code implementation • 30 May 2022 • Jonathan Wenger, Geoff Pleiss, Marvin Pförtner, Philipp Hennig, John P. Cunningham
For any method in this class, we prove (i) convergence of its posterior mean in the associated RKHS, (ii) decomposability of its combined posterior covariance into mathematical and computational covariances, and (iii) that the combined variance is a tight worst-case bound for the squared error between the method's posterior mean and the latent function.
1 code implementation • 20 May 2022 • Agustinus Kristiadi, Runa Eschenhagen, Philipp Hennig
We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
2 code implementations • 16 May 2022 • Fynn Bachmann, Philipp Hennig, Dmitry Kobak
We use t-SNE to construct 2D embeddings of the units, based on the matrix of pairwise Wasserstein distances between them.
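A minimal sketch of this pipeline, assuming each unit is summarized by a one-dimensional sample of values (the data below are synthetic):

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.manifold import TSNE

# Hypothetical stand-in data: each "unit" is a 1-D sample.
rng = np.random.default_rng(0)
units = [rng.normal(loc=rng.uniform(-2, 2), size=200) for _ in range(50)]

# Matrix of pairwise 1-D Wasserstein distances between units.
n = len(units)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(units[i], units[j])

# t-SNE on the precomputed distance matrix (init must be "random" here).
emb = TSNE(n_components=2, metric="precomputed", init="random",
           perplexity=10, random_state=0).fit_transform(D)
print(emb.shape)  # (50, 2)
```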
1 code implementation • 7 Mar 2022 • Luca Rendsburg, Agustinus Kristiadi, Philipp Hennig, Ulrike von Luxburg
By reframing the problem in terms of incompatible conditional distributions we arrive at a natural solution: the Gibbs prior.
1 code implementation • 2 Feb 2022 • Filip Tronarp, Nathanael Bosch, Philipp Hennig
We show how probabilistic numerics can be used to convert an initial value problem into a Gauss--Markov process parametrised by the dynamics of the initial value problem.
1 code implementation • 3 Dec 2021 • Jonathan Wenger, Nicholas Krämer, Marvin Pförtner, Jonathan Schmidt, Nathanael Bosch, Nina Effenberger, Johannes Zenn, Alexandra Gessner, Toni Karvonen, François-Xavier Briol, Maren Mahsereci, Philipp Hennig
Probabilistic numerical methods (PNMs) solve numerical problems via probabilistic inference.
no code implementations • NeurIPS 2021 • Nicholas Krämer, Philipp Hennig
We propose a fast algorithm for the probabilistic solution of boundary value problems (BVPs), which are ordinary differential equations subject to boundary conditions.
no code implementations • 5 Nov 2021 • Runa Eschenhagen, Erik Daxberger, Philipp Hennig, Agustinus Kristiadi
Deep neural networks are prone to overconfident predictions on outliers.
2 code implementations • 22 Oct 2021 • Nicholas Krämer, Jonathan Schmidt, Philipp Hennig
Thereby, we extend the toolbox of probabilistic programs for differential equation simulation to PDEs.
no code implementations • 22 Oct 2021 • Nicholas Krämer, Nathanael Bosch, Jonathan Schmidt, Philipp Hennig
Probabilistic solvers for ordinary differential equations (ODEs) have emerged as an efficient framework for uncertainty quantification and inference on dynamical systems.
2 code implementations • 20 Oct 2021 • Nathanael Bosch, Filip Tronarp, Philipp Hennig
Probabilistic numerical solvers for ordinary differential equations compute posterior distributions over the solution of an initial value problem via Bayesian inference.
no code implementations • 1 Jul 2021 • Jonathan Wenger, Geoff Pleiss, Philipp Hennig, John P. Cunningham, Jacob R. Gardner
While preconditioning is well understood in the context of CG, we demonstrate that it can also accelerate convergence and reduce variance of the estimates for the log-determinant and its derivative.
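For context, a sketch of the stochastic estimator in question (the identity being accelerated, not the paper's preconditioning itself): $\partial_\theta \log\det K = \mathrm{tr}(K^{-1}\partial_\theta K)$, estimated with Rademacher probes and CG solves. The matrices below are synthetic.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)  # synthetic SPD "kernel" matrix
dK = np.eye(n)               # e.g. derivative w.r.t. a noise parameter

# tr(K^{-1} dK) ~ E[z^T K^{-1} dK z] over Rademacher probes z,
# with K^{-1} applied via conjugate gradients.
est, m = 0.0, 30
for _ in range(m):
    z = rng.choice([-1.0, 1.0], size=n)
    x, info = cg(K, dK @ z)  # x = K^{-1} dK z
    est += z @ x / m

exact = np.trace(np.linalg.solve(K, dK))
print(est, exact)  # stochastic estimate vs. exact trace
```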
6 code implementations • NeurIPS 2021 • Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig
Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.
1 code implementation • 18 Jun 2021 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
Despite their compelling theoretical properties, Bayesian neural networks (BNNs) tend to perform worse than frequentist methods in classification-based uncertainty quantification (UQ) tasks such as out-of-distribution (OOD) detection.
no code implementations • 16 Jun 2021 • Julia Grosse, Cheng Zhang, Philipp Hennig
Exciting contemporary machine learning problems have recently been phrased in the classic formalism of tree search -- most famously, the game of Go.
5 code implementations • 4 Jun 2021 • Felix Dangel, Lukas Tatzel, Philipp Hennig
Curvature in form of the Hessian or its generalized Gauss-Newton (GGN) approximation is valuable for algorithms that rely on a local model for the loss to train, compress, or explain deep networks.
no code implementations • 13 May 2021 • Matthias Werner, Andrej Junginger, Philipp Hennig, Georg Martius
Our system then utilizes a robust method to learn equations with atomic functions exhibiting singularities, such as logarithm and division.
1 code implementation • 7 May 2021 • Marius Hobbhahn, Philipp Hennig
The method can be thought of as a pre-processing step which can be implemented in <5 lines of code and runs in less than a second.
1 code implementation • NeurIPS 2021 • Jonathan Schmidt, Nicholas Krämer, Philipp Hennig
Mechanistic models with differential equations are a key component of scientific applications of machine learning.
no code implementations • 22 Feb 2021 • Filip de Roos, Carl Jidling, Adrian Wills, Thomas Schön, Philipp Hennig
Machine learning practitioners invest significant manual and computational resources in finding suitable learning rates for optimization algorithms.
1 code implementation • 15 Feb 2021 • Filip de Roos, Alexandra Gessner, Philipp Hennig
Although it is widely known that Gaussian processes can be conditioned on observations of the gradient, this functionality is of limited use due to the prohibitive computational cost of $\mathcal{O}(N^3 D^3)$ in data points $N$ and dimension $D$.
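To see where the $\mathcal{O}(N^3D^3)$ comes from: conditioning on gradients enlarges the kernel matrix from $N \times N$ to $N(D+1) \times N(D+1)$. A sketch using the standard RBF derivative kernels (illustrative, not the paper's method):

```python
import numpy as np

# Standard identities for the RBF kernel k(x,x') = exp(-||x-x'||^2/(2 l^2)):
#   dk/dx'         = k * (x - x') / l^2
#   d^2k/dx dx'^T  = k * (I / l^2 - (x - x')(x - x')^T / l^4)
def joint_kernel(X, ell=1.0):
    N, D = X.shape
    K = np.zeros((N * (D + 1), N * (D + 1)))
    for i in range(N):
        for j in range(N):
            r = X[i] - X[j]
            k = np.exp(-r @ r / (2 * ell**2))
            blk = np.zeros((D + 1, D + 1))
            blk[0, 0] = k                         # cov(f_i, f_j)
            blk[0, 1:] = k * r / ell**2           # cov(f_i, grad f_j)
            blk[1:, 0] = -k * r / ell**2          # cov(grad f_i, f_j)
            blk[1:, 1:] = k * (np.eye(D) / ell**2 - np.outer(r, r) / ell**4)
            K[i*(D+1):(i+1)*(D+1), j*(D+1):(j+1)*(D+1)] = blk
    return K

X = np.random.default_rng(0).standard_normal((5, 3))  # N=5, D=3
K = joint_kernel(X)
print(K.shape)  # (20, 20) = (N(D+1), N(D+1))
np.linalg.cholesky(K + 1e-6 * np.eye(len(K)))  # SPD sanity check
```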
2 code implementations • NeurIPS 2021 • Frank Schneider, Felix Dangel, Philipp Hennig
When engineers train deep learning models, they are very much 'flying blind'.
1 code implementation • 12 Feb 2021 • Christian Fröhlich, Alexandra Gessner, Philipp Hennig, Bernhard Schölkopf, Georgios Arvanitidis
Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.
1 code implementation • 1 Jan 2021 • Robin Marc Schmidt, Frank Schneider, Philipp Hennig
(iii) While we cannot discern an optimization method clearly dominating across all tested tasks, we identify a significantly reduced subset of specific algorithms and parameter choices that generally lead to competitive results in our experiments.
no code implementations • ICLR 2021 • Katharina Ott, Prateek Katiyar, Philipp Hennig, Michael Tiemann
If the trained model is supposed to be a flow generated from an ODE, it should be possible to choose another numerical solver with equal or smaller numerical error without loss of performance.
no code implementations • 18 Dec 2020 • Nicholas Krämer, Philipp Hennig
Probabilistic solvers for ordinary differential equations (ODEs) provide efficient quantification of numerical uncertainty associated with simulation of dynamical systems.
1 code implementation • 15 Dec 2020 • Nathanael Bosch, Philipp Hennig, Filip Tronarp
The contraction rate of this error estimate as a function of the solver's step size identifies it as a well-calibrated worst-case error, but its explicit numerical value for a certain step size is not automatically a good estimate of the actual error.
no code implementations • NeurIPS Workshop ICBINB 2020 • Ricky T. Q. Chen, Dami Choi, Lukas Balles, David Duvenaud, Philipp Hennig
Standard first-order stochastic optimization algorithms base their updates solely on the average mini-batch gradient, and it has been shown that tracking additional quantities such as the curvature can help de-sensitize common hyperparameters.
1 code implementation • NeurIPS 2020 • Jonathan Wenger, Philipp Hennig
Linear systems are the bedrock of virtually all numerical computation.
1 code implementation • 16 Oct 2020 • Alonso Marco, Dominik Baumann, Majid Khadiv, Philipp Hennig, Ludovic Righetti, Sebastian Trimpe
We consider failing behaviors as those that violate a constraint and address the problem of learning with crash constraints, where no data is obtained upon constraint violation.
no code implementations • NeurIPS 2021 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data.
1 code implementation • 6 Oct 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs).
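As a reference point, here is the textbook Laplace approximation for Bayesian logistic regression, the construction such papers scale up to deep networks (a minimal sketch on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = (X @ np.array([2.0, -1.0]) + 0.3 * rng.standard_normal(100) > 0).astype(float)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
prior_prec = 1.0

# MAP estimate by plain gradient descent on the negative log posterior.
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + prior_prec * w
    w -= 0.1 * grad / len(X)

# Laplace: Gaussian posterior N(w_MAP, H^{-1}), H = Hessian at the mode,
# here H = X^T diag(p(1-p)) X + prior precision.
p = sigmoid(X @ w)
H = X.T @ (p * (1 - p) * X.T).T + prior_prec * np.eye(2)
cov = np.linalg.inv(H)

# Predictive uncertainty via Monte Carlo over the weight posterior.
x_star = np.array([1.0, 1.0])
ws = rng.multivariate_normal(w, cov, size=1000)
print(sigmoid(ws @ x_star).mean(), sigmoid(ws @ x_star).std())
```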
no code implementations • 28 Sep 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.
1 code implementation • 3 Jul 2020 • Robin M. Schmidt, Frank Schneider, Philipp Hennig
Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one.
no code implementations • 1 Apr 2020 • Filip Tronarp, Simo Särkkä, Philipp Hennig
The remaining three classes are termed explicit, semi-implicit, and implicit, in analogy with the classical notions, and correspond to conditions on the vector field under which the filter update produces a local maximum a posteriori estimate.
1 code implementation • 2 Mar 2020 • Marius Hobbhahn, Agustinus Kristiadi, Philipp Hennig
We reconsider old work (Laplace Bridge) to construct a Dirichlet approximation of this softmax output distribution, which yields an analytic map between Gaussian distributions in logit space and Dirichlet distributions (the conjugate prior to the Categorical distribution) in the output space.
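A sketch of that analytic map, written from the closed form as I understand it from the Laplace Bridge literature (please verify against the authors' reference implementation before relying on it):

```python
import numpy as np

# Laplace Bridge: map a Gaussian N(mu, Sigma) over K logits to a Dirichlet
# over the simplex, assuming a diagonal covariance. Closed form (from the
# paper, as recalled; verify before use):
#   alpha_k = (1/Sigma_kk) * (1 - 2/K + exp(mu_k)/K^2 * sum_j exp(-mu_j))
def laplace_bridge(mu, sigma_diag):
    K = len(mu)
    return (1.0 / sigma_diag) * (
        1.0 - 2.0 / K + np.exp(mu) / K**2 * np.sum(np.exp(-mu))
    )

mu = np.array([2.0, 0.5, -1.0])         # Gaussian mean in logit space
sigma_diag = np.array([0.3, 0.3, 0.3])  # diagonal of its covariance
alpha = laplace_bridge(mu, sigma_diag)
print(alpha, alpha / alpha.sum())       # Dirichlet params and its mean
```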
1 code implementation • ICML 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
These theoretical results validate the usage of last-layer Bayesian approximation and motivate a range of fidelity-cost trade-offs.
no code implementations • ICML 2020 • Hans Kersting, Nicholas Krämer, Martin Schiegg, Christian Daniel, Michael Tiemann, Philipp Hennig
To address this shortcoming, we employ Gaussian ODE filtering (a probabilistic numerical method for ODEs) to construct a local Gaussian approximation to the likelihood.
1 code implementation • ICLR 2020 • Felix Dangel, Frederik Kunstner, Philipp Hennig
Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient.
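A small illustration of the point in plain PyTorch (not the paper's tooling): the averaged gradient falls out of a single backward pass, while per-sample gradients, one of the quantities such extensions extract cheaply, naively require one pass per example.

```python
import torch

model = torch.nn.Linear(10, 1)
X, y = torch.randn(8, 10), torch.randn(8, 1)

# One pass: the averaged mini-batch gradient.
loss = ((model(X) - y) ** 2).mean()
avg_grad = torch.autograd.grad(loss, model.weight)[0]

# Naive per-sample gradients: N separate backward passes.
per_sample = []
for i in range(len(X)):
    loss_i = ((model(X[i:i+1]) - y[i:i+1]) ** 2).mean()
    per_sample.append(torch.autograd.grad(loss_i, model.weight)[0])

# The per-sample gradients average back to the mini-batch gradient.
assert torch.allclose(avg_grad, torch.stack(per_sample).mean(0), atol=1e-5)
```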
no code implementations • 14 Nov 2019 • Simon Bartels, Philipp Hennig
Regularized least-squares (kernel-ridge / Gaussian process) regression is a fundamental algorithm of statistics and machine learning.
2 code implementations • 21 Oct 2019 • Alexandra Gessner, Oindrila Kanjilal, Philipp Hennig
Integrals of linearly constrained multivariate Gaussian densities are a frequent problem in machine learning and statistics, arising in tasks like generalized linear models and Bayesian optimization.
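For orientation, the naive Monte Carlo baseline for such integrals, estimating $P(Ax \leq b)$ for Gaussian $x$, which specialized methods aim to beat in the small-probability, high-dimension regime (synthetic constraints):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 3
Sigma = np.eye(d)                   # Gaussian covariance
A = rng.standard_normal((m, d))     # linear constraints A x <= b
b = rng.standard_normal(m)

# Crude estimate: fraction of samples satisfying all constraints.
x = rng.multivariate_normal(np.zeros(d), Sigma, size=200_000)
inside = np.all(x @ A.T <= b, axis=1)
print(inside.mean())
```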
no code implementations • 24 Jul 2019 • Alonso Marco, Dominik Baumann, Philipp Hennig, Sebastian Trimpe
Learning robot controllers by minimizing a black-box objective cost using Bayesian optimization (BO) can be time-consuming and challenging.
no code implementations • 27 Jun 2019 • Michael Lohaus, Philipp Hennig, Ulrike von Luxburg
To investigate objects without a describable notion of distance, one can gather ordinal information by asking triplet comparisons of the form "Is object $x$ closer to $y$ or is $x$ closer to $z$?"
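A toy oracle producing exactly this kind of ordinal data from a ground-truth distance matrix (a hypothetical setup for illustration):

```python
import numpy as np

# Answer the triplet query "is x closer to y than to z?" from a
# ground-truth distance matrix D.
def triplet_oracle(D, x, y, z):
    return D[x, y] < D[x, z]  # True means "x is closer to y"

rng = np.random.default_rng(0)
points = rng.standard_normal((20, 2))
D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

print(triplet_oracle(D, 0, 3, 7))
```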
1 code implementation • NeurIPS 2019 • Frederik Kunstner, Lukas Balles, Philipp Hennig
Natural gradient descent, which preconditions a gradient descent update with the Fisher information matrix of the underlying statistical model, is a way to capture partial second-order information.
no code implementations • NeurIPS 2019 • Motonobu Kanagawa, Philipp Hennig
Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical integration that empirically compares favorably with Monte Carlo integration on problems of medium dimensionality (where non-adaptive quadrature is not competitive).
1 code implementation • ICLR 2019 • Frank Schneider, Lukas Balles, Philipp Hennig
We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization.
1 code implementation • 20 Feb 2019 • Filip de Roos, Philipp Hennig
Pre-conditioning is a well-known concept that can significantly improve the convergence of optimization algorithms.
1 code implementation • 5 Feb 2019 • Felix Dangel, Stefan Harmeling, Philipp Hennig
We propose a modular extension of backpropagation for the computation of block-diagonal approximations to various curvature matrices of the training objective (in particular, the Hessian, generalized Gauss-Newton, and positive-curvature Hessian).
no code implementations • 22 Jan 2019 • Georgios Arvanitidis, Søren Hauberg, Philipp Hennig, Michael Schober
We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data.
1 code implementation • 8 Oct 2018 • Filip Tronarp, Hans Kersting, Simo Särkkä, Philipp Hennig
We formulate probabilistic numerical approximations to solutions of ordinary differential equations (ODEs) as problems in Gaussian process (GP) regression with non-linear measurement functions.
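In this spirit, a bare-bones scalar sketch of such an ODE filter (an EK0-style variant with an integrated Wiener process prior; a simplification for illustration, not the paper's full algorithm):

```python
import numpy as np

# Prior: once-integrated Wiener process on (x, x'). Measurement: the ODE
# residual x'(t) - f(x(t)) = 0, linearized by ignoring df/dx (EK0-style).
f = lambda x: x * (1.0 - x)          # logistic ODE as a test problem
h, T, q2 = 0.01, 10.0, 1.0           # step size, horizon, diffusion

A = np.array([[1.0, h], [0.0, 1.0]])                      # transition
Q = q2 * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])  # process noise
H = np.array([[0.0, 1.0]])                                # observe x'

m = np.array([0.1, f(0.1)])          # init: x(0) and consistent x'(0)
P = 1e-10 * np.eye(2)

for _ in range(int(T / h)):
    m, P = A @ m, A @ P @ A.T + Q               # predict
    r = m[1] - f(m[0])                          # ODE residual at the mean
    S = (H @ P @ H.T)[0, 0]                     # residual variance
    K = (P @ H.T)[:, 0] / S                     # Kalman gain
    m, P = m - K * r, P - np.outer(K, K) * S    # condition on residual = 0

print(m[0])  # posterior mean of x(10); ~1.0 for the logistic ODE
```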
no code implementations • 25 Jul 2018 • Hans Kersting, T. J. Sullivan, Philipp Hennig
A recently-introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems.
no code implementations • 6 Jul 2018 • Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, Bharath K. Sriperumbudur
This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other.
no code implementations • 25 Sep 2017 • Emilia Magnani, Hans Kersting, Michael Schober, Philipp Hennig
Recently there has been increasing interest in probabilistic solvers for ordinary differential equations (ODEs) that return full probability measures, instead of point estimates, over the solution and can incorporate uncertainty over the ODE at hand, e.g., if the vector field or the initial value is only approximately known or evaluable.
no code implementations • 20 Sep 2017 • Alonso Marco, Philipp Hennig, Stefan Schaal, Sebastian Trimpe
Finding optimal feedback controllers for nonlinear dynamic systems from data is hard.
no code implementations • 30 Jun 2017 • Paul K. Rubenstein, Ilya Tolstikhin, Philipp Hennig, Bernhard Schoelkopf
We consider the problem of learning the functions computing children from parents in a Structural Causal Model once the underlying causal graph has been identified.
no code implementations • 1 Jun 2017 • Filip de Roos, Philipp Hennig
To alleviate this problem, several linear-time approximations, such as spectral and inducing-point methods, have been suggested and are now in wide use.
2 code implementations • ICML 2018 • Lukas Balles, Philipp Hennig
The ADAM optimizer is exceedingly popular in the deep learning community.
1 code implementation • NeurIPS 2015 • Maren Mahsereci, Philipp Hennig
In deterministic optimization, line searches are a standard tool ensuring stability and efficiency.
no code implementations • 28 Mar 2017 • Maren Mahsereci, Lukas Balles, Christoph Lassner, Philipp Hennig
Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization.
no code implementations • 3 Mar 2017 • Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P. Schoellig, Andreas Krause, Stefan Schaal, Sebastian Trimpe
In practice, the parameters of control policies are often tuned manually.
1 code implementation • 15 Dec 2016 • Lukas Balles, Javier Romero, Philipp Hennig
The batch size significantly influences the behavior of the stochastic optimization algorithm, since it determines the variance of the gradient estimates.
1 code implementation • 17 Oct 2016 • Michael Schober, Simo Särkkä, Philipp Hennig
Like many numerical methods, solvers for initial value problems (IVPs) on ordinary differential equations estimate an analytically intractable quantity, using the results of tractable computations as inputs.
no code implementations • 22 Sep 2016 • Philipp Hennig, Roman Garnett
Determinantal point processes (DPPs) are an important concept in random matrix theory and combinatorics.
1 code implementation • 23 May 2016 • Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter
Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.
no code implementations • 11 May 2016 • Hans Kersting, Philipp Hennig
There is resurging interest, in statistics and machine learning, in solvers for ordinary differential equations (ODEs) that return probability measures instead of point estimates.
no code implementations • 6 May 2016 • Alonso Marco, Philipp Hennig, Jeannette Bohg, Stefan Schaal, Sebastian Trimpe
With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data.
no code implementations • 13 Oct 2015 • Edgar D. Klenske, Philipp Hennig
Control of non-episodic, finite-horizon dynamical systems with uncertain dynamics poses a tough and elementary case of the exploration-exploitation trade-off.
no code implementations • 3 Jun 2015 • Philipp Hennig, Michael A. Osborne, Mark Girolami
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations.
1 code implementation • 29 May 2015 • Javier González, Zhenwen Dai, Philipp Hennig, Neil D. Lawrence
The approach assumes that the function of interest, $f$, is a Lipschitz continuous function.
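For reference, $f$ is $L$-Lipschitz if $|f(x) - f(x')| \leq L\,\|x - x'\|$ for all $x, x'$; such a bound limits how quickly $f$ can vary, which is what allows a batch method to penalize the neighbourhoods of points already selected for evaluation.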
no code implementations • NeurIPS 2014 • Franziska Meier, Philipp Hennig, Stefan Schaal
Locally weighted regression (LWR) was created as a nonparametric method that can approximate a wide range of functions, is computationally efficient, and can learn continually from very large amounts of incrementally collected data.
no code implementations • NeurIPS 2014 • Tom Gunter, Michael A. Osborne, Roman Garnett, Philipp Hennig, Stephen J. Roberts
We propose a novel sampling framework for inference in probabilistic models: an active learning approach that converges more quickly (in wall-clock time) than Markov chain Monte Carlo (MCMC) benchmarks.
no code implementations • NeurIPS 2014 • Michael Schober, David Duvenaud, Philipp Hennig
We construct a family of probabilistic numerical methods that instead return a Gauss-Markov process defining a probability distribution over the ODE solution.
no code implementations • 10 Feb 2014 • Philipp Hennig
This manuscript proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems $Bx = b$ with positive definite $B$ for $x$.
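The best-known member of that class of iterative solvers is conjugate gradients; for context, a standard implementation of the classical iteration (not its probabilistic reinterpretation):

```python
import numpy as np

# Plain conjugate gradients for B x = b, with B symmetric positive definite.
def conjugate_gradient(B, b, tol=1e-10, maxiter=None):
    x = np.zeros_like(b)
    r = b - B @ x          # residual
    p = r.copy()           # search direction
    for _ in range(maxiter or len(b)):
        Bp = B @ p
        alpha = (r @ r) / (p @ Bp)
        x += alpha * p
        r_new = r - alpha * Bp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
B = A @ A.T + 50 * np.eye(50)      # synthetic SPD matrix
b = rng.standard_normal(50)
x = conjugate_gradient(B, b)
print(np.linalg.norm(B @ x - b))   # residual norm, ~1e-10
```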
no code implementations • 4 Feb 2014 • Franziska Meier, Philipp Hennig, Stefan Schaal
Locally weighted regression was created as a nonparametric learning method that is computationally efficient, can learn from very large amounts of data and add data incrementally.
no code implementations • 24 Oct 2013 • Roman Garnett, Michael A. Osborne, Philipp Hennig
We propose an active learning method for discovering low-dimensional structure in high-dimensional Gaussian process (GP) tasks.
no code implementations • 3 Jun 2013 • Philipp Hennig, Søren Hauberg
We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution.
no code implementations • NeurIPS 2013 • David Lopez-Paz, Philipp Hennig, Bernhard Schölkopf
We introduce the Randomized Dependence Coefficient (RDC), a measure of non-linear dependence between random variables of arbitrary dimension based on the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient.
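A compact sketch of the RDC construction as the abstract describes it, with illustrative constants (k, s are not the paper's recommended defaults): empirical copula transform, random sine features, then the top canonical correlation.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.cross_decomposition import CCA

def rdc(x, y, k=10, s=0.5, rng=np.random.default_rng(0)):
    def features(v):
        u = rankdata(v) / len(v)                       # empirical copula
        u = np.column_stack([u, np.ones(len(v))])      # affine augmentation
        w = rng.normal(scale=s, size=(u.shape[1], k))  # random projections
        return np.sin(u @ w)                           # nonlinear features
    fx, fy = features(x), features(y)
    cx, cy = CCA(n_components=1).fit_transform(fx, fy)
    return abs(np.corrcoef(cx[:, 0], cy[:, 0])[0, 1])

x = np.random.default_rng(1).uniform(-1, 1, 500)
print(rdc(x, x**2))  # high despite zero linear correlation
```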
no code implementations • NeurIPS 2011 • Philipp Hennig
The exploration-exploitation trade-off is among the central challenges of reinforcement learning.
no code implementations • 29 Nov 2011 • John P. Cunningham, Philipp Hennig, Simon Lacoste-Julien
We consider these unexpected results empirically and theoretically, both for the problem of Gaussian probabilities and for EP more generally.