Search Results for author: Philipp Hennig

Found 97 papers, 46 papers with code

Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Ordinary Differential Equations

1 code implementation 19 Feb 2024 Jonas Beck, Nathanael Bosch, Michael Deistler, Kyra L. Kadhim, Jakob H. Macke, Philipp Hennig, Philipp Berens

Ordinary differential equations (ODEs) are widely used to describe dynamical systems in science, but identifying parameters that explain experimental measurements is challenging.

Sample Path Regularity of Gaussian Processes from the Covariance Kernel

no code implementations 22 Dec 2023 Nathaël Da Costa, Marvin Pförtner, Lancelot Da Costa, Philipp Hennig

While applications of GPs are myriad, a comprehensive understanding of GP sample paths, i.e. the function spaces over which they define a probability measure, is lacking.

Gaussian Processes

Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures

no code implementations NeurIPS 2023 Runa Eschenhagen, Alexander Immer, Richard E. Turner, Frank Schneider, Philipp Hennig

In this work, we identify two different settings of linear weight-sharing layers which motivate two flavours of K-FAC -- $\textit{expand}$ and $\textit{reduce}$.
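
For orientation, the classical K-FAC approximation that these two flavours generalize (a standard result, notation mine): for a linear layer with input activations $a$ and backpropagated output gradients $g$, the layer's Fisher/GGN block is approximated by a Kronecker product,

$F_\ell \approx A \otimes G, \qquad A = \mathbb{E}[aa^\top], \quad G = \mathbb{E}[gg^\top],$

which is cheap to store and invert because only the two small factors are needed. Weight sharing, as in transformers or graph networks, yields multiple $(a, g)$ pairs per layer and input, and the expand/reduce distinction concerns how these are aggregated.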

Accelerating Generalized Linear Models by Trading off Computation for Uncertainty

no code implementations 31 Oct 2023 Lukas Tatzel, Jonathan Wenger, Frank Schneider, Philipp Hennig

Bayesian Generalized Linear Models (GLMs) define a flexible probabilistic framework to model categorical, ordinal and continuous data, and are widely used in practice.

Parallel-in-Time Probabilistic Numerical ODE Solvers

1 code implementation 2 Oct 2023 Nathanael Bosch, Adrien Corenflos, Fatemeh Yaghoobi, Filip Tronarp, Philipp Hennig, Simo Särkkä

Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation.

The Rank-Reduced Kalman Filter: Approximate Dynamical-Low-Rank Filtering In High Dimensions

2 code implementations NeurIPS 2023 Jonathan Schmidt, Philipp Hennig, Jörg Nick, Filip Tronarp

In this paper, we propose a novel approximate Gaussian filtering and smoothing method which propagates low-rank approximations of the covariance matrices.

Dimensionality Reduction

Probabilistic Exponential Integrators

1 code implementation NeurIPS 2023 Nathanael Bosch, Philipp Hennig, Filip Tronarp

However, like standard solvers, they suffer performance penalties for certain stiff systems, where small steps are required not for reasons of numerical accuracy but for the sake of stability.

Uncertainty Quantification

Uncertainty and Structure in Neural Ordinary Differential Equations

no code implementations 22 May 2023 Katharina Ott, Michael Tiemann, Philipp Hennig

As a first contribution, we show that basic and lightweight Bayesian deep learning techniques like the Laplace approximation can be applied to neural ODEs to yield structured and meaningful uncertainty quantification.

Uncertainty Quantification

Bayesian Numerical Integration with Neural Networks

1 code implementation 22 May 2023 Katharina Ott, Michael Tiemann, Philipp Hennig, François-Xavier Briol

Bayesian probabilistic numerical methods for numerical integration offer significant advantages over their non-Bayesian counterparts: they can encode prior information about the integrand, and can quantify uncertainty over estimates of an integral.

Numerical Integration
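
For context, the mechanism of standard Bayesian quadrature that this work builds on (not the paper's contribution): with a GP prior $f \sim \mathcal{GP}(0, k)$ and evaluations $f(X)$ at nodes $X = \{x_1, \dots, x_n\}$, the posterior over the integral against a measure $\pi$ is Gaussian with mean

$\mathbb{E}\big[\textstyle\int f \, \mathrm{d}\pi \mid f(X)\big] = z^\top K^{-1} f(X), \qquad z_i = \int k(x, x_i) \, \mathrm{d}\pi(x),$

and a closed-form variance quantifying uncertainty about the integral's value; the paper develops an analogous construction with neural networks in place of the GP.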

Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers

1 code implementation 23 Dec 2022 Marvin Pförtner, Ingo Steinwart, Philipp Hennig, Jonathan Wenger

Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements.

Bayesian Inference regression

Optimistic Optimization of Gaussian Process Samples

no code implementations 2 Sep 2022 Julia Grosse, Cheng Zhang, Philipp Hennig

Bayesian optimization is a popular formalism for global optimization, but its computational costs limit it to expensive-to-evaluate functions.

Bayesian Optimization Computational Efficiency

Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs

no code implementations 2 Aug 2022 Emilia Magnani, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco, Philipp Hennig

Neural operators are a type of deep architecture that learns to solve (i.e. learns the nonlinear solution operator of) partial differential equations (PDEs).

Gaussian Processes Uncertainty Quantification

Posterior and Computational Uncertainty in Gaussian Processes

1 code implementation 30 May 2022 Jonathan Wenger, Geoff Pleiss, Marvin Pförtner, Philipp Hennig, John P. Cunningham

For any method in this class, we prove (i) convergence of its posterior mean in the associated RKHS, (ii) decomposability of its combined posterior covariance into mathematical and computational covariances, and (iii) that the combined variance is a tight worst-case bound for the squared error between the method's posterior mean and the latent function.

Gaussian Processes
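
Schematically (my notation, mirroring the abstract's decomposition): if $\hat{K} = K + \sigma^2 I$ and the iterative method uses an approximation $C \approx \hat{K}^{-1}$, the combined posterior covariance splits exactly as

$k(x, x') - k(x, X) C k(X, x') = \big[k(x, x') - k(x, X) \hat{K}^{-1} k(X, x')\big] + k(x, X) \big(\hat{K}^{-1} - C\big) k(X, x'),$

with the first bracket the mathematical uncertainty of exact GP inference and the second term the computational uncertainty left by the unfinished solve.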

Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks

1 code implementation 20 May 2022 Agustinus Kristiadi, Runa Eschenhagen, Philipp Hennig

We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.

Wasserstein t-SNE

2 code implementations 16 May 2022 Fynn Bachmann, Philipp Hennig, Dmitry Kobak

We use t-SNE to construct 2D embeddings of the units, based on the matrix of pairwise Wasserstein distances between them.
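
A minimal sketch of this pipeline (illustrative, not the authors' implementation; the synthetic units, the 1-D Wasserstein distance, and the perplexity value are all assumptions):

```python
# Embed "units" that are themselves sample distributions: compute pairwise
# 1-D Wasserstein distances, then feed the distance matrix to t-SNE.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
units = [rng.normal(loc=rng.uniform(-3, 3), scale=1.0, size=200) for _ in range(50)]

n = len(units)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(units[i], units[j])

embedding = TSNE(n_components=2, metric="precomputed",
                 init="random", perplexity=10).fit_transform(D)
```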

Fenrir: Physics-Enhanced Regression for Initial Value Problems

1 code implementation 2 Feb 2022 Filip Tronarp, Nathanael Bosch, Philipp Hennig

We show how probabilistic numerics can be used to convert an initial value problem into a Gauss--Markov process parametrised by the dynamics of the initial value problem.

Numerical Integration regression

Linear-Time Probabilistic Solution of Boundary Value Problems

no code implementations NeurIPS 2021 Nicholas Krämer, Philipp Hennig

We propose a fast algorithm for the probabilistic solution of boundary value problems (BVPs), which are ordinary differential equations subject to boundary conditions.

Uncertainty Quantification

Probabilistic Numerical Method of Lines for Time-Dependent Partial Differential Equations

2 code implementations 22 Oct 2021 Nicholas Krämer, Jonathan Schmidt, Philipp Hennig

Thereby, we extend the toolbox of probabilistic programs for differential equation simulation to PDEs.

Bayesian Inference

Probabilistic ODE Solutions in Millions of Dimensions

no code implementations 22 Oct 2021 Nicholas Krämer, Nathanael Bosch, Jonathan Schmidt, Philipp Hennig

Probabilistic solvers for ordinary differential equations (ODEs) have emerged as an efficient framework for uncertainty quantification and inference on dynamical systems.

Uncertainty Quantification

Pick-and-Mix Information Operators for Probabilistic ODE Solvers

2 code implementations 20 Oct 2021 Nathanael Bosch, Filip Tronarp, Philipp Hennig

Probabilistic numerical solvers for ordinary differential equations compute posterior distributions over the solution of an initial value problem via Bayesian inference.

Bayesian Inference

Preconditioning for Scalable Gaussian Process Hyperparameter Optimization

no code implementations 1 Jul 2021 Jonathan Wenger, Geoff Pleiss, Philipp Hennig, John P. Cunningham, Jacob R. Gardner

While preconditioning is well understood in the context of CG, we demonstrate that it can also accelerate convergence and reduce variance of the estimates for the log-determinant and its derivative.

Gaussian Processes Hyperparameter Optimization
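
For orientation, the estimator in question (standard stochastic trace estimation, not this paper's novelty): hyperparameter gradients of the GP log-marginal likelihood need $\operatorname{tr}(\hat{K}^{-1} \partial\hat{K}/\partial\theta)$ with $\hat{K} = K + \sigma^2 I$, which is approximated with Hutchinson probes,

$\operatorname{tr}\big(\hat{K}^{-1} \tfrac{\partial \hat{K}}{\partial \theta}\big) \approx \tfrac{1}{m} \textstyle\sum_{i=1}^{m} z_i^\top \hat{K}^{-1} \tfrac{\partial \hat{K}}{\partial \theta} z_i, \qquad z_i \sim \mathcal{N}(0, I),$

where each solve $\hat{K}^{-1} z_i$ runs through (preconditioned) CG; the paper's point is that the preconditioner helps both the solves and the variance of these estimates.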

Laplace Redux -- Effortless Bayesian Deep Learning

3 code implementations NeurIPS 2021 Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig

Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.

Model Selection +1
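
The paper is accompanied by a software library; the sketch below follows its documented interface (argument names reproduced from the project's README and may differ across versions, so treat it as approximate):

```python
# Post-hoc Laplace approximation of a trained classifier with laplace-torch,
# then Bayesian predictive probabilities. A toy model and dataset stand in
# for the real training pipeline.
import torch
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace

model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 3))
X, y = torch.randn(64, 4), torch.randint(0, 3, (64,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=16)

la = Laplace(model, "classification",
             subset_of_weights="last_layer",   # cheap and often sufficient
             hessian_structure="kron")         # Kronecker-factored curvature
la.fit(train_loader)                           # one curvature pass over the data
la.optimize_prior_precision()                  # marginal-likelihood-based tuning
probs = la(torch.randn(5, 4))                  # Bayesian predictive probabilities
```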

Being a Bit Frequentist Improves Bayesian Neural Networks

1 code implementation 18 Jun 2021 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

Despite their compelling theoretical properties, Bayesian neural networks (BNNs) tend to perform worse than frequentist methods in classification-based uncertainty quantification (UQ) tasks such as out-of-distribution (OOD) detection.

Bayesian Inference Out of Distribution (OOD) Detection +1

Probabilistic DAG Search

no code implementations 16 Jun 2021 Julia Grosse, Cheng Zhang, Philipp Hennig

Exciting contemporary machine learning problems have recently been phrased in the classic formalism of tree search -- most famously, the game of Go.

Decision Making feature selection +1

Linear-Time Probabilistic Solutions of Boundary Value Problems

no code implementations 14 Jun 2021 Nicholas Krämer, Philipp Hennig

We propose a fast algorithm for the probabilistic solution of boundary value problems (BVPs), which are ordinary differential equations subject to boundary conditions.

Uncertainty Quantification

ViViT: Curvature access through the generalized Gauss-Newton's low-rank structure

3 code implementations 4 Jun 2021 Felix Dangel, Lukas Tatzel, Philipp Hennig

Curvature in form of the Hessian or its generalized Gauss-Newton (GGN) approximation is valuable for algorithms that rely on a local model for the loss to train, compress, or explain deep networks.

Laplace Redux - Effortless Bayesian Deep Learning

no code implementations NeurIPS 2021 Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig

Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.

Model Selection +1

Informed Equation Learning

no code implementations 13 May 2021 Matthias Werner, Andrej Junginger, Philipp Hennig, Georg Martius

Our system then utilizes a robust method to learn equations with atomic functions exhibiting singularities, such as logarithm and division.

Laplace Matching for fast Approximate Inference in Latent Gaussian Models

1 code implementation 7 May 2021 Marius Hobbhahn, Philipp Hennig

The method can be thought of as a pre-processing step which can be implemented in <5 lines of code and runs in less than a second.

Bayesian Inference Gaussian Processes +1

A Probabilistically Motivated Learning Rate Adaptation for Stochastic Optimization

no code implementations 22 Feb 2021 Filip de Roos, Carl Jidling, Adrian Wills, Thomas Schön, Philipp Hennig

Machine learning practitioners invest significant manual and computational resources in finding suitable learning rates for optimization algorithms.

Stochastic Optimization

High-Dimensional Gaussian Process Inference with Derivatives

1 code implementation 15 Feb 2021 Filip de Roos, Alexandra Gessner, Philipp Hennig

Although it is widely known that Gaussian processes can be conditioned on observations of the gradient, this functionality is of limited use due to the prohibitive computational cost of $\mathcal{O}(N^3 D^3)$ in data points $N$ and dimension $D$.

Gaussian Processes
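
The exponent is easy to reconstruct: conditioning on a full gradient turns each data point into $1 + D$ scalar observations, so the joint Gram matrix of values and derivatives has size $N(1 + D) \times N(1 + D)$, and a dense solve costs $\mathcal{O}(N^3 (1 + D)^3)$, i.e. $\mathcal{O}(N^3 D^3)$ for large $D$.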

Bayesian Quadrature on Riemannian Data Manifolds

1 code implementation 12 Feb 2021 Christian Fröhlich, Alexandra Gessner, Philipp Hennig, Bernhard Schölkopf, Georgios Arvanitidis

Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.

Descending through a Crowded Valley — Benchmarking Deep Learning Optimizers

1 code implementation 1 Jan 2021 Robin Marc Schmidt, Frank Schneider, Philipp Hennig

(iii) While we cannot discern an optimization method clearly dominating across all tested tasks, we identify a significantly reduced subset of specific algorithms and parameter choices that generally lead to competitive results in our experiments.

Benchmarking

ResNet After All: Neural ODEs and Their Numerical Solution

no code implementations ICLR 2021 Katharina Ott, Prateek Katiyar, Philipp Hennig, Michael Tiemann

If the trained model is supposed to be a flow generated from an ODE, it should be possible to choose another numerical solver with equal or smaller numerical error without loss of performance.

Stable Implementation of Probabilistic ODE Solvers

no code implementations 18 Dec 2020 Nicholas Krämer, Philipp Hennig

Probabilistic solvers for ordinary differential equations (ODEs) provide efficient quantification of numerical uncertainty associated with simulation of dynamical systems.

Calibrated Adaptive Probabilistic ODE Solvers

1 code implementation 15 Dec 2020 Nathanael Bosch, Philipp Hennig, Filip Tronarp

The contraction rate of this error estimate as a function of the solver's step size identifies it as a well-calibrated worst-case error, but its explicit numerical value for a certain step size is not automatically a good estimate of the explicit error.

Benchmarking

Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering

no code implementations NeurIPS Workshop ICBINB 2020 Ricky T. Q. Chen, Dami Choi, Lukas Balles, David Duvenaud, Philipp Hennig

Standard first-order stochastic optimization algorithms base their updates solely on the average mini-batch gradient, and it has been shown that tracking additional quantities such as the curvature can help de-sensitize common hyperparameters.

Stochastic Optimization

Robot Learning with Crash Constraints

1 code implementation 16 Oct 2020 Alonso Marco, Dominik Baumann, Majid Khadiv, Philipp Hennig, Ludovic Righetti, Sebastian Trimpe

We consider failing behaviors as those that violate a constraint and address the problem of learning with crash constraints, where no data is obtained upon constraint violation.

Bayesian Optimization

Learnable Uncertainty under Laplace Approximations

1 code implementation 6 Oct 2020 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs).

Uncertainty Quantification

An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence

no code implementations NeurIPS 2021 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data.

Multi-class Classification

Fixing Asymptotic Uncertainty of Bayesian Neural Networks with Infinite ReLU Features

no code implementations 28 Sep 2020 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.

Multi-class Classification

ResNet After All? Neural ODEs and Their Numerical Solution

1 code implementation 30 Jul 2020 Katharina Ott, Prateek Katiyar, Philipp Hennig, Michael Tiemann

If the trained model is supposed to be a flow generated from an ODE, it should be possible to choose another numerical solver with equal or smaller numerical error without loss of performance.

Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers

1 code implementation 3 Jul 2020 Robin M. Schmidt, Frank Schneider, Philipp Hennig

Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one.

Benchmarking

Bayesian ODE Solvers: The Maximum A Posteriori Estimate

no code implementations 1 Apr 2020 Filip Tronarp, Simo Särkkä, Philipp Hennig

The remaining three classes are termed explicit, semi-implicit, and implicit, in analogy with the classical notions, corresponding to conditions on the vector field under which the filter update produces a local maximum a posteriori estimate.

Bayesian Inference

Fast Predictive Uncertainty for Classification with Bayesian Deep Networks

1 code implementation 2 Mar 2020 Marius Hobbhahn, Agustinus Kristiadi, Philipp Hennig

We reconsider old work (Laplace Bridge) to construct a Dirichlet approximation of this softmax output distribution, which yields an analytic map between Gaussian distributions in logit space and Dirichlet distributions (the conjugate prior to the Categorical distribution) in the output space.

Classification General Classification
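
A hedged sketch of that analytic map (the formula follows the paper as I recall it, so verify against the reference before relying on it; the test values are arbitrary):

```python
# Laplace Bridge: map a logit-space Gaussian N(mu, diag(sigma_sq)) to a
# Dirichlet(alpha) over the K-class output simplex.
import numpy as np

def laplace_bridge(mu, sigma_sq):
    K = mu.shape[-1]
    return (1.0 / sigma_sq) * (1.0 - 2.0 / K
                               + np.exp(mu) * np.exp(-mu).sum(-1, keepdims=True) / K**2)

alpha = laplace_bridge(np.array([2.0, 0.5, -1.0]), np.array([0.3, 0.3, 0.3]))
print(alpha / alpha.sum())   # Dirichlet mean, approximating the expected softmax
```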

Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks

1 code implementation ICML 2020 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

These theoretical results validate the use of last-layer Bayesian approximations and motivate a range of fidelity-cost trade-offs.

Bayesian Inference

Differentiable Likelihoods for Fast Inversion of 'Likelihood-Free' Dynamical Systems

no code implementations ICML 2020 Hans Kersting, Nicholas Krämer, Martin Schiegg, Christian Daniel, Michael Tiemann, Philipp Hennig

To address this shortcoming, we employ Gaussian ODE filtering (a probabilistic numerical method for ODEs) to construct a local Gaussian approximation to the likelihood.

BackPACK: Packing more into backprop

1 code implementation ICLR 2020 Felix Dangel, Frederik Kunstner, Philipp Hennig

Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient.
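
A usage sketch following the library's documented interface (see backpack.pt; the extension shown here is one of several the package offers, and details may shift across versions):

```python
# Extend model and loss, then pull extra quantities -- here per-sample
# gradients -- out of a single, otherwise ordinary backward pass.
import torch
from backpack import backpack, extend
from backpack.extensions import BatchGrad

model = extend(torch.nn.Linear(10, 2))
lossfunc = extend(torch.nn.CrossEntropyLoss())

X, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
with backpack(BatchGrad()):
    lossfunc(model(X), y).backward()

for p in model.parameters():
    print(p.grad.shape, p.grad_batch.shape)   # mean gradient and 8 individual gradients
```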

Conjugate Gradients for Kernel Machines

no code implementations 14 Nov 2019 Simon Bartels, Philipp Hennig

Regularized least-squares (kernel-ridge / Gaussian process) regression is a fundamental algorithm of statistics and machine learning.

regression

Integrals over Gaussians under Linear Domain Constraints

2 code implementations 21 Oct 2019 Alexandra Gessner, Oindrila Kanjilal, Philipp Hennig

Integrals of linearly constrained multivariate Gaussian densities are a frequent problem in machine learning and statistics, arising in tasks like generalized linear models and Bayesian optimization.

Bayesian Optimization

Classified Regression for Bayesian Optimization: Robot Learning with Unknown Penalties

no code implementations 24 Jul 2019 Alonso Marco, Dominik Baumann, Philipp Hennig, Sebastian Trimpe

Learning robot controllers by minimizing a black-box objective cost using Bayesian optimization (BO) can be time-consuming and challenging.

Bayesian Optimization regression

Uncertainty Estimates for Ordinal Embeddings

no code implementations 27 Jun 2019 Michael Lohaus, Philipp Hennig, Ulrike von Luxburg

To investigate objects without a describable notion of distance, one can gather ordinal information by asking triplet comparisons of the form "Is object $x$ closer to $y$ or is $x$ closer to $z$?"

Limitations of the Empirical Fisher Approximation for Natural Gradient Descent

1 code implementation NeurIPS 2019 Frederik Kunstner, Lukas Balles, Philipp Hennig

Natural gradient descent, which preconditions a gradient descent update with the Fisher information matrix of the underlying statistical model, is a way to capture partial second-order information.

Second-order methods
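
The distinction at issue, for reference (standard definitions): the Fisher information takes an expectation over the model's own predictive distribution, whereas the "empirical Fisher" plugs in the observed labels,

$F(\theta) = \sum_n \mathbb{E}_{y \sim p_\theta(y \mid x_n)}\big[\nabla \log p_\theta(y \mid x_n) \, \nabla \log p_\theta(y \mid x_n)^\top\big], \qquad \tilde{F}(\theta) = \sum_n \nabla \log p_\theta(y_n \mid x_n) \, \nabla \log p_\theta(y_n \mid x_n)^\top,$

and the paper argues that $\tilde{F}$, despite its popularity as a cheap substitute, can differ from $F$ in ways that matter for preconditioning.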

Convergence Guarantees for Adaptive Bayesian Quadrature Methods

no code implementations NeurIPS 2019 Motonobu Kanagawa, Philipp Hennig

Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical integration that empirically compares favorably with Monte Carlo integration on problems of medium dimensionality (where non-adaptive quadrature is not competitive).

Numerical Integration

DeepOBS: A Deep Learning Optimizer Benchmark Suite

1 code implementation ICLR 2019 Frank Schneider, Lukas Balles, Philipp Hennig

We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization.

Benchmarking Image Classification +1

Active Probabilistic Inference on Matrices for Pre-Conditioning in Stochastic Optimization

1 code implementation 20 Feb 2019 Filip de Roos, Philipp Hennig

Pre-conditioning is a well-known concept that can significantly improve the convergence of optimization algorithms.

Stochastic Optimization

Modular Block-diagonal Curvature Approximations for Feedforward Architectures

1 code implementation 5 Feb 2019 Felix Dangel, Stefan Harmeling, Philipp Hennig

We propose a modular extension of backpropagation for the computation of block-diagonal approximations to various curvature matrices of the training objective (in particular, the Hessian, generalized Gauss-Newton, and positive-curvature Hessian).

Fast and Robust Shortest Paths on Manifolds Learned from Data

no code implementations 22 Jan 2019 Georgios Arvanitidis, Søren Hauberg, Philipp Hennig, Michael Schober

We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data.

Metric Learning

Probabilistic Solutions To Ordinary Differential Equations As Non-Linear Bayesian Filtering: A New Perspective

1 code implementation 8 Oct 2018 Filip Tronarp, Hans Kersting, Simo Särkkä, Philipp Hennig

We formulate probabilistic numerical approximations to solutions of ordinary differential equations (ODEs) as problems in Gaussian process (GP) regression with non-linear measurement functions.
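
A self-contained sketch of this filtering view (illustrative only; a first-order solver of my own construction, whereas practical implementations add higher-order priors, calibration, and adaptive steps):

```python
# EKF-based probabilistic solver for a scalar IVP x' = f(x): integrated
# Wiener process prior on (x, x'), with the ODE enforced as a nonlinear
# measurement z = x' - f(x) observed to be zero at every step.
import numpy as np

def ode_filter(f, df, x0, t_max, h, sigma=1.0):
    A = np.array([[1.0, h], [0.0, 1.0]])               # IWP(1) transition
    Q = sigma**2 * np.array([[h**3 / 3, h**2 / 2],
                             [h**2 / 2, h]])           # process noise
    m, P = np.array([x0, f(x0)]), np.zeros((2, 2))
    means = [m.copy()]
    for _ in range(int(round(t_max / h))):
        m, P = A @ m, A @ P @ A.T + Q                  # predict
        H = np.array([[-df(m[0]), 1.0]])               # Jacobian of z = x' - f(x)
        z = m[1] - f(m[0])                             # residual; the "data" is 0
        S = (H @ P @ H.T).item()
        K = (P @ H.T).ravel() / S                      # Kalman gain
        m, P = m - K * z, P - np.outer(K, H @ P)       # EKF update
        means.append(m.copy())
    return np.array(means)

# Logistic growth x' = x(1 - x): the posterior mean tracks the sigmoid solution.
traj = ode_filter(lambda x: x * (1 - x), lambda x: 1.0 - 2.0 * x, 0.1, 10.0, 0.01)
```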

Convergence Rates of Gaussian ODE Filters

no code implementations 25 Jul 2018 Hans Kersting, T. J. Sullivan, Philipp Hennig

A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems.

Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences

no code implementations 6 Jul 2018 Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, Bharath K. Sriperumbudur

This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other.

Gaussian Processes regression
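
The best-known of these equivalences, for concreteness: with kernel matrix $K$ on training inputs $X$, the GP posterior mean under Gaussian noise $\sigma^2$ coincides with the kernel ridge regression solution,

$m(x) = k(x, X) (K + \sigma^2 I)^{-1} y,$

which is exactly the KRR estimator with regularization $\lambda = \sigma^2$ (in the unnormalized formulation of the objective); the two frameworks diverge in how they interpret, and what they do with, the posterior covariance.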

Bayesian Filtering for ODEs with Bounded Derivatives

no code implementations 25 Sep 2017 Emilia Magnani, Hans Kersting, Michael Schober, Philipp Hennig

Recently there has been increasing interest in probabilistic solvers for ordinary differential equations (ODEs) that return full probability measures, instead of point estimates, over the solution and can incorporate uncertainty over the ODE at hand, e.g. if the vector field or the initial value is only approximately known or evaluable.

Probabilistic Active Learning of Functions in Structural Causal Models

no code implementations 30 Jun 2017 Paul K. Rubenstein, Ilya Tolstikhin, Philipp Hennig, Bernhard Schölkopf

We consider the problem of learning the functions computing children from parents in a Structural Causal Model once the underlying causal graph has been identified.

Active Learning Causal Discovery

Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine Learning

no code implementations 1 Jun 2017 Filip de Roos, Philipp Hennig

To alleviate this problem, several linear-time approximations, such as spectral and inducing-point methods, have been suggested and are now in wide use.

Time Series +2

Early Stopping without a Validation Set

no code implementations 28 Mar 2017 Maren Mahsereci, Lukas Balles, Christoph Lassner, Philipp Hennig

Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization.

regression

Coupling Adaptive Batch Sizes with Learning Rates

1 code implementation 15 Dec 2016 Lukas Balles, Javier Romero, Philipp Hennig

The batch size significantly influences the behavior of the stochastic optimization algorithm, since it determines the variance of the gradient estimates.

Image Classification Stochastic Optimization
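
The underlying fact (standard, not specific to this paper): for a mini-batch of size $b$ drawn with replacement, the gradient estimator satisfies

$\operatorname{Cov}\big[\hat{g}_b\big] = \tfrac{1}{b} \operatorname{Cov}\big[\nabla \ell(\theta; x)\big],$

so doubling the batch size halves the gradient noise, and this is the handle that the paper couples to the learning rate.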

A probabilistic model for the numerical solution of initial value problems

1 code implementation 17 Oct 2016 Michael Schober, Simo Särkkä, Philipp Hennig

Like many numerical methods, solvers for initial value problems (IVPs) on ordinary differential equations estimate an analytically intractable quantity, using the results of tractable computations as inputs.

Exact Sampling from Determinantal Point Processes

no code implementations 22 Sep 2016 Philipp Hennig, Roman Garnett

Determinantal point processes (DPPs) are an important concept in random matrix theory and combinatorics.

Active Learning Bayesian Optimization +2

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

1 code implementation 23 May 2016 Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter

Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.

Bayesian Optimization +1

Active Uncertainty Calibration in Bayesian ODE Solvers

no code implementations 11 May 2016 Hans Kersting, Philipp Hennig

There is resurging interest, in statistics and machine learning, in solvers for ordinary differential equations (ODEs) that return probability measures instead of point estimates.

Automatic LQR Tuning Based on Gaussian Process Global Optimization

no code implementations 6 May 2016 Alonso Marco, Philipp Hennig, Jeannette Bohg, Stefan Schaal, Sebastian Trimpe

With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data.

Bayesian Optimization

Dual Control for Approximate Bayesian Reinforcement Learning

no code implementations 13 Oct 2015 Edgar D. Klenske, Philipp Hennig

Control of non-episodic, finite-horizon dynamical systems with uncertain dynamics poses a tough and elementary case of the exploration-exploitation trade-off.

regression reinforcement-learning +1

Probabilistic Numerics and Uncertainty in Computations

no code implementations 3 Jun 2015 Philipp Hennig, Michael A. Osborne, Mark Girolami

We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations.

Incremental Local Gaussian Regression

no code implementations NeurIPS 2014 Franziska Meier, Philipp Hennig, Stefan Schaal

Locally weighted regression (LWR) was created as a nonparametric method that can approximate a wide range of functions, is computationally efficient, and can learn continually from very large amounts of incrementally collected data.

regression

Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature

no code implementations NeurIPS 2014 Tom Gunter, Michael A. Osborne, Roman Garnett, Philipp Hennig, Stephen J. Roberts

We propose a novel sampling framework for inference in probabilistic models: an active learning approach that converges more quickly (in wall-clock time) than Markov chain Monte Carlo (MCMC) benchmarks.

Active Learning Numerical Integration

Probabilistic ODE Solvers with Runge-Kutta Means

no code implementations NeurIPS 2014 Michael Schober, David Duvenaud, Philipp Hennig

We construct a family of probabilistic numerical methods that instead return a Gauss-Markov process defining a probability distribution over the ODE solution.

Probabilistic Interpretation of Linear Solvers

no code implementations 10 Feb 2014 Philipp Hennig

This manuscript proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems $Bx = b$ with positive definite $B$ for $x$.

Local Gaussian Regression

no code implementations 4 Feb 2014 Franziska Meier, Philipp Hennig, Stefan Schaal

Locally weighted regression was created as a nonparametric learning method that is computationally efficient, can learn from very large amounts of data and add data incrementally.

regression

Active Learning of Linear Embeddings for Gaussian Processes

no code implementations 24 Oct 2013 Roman Garnett, Michael A. Osborne, Philipp Hennig

We propose an active learning method for discovering low-dimensional structure in high-dimensional Gaussian process (GP) tasks.

Active Learning Bayesian Optimization +2

Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics

no code implementations 3 Jun 2013 Philipp Hennig, Søren Hauberg

We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution.

The Randomized Dependence Coefficient

no code implementations NeurIPS 2013 David Lopez-Paz, Philipp Hennig, Bernhard Schölkopf

We introduce the Randomized Dependence Coefficient (RDC), a measure of non-linear dependence between random variables of arbitrary dimension based on the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient.
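
A hedged sketch of the construction (following the paper's recipe of copula transforms, random sine projections, and canonical correlation; the feature count $k$ and scale $s$ are illustrative choices, not the paper's):

```python
# RDC: rank-transform each variable to its empirical copula, project through
# random sinusoidal features, and return the largest canonical correlation.
import numpy as np
from scipy.stats import rankdata
from sklearn.cross_decomposition import CCA

def rdc(x, y, k=10, s=0.2, seed=0):
    rng = np.random.default_rng(seed)
    def features(v):
        u = rankdata(v) / len(v)                       # empirical copula transform
        u = np.column_stack([u, np.ones_like(u)])      # affine augmentation
        return np.sin(u @ rng.normal(0.0, s, (2, k)))  # random nonlinear projections
    fx, fy = features(x), features(y)
    a, b = CCA(n_components=1).fit(fx, fy).transform(fx, fy)
    return abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])

x = np.random.default_rng(1).uniform(-1, 1, 500)
print(rdc(x, x**2))   # detects the nonlinear (zero-correlation) dependence
```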

Gaussian Probabilities and Expectation Propagation

no code implementations 29 Nov 2011 John P. Cunningham, Philipp Hennig, Simon Lacoste-Julien

We consider these unexpected results empirically and theoretically, both for the problem of Gaussian probabilities and for EP more generally.
