Search Results for author: Harald Oberhauser

Found 23 papers, 15 papers with code

Neural SDEs as Infinite-Dimensional GANs

1 code implementation • 6 Feb 2021 • Patrick Kidger, James Foster, Xuechen Li, Harald Oberhauser, Terry Lyons

Stochastic differential equations (SDEs) are a staple of mathematical modelling of temporal dynamics.

Time Series • Time Series Analysis

The Signature Kernel

2 code implementations • 8 May 2023 • Darrick Lee, Harald Oberhauser

The signature kernel is a positive definite kernel for sequential data.
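As an illustration of what such a kernel computes, here is a minimal discrete-time sketch (not any implementation from the paper): the signature is truncated at level two and built from iterated sums of path increments, and the kernel is the Euclidean inner product of these features.

```python
import numpy as np

def sig_features(path):
    """Truncated (level <= 2) signature features of a path given as an
    array of shape (length, d), using discrete iterated sums of increments."""
    inc = np.diff(path, axis=0)        # increments, shape (m, d)
    level1 = inc.sum(axis=0)           # total increment, shape (d,)
    # level 2: sum over s < t of outer(inc[s], inc[t])
    cum = np.cumsum(inc, axis=0)
    d = path.shape[1]
    level2 = np.zeros((d, d))
    for t in range(1, len(inc)):
        level2 += np.outer(cum[t - 1], inc[t])
    return np.concatenate([[1.0], level1, level2.ravel()])

def sig_kernel(x, y):
    """Inner product of truncated signature features: a positive definite
    kernel on sequences, even of different lengths."""
    return float(sig_features(x) @ sig_features(y))
```

Because the feature dimension depends only on the state-space dimension `d`, sequences of different lengths are directly comparable.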

Bayesian Learning from Sequential Data using Gaussian Processes with Signature Covariances

1 code implementation • ICML 2020 • Csaba Toth, Harald Oberhauser

We develop a Bayesian approach to learning from sequential data by using Gaussian processes (GPs) with so-called signature kernels as covariance functions.

Gaussian Processes • General Classification +3

Seq2Tens: An Efficient Representation of Sequences by Low-Rank Tensor Projections

1 code implementation • ICLR 2021 • Csaba Toth, Patric Bonnier, Harald Oberhauser

Sequential data such as time series, video, or text can be challenging to analyse as the ordered structure gives rise to complex dependencies.

Imputation • Time Series +2

Signature moments to characterize laws of stochastic processes

1 code implementation • 25 Oct 2018 • Ilya Chevyrev, Harald Oberhauser

This allows us to derive a metric of maximum mean discrepancy type for laws of stochastic processes and study the topology it induces on the space of laws of stochastic processes.
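The maximum mean discrepancy construction itself is generic: given any positive definite kernel, an empirical MMD between two samples is estimated from Gram matrices. A minimal sketch with an RBF kernel on vector samples (the paper instead works with signature moments on path space) might look like:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF Gram matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of the squared maximum mean
    discrepancy between samples X and Y under the RBF kernel."""
    return (rbf(X, X, gamma).mean()
            + rbf(Y, Y, gamma).mean()
            - 2.0 * rbf(X, Y, gamma).mean())
```

The estimate is zero when the two samples coincide and strictly positive when they are well separated, which is what makes MMD usable as a metric on laws.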

Fast Bayesian Inference with Batch Bayesian Quadrature via Kernel Recombination

2 code implementations • 9 Jun 2022 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne

Empirically, we find that our approach significantly outperforms both state-of-the-art BQ techniques and Nested Sampling in sampling efficiency on various real-world datasets, including lithium-ion battery analytics.

Bayesian Inference • Numerical Integration

SOBER: Highly Parallel Bayesian Optimization and Bayesian Quadrature over Discrete and Mixed Spaces

1 code implementation • 27 Jan 2023 • Masaki Adachi, Satoshi Hayakawa, Saad Hamid, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne

Batch Bayesian optimisation and Bayesian quadrature have been shown to be sample-efficient methods of performing optimisation and quadrature where expensive-to-evaluate objective functions can be queried in parallel.

Drug Discovery

Random Fourier Signature Features

1 code implementation • 20 Nov 2023 • Csaba Toth, Harald Oberhauser, Zoltan Szabo

Tensor algebras give rise to one of the most powerful similarity measures for sequences of arbitrary length, the signature kernel, which comes with attractive theoretical guarantees from stochastic analysis.

Time Series
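For context, the classic random Fourier feature construction approximates a shift-invariant kernel by an explicit low-dimensional random feature map. A standard sketch for the RBF kernel (the generic idea only, not the signature-specific construction of the paper) is:

```python
import numpy as np

def rff(X, n_features=2000, gamma=1.0, seed=0):
    """Random Fourier features: phi(x) @ phi(y) approximates the
    RBF kernel exp(-gamma * ||x - y||^2) in expectation."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # frequencies drawn from the kernel's spectral measure
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Inner products of these features concentrate around the exact kernel at rate O(1/sqrt(n_features)), turning quadratic-cost kernel methods into linear-cost feature-space methods.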

Nonlinear Independent Component Analysis for Discrete-Time and Continuous-Time Signals

1 code implementation • 4 Feb 2021 • Alexander Schell, Harald Oberhauser

We study the classical problem of recovering a multidimensional source signal from observations of nonlinear mixtures of this signal.

Blind Source Separation • Contrastive Learning +2

Sampling-based Nyström Approximation and Kernel Quadrature

1 code implementation • 23 Jan 2023 • Satoshi Hayakawa, Harald Oberhauser, Terry Lyons

We analyze the Nyström approximation of a positive definite kernel associated with a probability measure.

Learning Theory

Capturing Graphs with Hypo-Elliptic Diffusions

1 code implementation • 27 May 2022 • Csaba Toth, Darrick Lee, Celia Hacker, Harald Oberhauser

This results in a novel tensor-valued graph operator, which we call the hypo-elliptic graph Laplacian.

A Randomized Algorithm to Reduce the Support of Discrete Measures

1 code implementation • NeurIPS 2020 • Francesco Cosentino, Harald Oberhauser, Alessandro Abate

Given a discrete probability measure supported on $N$ atoms and a set of $n$ real-valued functions, there exists a probability measure that is supported on a subset of $n+1$ of the original $N$ atoms and has the same mean when integrated against each of the $n$ functions.
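The underlying reduction (a Carathéodory-type theorem for moments) admits a direct, if naive, implementation: repeatedly pick a null-space direction of the moment matrix on the current support and move the weights along it until one vanishes. A sketch of this elimination (not the paper's randomized algorithm, which is designed to scale) might be:

```python
import numpy as np

def reduce_support(w, F, tol=1e-12):
    """Given nonnegative weights w on N atoms and an (n, N) matrix F of
    function values at the atoms, return nonnegative weights supported
    on at most n + 1 atoms with the same integrals F @ w and total mass."""
    w = np.asarray(w, dtype=float).copy()
    n, N = F.shape
    A = np.vstack([F, np.ones(N)])   # rows: the n functions plus total mass
    while True:
        support = np.flatnonzero(w > tol)
        if len(support) <= n + 1:
            return w
        # a vector in the null space of the moment matrix on the support
        v = np.linalg.svd(A[:, support])[2][-1]
        if not (v > tol).any():      # flip sign so some entry is positive
            v = -v
        pos = v > tol
        # largest step keeping all weights nonnegative; zeroes >= 1 atom
        t = np.min(w[support][pos] / v[pos])
        w[support] -= t * v
        w[w < tol] = 0.0
```

Each pass removes at least one atom while leaving all n + 1 linear constraints untouched, so the loop terminates after at most N - (n + 1) iterations.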

Sketching the order of events

no code implementations • 31 Aug 2017 • Terry Lyons, Harald Oberhauser

We introduce features for massive data streams.

Kernels for sequentially ordered data

no code implementations • 29 Jan 2016 • Franz J. Király, Harald Oberhauser

We present a novel framework for kernel learning with sequential data of any kind, such as time series, sequences of graphs, or strings.

Time Series • Time Series Analysis

Carathéodory Sampling for Stochastic Gradient Descent

1 code implementation • 2 Jun 2020 • Francesco Cosentino, Harald Oberhauser, Alessandro Abate

Various flavours of Stochastic Gradient Descent (SGD) replace the expensive summation that computes the full gradient with a small sum over a randomly selected subsample of the data set; this approximation, in turn, suffers from high variance.
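The subsampling the abstract refers to is the standard mini-batch gradient estimate: unbiased, but noisy. A minimal sketch on an illustrative least-squares problem (hypothetical setup, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)

def full_grad(w):
    """Exact gradient of the mean squared error over all samples."""
    return 2.0 * X.T @ (X @ w - y) / len(X)

def minibatch_grad(w, batch=32):
    """Unbiased but high-variance estimate from a random subsample."""
    idx = rng.choice(len(X), size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / batch
```

Averaged over many draws the mini-batch estimate matches the full gradient, but any single draw can deviate substantially, which is the variance the Carathéodory-based subsample selection targets.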

Neural SDEs Made Easy: SDEs are Infinite-Dimensional GANs

no code implementations • 1 Jan 2021 • Patrick Kidger, James Foster, Xuechen Li, Harald Oberhauser, Terry Lyons

Several authors have introduced \emph{Neural Stochastic Differential Equations} (Neural SDEs), often involving complex theory with various limitations.

Tangent Space and Dimension Estimation with the Wasserstein Distance

no code implementations • 12 Oct 2021 • Uzu Lim, Harald Oberhauser, Vidit Nanda

Consider a set of points sampled independently near a smooth compact submanifold of Euclidean space.

HADES: Fast Singularity Detection with Local Measure Comparison

no code implementations • 7 Nov 2023 • Uzu Lim, Harald Oberhauser, Vidit Nanda

We introduce Hades, an unsupervised algorithm to detect singularities in data.

A Quadrature Approach for General-Purpose Batch Bayesian Optimization via Probabilistic Lifting

no code implementations • 18 Apr 2024 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Saad Hamid, Harald Oberhauser, Michael A. Osborne

Parallelisation in Bayesian optimisation is a common strategy but faces several challenges: the need for flexibility in acquisition functions and kernel choices, the ability to handle discrete and continuous variables simultaneously, model misspecification, and, lastly, fast massive parallelisation.
