1 code implementation • 29 Jan 2023 • Patric Bonnier, Harald Oberhauser, Zoltán Szabó
In $\mathbb R^d$, it is well known that cumulants provide an alternative to moments that can achieve the same goals with numerous benefits, such as lower-variance estimators.
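For orientation, the first few univariate cumulants relate to raw moments $\mu_k = \mathbb E[X^k]$ through standard identities (textbook background, not a claim about this paper's construction):
$$\kappa_1 = \mu_1, \qquad \kappa_2 = \mu_2 - \mu_1^2, \qquad \kappa_3 = \mu_3 - 3\mu_1\mu_2 + 2\mu_1^3,$$
so $\kappa_2$ is the variance, and for independent $X$ and $Y$ one has $\kappa_n(X+Y) = \kappa_n(X) + \kappa_n(Y)$, one source of the statistical benefits alluded to above.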
no code implementations • 27 Jan 2023 • Masaki Adachi, Satoshi Hayakawa, Saad Hamid, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne
Batch Bayesian optimisation (BO) has been shown to be a sample-efficient method of performing optimisation where expensive-to-evaluate objective functions can be queried in parallel.
no code implementations • 23 Jan 2023 • Satoshi Hayakawa, Harald Oberhauser, Terry Lyons
We analyze the Nyström approximation of a positive definite kernel associated with a probability measure.
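As a rough illustration of the Nyström idea (a generic sketch with an RBF kernel and illustrative variable names, not the paper's construction):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))           # samples from the underlying measure
m = 50                                  # number of Nystrom landmarks
landmarks = X[rng.choice(len(X), m, replace=False)]

K_nm = rbf(X, landmarks)                # cross-kernel between data and landmarks
K_mm = rbf(landmarks, landmarks)        # kernel among the landmarks
# Rank-m Nystrom approximation: K ~ K_nm K_mm^+ K_nm^T
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

K_full = rbf(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(f"relative Frobenius error: {err:.3f}")
```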
2 code implementations • 9 Jun 2022 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne
Empirically, we find that our approach significantly outperforms both state-of-the-art BQ techniques and Nested Sampling in sampling efficiency on various real-world datasets, including lithium-ion battery analytics.
1 code implementation • 27 May 2022 • Csaba Toth, Darrick Lee, Celia Hacker, Harald Oberhauser
This results in a novel tensor-valued graph operator, which we call the hypo-elliptic graph Laplacian.
no code implementations • 12 Oct 2021 • Uzu Lim, Harald Oberhauser, Vidit Nanda
Consider a set of points sampled independently near a smooth compact submanifold of Euclidean space.
1 code implementation • 20 Jul 2021 • Satoshi Hayakawa, Harald Oberhauser, Terry Lyons
We study kernel quadrature rules with convex weights.
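Schematically, "convex weights" means nonnegative weights summing to one, chosen so the weighted node set matches the kernel mean embedding of the target measure. A toy sketch (all names illustrative; this minimises the empirical worst-case, i.e. MMD, error, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))       # samples representing the target measure
nodes = X[:10]                       # quadrature nodes (here: first 10 samples)

K = rbf(nodes, nodes)                # Gram matrix of the nodes
z = rbf(nodes, X).mean(axis=1)       # empirical kernel mean embedding

# Convex weights: w >= 0 and sum(w) = 1, matching the mean embedding.
# Minimising w K w - 2 w z is minimising the squared MMD up to a constant.
obj = lambda w: w @ K @ w - 2 * w @ z
cons = {"type": "eq", "fun": lambda w: w.sum() - 1}
res = minimize(obj, np.full(10, 0.1), bounds=[(0, 1)] * 10, constraints=cons)
print("quadrature weights:", np.round(res.x, 3))
```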
1 code implementation • 6 Feb 2021 • Patrick Kidger, James Foster, Xuechen Li, Harald Oberhauser, Terry Lyons
Stochastic differential equations (SDEs) are a staple of mathematical modelling of temporal dynamics.
1 code implementation • 4 Feb 2021 • Alexander Schell, Harald Oberhauser
We study the classical problem of recovering a multidimensional source signal from observations of nonlinear mixtures of this signal.
no code implementations • 1 Jan 2021 • Patrick Kidger, James Foster, Xuechen Li, Harald Oberhauser, Terry Lyons
Several authors have introduced Neural Stochastic Differential Equations (Neural SDEs), often involving complex theory with various limitations.
1 code implementation • ICLR 2021 • Csaba Toth, Patric Bonnier, Harald Oberhauser
Sequential data such as time series, video, or text can be challenging to analyse as the ordered structure gives rise to complex dependencies.
Ranked #1 on Time Series Classification on KickvsPunch
1 code implementation • 2 Jun 2020 • Francesco Cosentino, Harald Oberhauser, Alessandro Abate
Various flavours of Stochastic Gradient Descent (SGD) replace the expensive summation that computes the full gradient with a small sum over a randomly selected subsample of the data set; this cheaper estimate, in turn, suffers from high variance.
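In code, the replacement in question looks roughly like this (a generic minibatch SGD sketch for least squares, nothing paper-specific):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10_000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=10_000)

w, lr, batch = np.zeros(5), 0.01, 32
for step in range(2_000):
    idx = rng.choice(len(X), batch, replace=False)   # random subsample
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch          # noisy gradient estimate:
    w -= lr * grad                                   # cheap, but its variance
                                                     # grows as batch shrinks
print("estimate:", np.round(w, 2))
```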
1 code implementation • NeurIPS 2020 • Francesco Cosentino, Harald Oberhauser, Alessandro Abate
Given a discrete probability measure supported on $N$ atoms and a set of $n$ real-valued functions, there exists a probability measure that is supported on a subset of $n+1$ of the original $N$ atoms and has the same mean when integrated against each of the $n$ functions.
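This being a Tchakaloff/Carathéodory-type statement, one constructive way to see it is the following sketch (my own illustrative implementation, not the paper's algorithm): repeatedly take a null-space direction of the test-function matrix and move along it until a weight hits zero.

```python
import numpy as np

def recombine(w, A, tol=1e-12):
    """Reduce a measure with weights w >= 0 on N atoms to <= n+1 atoms while
    preserving A @ w, where A stacks the n test functions and a row of ones
    (so total mass is preserved too)."""
    w = w.astype(float).copy()
    while (w > tol).sum() > A.shape[0]:
        S = np.flatnonzero(w > tol)
        # Any vector in the null space of A restricted to the support S.
        v = np.linalg.svd(A[:, S])[2][-1]
        if v.max() <= 0:
            v = -v
        t = np.min(w[S][v > tol] / v[v > tol])   # largest feasible step
        w[S] -= t * v                            # one weight hits zero
        w[w < tol] = 0.0
    return w

rng = np.random.default_rng(3)
N, n = 200, 4
atoms = rng.normal(size=N)
funcs = np.vstack([atoms ** (k + 1) for k in range(n)])  # n monomial tests
A = np.vstack([funcs, np.ones(N)])                       # + mass constraint
w0 = np.full(N, 1 / N)
w1 = recombine(w0, A)
print("support size:", (w1 > 0).sum())            # at most n + 1 = 5
print("integral error:", np.abs(A @ w1 - A @ w0).max())
```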
1 code implementation • ICML 2020 • Csaba Toth, Harald Oberhauser
We develop a Bayesian approach to learning from sequential data by using Gaussian processes (GPs) with so-called signature kernels as covariance functions.
Ranked #1 on Time Series Classification on DigitShapes
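Schematically, the GP-with-sequence-kernel approach above amounts to the following (a stand-in RBF kernel on flattened sequences replaces the signature kernel, which this sketch does not implement):

```python
import numpy as np

def seq_kernel(S1, S2, gamma=0.1):
    # Stand-in for the signature kernel: RBF on flattened sequences.
    A = S1.reshape(len(S1), -1)
    B = S2.reshape(len(S2), -1)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
train = rng.normal(size=(30, 20, 2))   # 30 sequences, length 20, 2 channels
y = np.sin(train.sum(axis=(1, 2)))     # toy regression targets
test = rng.normal(size=(5, 20, 2))

K = seq_kernel(train, train) + 1e-6 * np.eye(30)   # covariance + jitter
k_star = seq_kernel(test, train)
mean = k_star @ np.linalg.solve(K, y)              # GP posterior mean
print("predictions:", np.round(mean, 2))
```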
1 code implementation • 25 Oct 2018 • Ilya Chevyrev, Harald Oberhauser
This allows us to derive a metric of maximum mean discrepancy type for laws of stochastic processes and to study the topology it induces on this space.
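For reference, the empirical maximum mean discrepancy between two samples with a generic kernel reads as below (a sketch only; the paper's metric is built from expected signatures of paths, not an RBF on vectors):

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of MMD^2 with a Gaussian kernel (illustrative only).
    k = lambda A, B: np.exp(-gamma * ((A[:, None] - B[None, :]) ** 2).sum(-1))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, size=(200, 3))
Y = rng.normal(0.5, 1.0, size=(200, 3))
print(f"MMD^2 estimate: {mmd2(X, Y):.4f}")
```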
no code implementations • 1 Jun 2018 • Ilya Chevyrev, Vidit Nanda, Harald Oberhauser
We introduce a new feature map for barcodes that arise in persistent homology computation.
no code implementations • 2 Jan 2018 • Frithjof Gressmann, Franz J. Király, Bilal Mateen, Harald Oberhauser
Predictive modelling and supervised learning are central to modern data science.
no code implementations • 31 Aug 2017 • Terry Lyons, Harald Oberhauser
We introduce features for massive data streams.
no code implementations • 29 Jan 2016 • Franz J. Király, Harald Oberhauser
We present a novel framework for kernel learning with sequential data of any kind, such as time series, sequences of graphs, or strings.