Search Results for author: Richard Kurle

Found 11 papers, 1 paper with code

Intrinsic Anomaly Detection for Multi-Variate Time Series

no code implementations29 Jun 2022 Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert

We introduce a novel, practically relevant variation of the anomaly detection problem in multi-variate time series: intrinsic anomaly detection.
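
The snippet names the problem but not its definition; as I understand the paper, an intrinsic anomaly is a departure of the system signal from its expected dependence on the environment signal. A minimal residual-based sketch of that idea (function names and the thresholding scheme are illustrative assumptions, not the paper's method):

```python
import numpy as np

def intrinsic_anomaly_scores(env, sys, predict_sys):
    """Score each time step by how far the system signal deviates
    from what the environment signal predicts.

    env, sys    : arrays of shape (T, d_env) and (T, d_sys)
    predict_sys : learned model mapping environment -> expected system signal
    """
    residual = sys - predict_sys(env)          # delta_t = y_t - h(x_t)
    return np.linalg.norm(residual, axis=-1)   # large norm => intrinsic anomaly

# Usage (hypothetical): flag steps whose residual exceeds a threshold tau.
# scores = intrinsic_anomaly_scores(env, sys, model)
# anomalies = scores > tau
```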

Tasks: Anomaly Detection, Navigate, +3 more

Latent Matters: Learning Deep State-Space Models

no code implementations NeurIPS 2021 Alexej Klushyn, Richard Kurle, Maximilian Soelch, Botond Cseke, Patrick van der Smagt

Our results show that the constrained optimisation framework significantly improves system identification and prediction accuracy on the example of established state-of-the-art DSSMs.
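
The snippet does not say what is constrained; assuming a GECO-style setup (ELBO optimised subject to a reconstruction-error bound via a Lagrange multiplier), one training step might look like the following sketch, in which all names are illustrative:

```python
import torch

def constrained_elbo_step(kl, rec_err, log_lam, kappa, lr_lam=1e-2):
    """One step of a GECO-style constrained objective (sketch):
    minimise the KL term subject to reconstruction error <= kappa.

    kl, rec_err : scalar tensors from the current minibatch
    log_lam     : log of the Lagrange multiplier (tensor, updated in place)
    """
    lam = log_lam.exp().detach()
    loss = kl + lam * (rec_err - kappa)   # backprop this into the model
    with torch.no_grad():                 # dual ascent on the multiplier:
        log_lam += lr_lam * (rec_err - kappa)  # tightens when violated
    return loss
```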

Variational Inference

Deep Explicit Duration Switching Models for Time Series

1 code implementation NeurIPS 2021 Abdul Fatir Ansari, Konstantinos Benidis, Richard Kurle, Ali Caner Turkmen, Harold Soh, Alexander J. Smola, Yuyang Wang, Tim Januschowski

We propose the Recurrent Explicit Duration Switching Dynamical System (RED-SDS), a flexible model that is capable of identifying both state- and time-dependent switching dynamics.
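
A toy sampler illustrating the "explicit duration" idea: each switch state is held for an explicitly sampled duration rather than being re-sampled at every step. This is a generative sketch under assumed interfaces, not the RED-SDS inference procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_explicit_duration_switching(T, K, trans, dur_probs, dynamics, emit, x0):
    """Sample from an explicit-duration switching system (illustrative).

    trans     : (K, K) switch transition matrix
    dur_probs : (K, D) per-switch duration distributions over {1..D}
    dynamics  : per-switch state-update functions
    emit      : emission function of the continuous state
    """
    z = rng.integers(K)                                      # initial switch
    d = 1 + rng.choice(len(dur_probs[z]), p=dur_probs[z])    # its duration
    x, ys = x0, []
    for t in range(T):
        if d == 0:                           # duration expired: switch state
            z = rng.choice(K, p=trans[z])
            d = 1 + rng.choice(len(dur_probs[z]), p=dur_probs[z])
        x = dynamics[z](x)                   # switch-specific dynamics
        ys.append(emit(x))
        d -= 1
    return np.array(ys)
```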

Tasks: Time Series, Time Series Analysis

Context-invariant, multi-variate time series representations

no code implementations29 Sep 2021 Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert

Modern time series corpora, in particular those coming from sensor-based data, exhibit characteristics that have so far not been adequately addressed in the literature on representation learning for time series.

Tasks: Contrastive Learning, Representation Learning, +2 more

Deep Rao-Blackwellised Particle Filters for Time Series Forecasting

no code implementations NeurIPS 2020 Richard Kurle, Syama Sundar Rangapuram, Emmanuel de Bézenac, Stephan Günnemann, Jan Gasthaus

We propose a Monte Carlo objective that leverages the conditional linearity by computing the corresponding conditional expectations in closed form, together with a suitable proposal distribution that is factorised similarly to the optimal proposal distribution.
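
A minimal Rao-Blackwellised particle-filter step for a switching linear-Gaussian model, to illustrate the closed-form conditional expectations: the discrete switch is sampled per particle (here from the transition prior, whereas the paper uses a factorised proposal), and the conditionally linear-Gaussian state is marginalised exactly with a per-particle Kalman filter. All interfaces are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbpf_step(particles, means, covs, weights, y, A, C, Q, R, trans):
    """One Rao-Blackwellised particle-filter step (sketch).
    particles[i] is the discrete switch s; (means[i], covs[i]) are the
    sufficient statistics of the marginalised linear-Gaussian state."""
    for i in range(len(particles)):
        s = rng.choice(len(trans), p=trans[particles[i]])   # propose switch
        m = A[s] @ means[i]                                 # Kalman predict
        P = A[s] @ covs[i] @ A[s].T + Q[s]
        S = C[s] @ P @ C[s].T + R[s]                        # innovation cov.
        K = P @ C[s].T @ np.linalg.inv(S)                   # Kalman gain
        resid = y - C[s] @ m
        # weight by the closed-form marginal likelihood p(y | s, past)
        weights[i] *= np.exp(-0.5 * resid @ np.linalg.inv(S) @ resid) \
                      / np.sqrt(np.linalg.det(2 * np.pi * S))
        particles[i] = s
        means[i] = m + K @ resid                            # Kalman update
        covs[i] = (np.eye(len(m)) - K @ C[s]) @ P
    weights /= weights.sum()
    return particles, means, covs, weights
```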

Tasks: Time Series, Time Series Forecasting

Continual Learning with Bayesian Neural Networks for Non-Stationary Data

no code implementations ICLR 2020 Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt, Stephan Günnemann

We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data.
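
A sketch of the diagonal-Gaussian (mean-field) weight posterior the snippet describes, with the closed-form KL to a standard-normal prior; the combination with the raw-data memory is indicated only as a comment, since the snippet does not spell out the exact objective:

```python
import torch
import torch.nn as nn

class MeanFieldLinear(nn.Module):
    """Linear layer with a diagonal-Gaussian posterior over its weights."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        eps = torch.randn_like(self.mu)              # reparameterisation trick
        w = self.mu + self.log_sigma.exp() * eps     # sample weights
        return x @ w.T

    def kl(self):
        """KL( q(w) || N(0, I) ), closed form for diagonal Gaussians."""
        var = (2 * self.log_sigma).exp()
        return 0.5 * (var + self.mu**2 - 1 - 2 * self.log_sigma).sum()

# Hypothetical combined objective: fit the current batch, replay raw data
# from a small memory buffer, and regularise the weight posterior:
# loss = nll(model(x_batch), y_batch) + nll(model(x_mem), y_mem) + beta * kl
```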

Tasks: Continual Learning

Learning Hierarchical Priors in VAEs

no code implementations NeurIPS 2019 Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, Patrick van der Smagt

We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation resulting from a standard normal prior distribution.
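
A sketch of a two-level learned prior of the kind the snippet suggests: p(z) becomes a continuous mixture, obtained by pushing a standard-normal zeta through a learned conditional Gaussian, and is therefore more flexible than a fixed N(0, I). Layer sizes and names are assumptions:

```python
import torch
import torch.nn as nn

class HierarchicalPrior(nn.Module):
    """Learned two-level prior p(z) = integral of p(z | zeta) p(zeta) dzeta."""
    def __init__(self, d_z, d_zeta, d_h=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_zeta, d_h), nn.Tanh(),
                                 nn.Linear(d_h, 2 * d_z))

    def sample(self, n):
        zeta = torch.randn(n, self.net[0].in_features)   # zeta ~ N(0, I)
        mu, log_sigma = self.net(zeta).chunk(2, dim=-1)
        return mu + log_sigma.exp() * torch.randn_like(mu)  # z ~ p(z | zeta)
```

With such a prior, the KL term of the ELBO no longer has a closed form and is typically estimated with samples or an auxiliary inference network.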

Multi-Source Neural Variational Inference

no code implementations11 Nov 2018 Richard Kurle, Stephan Günnemann, Patrick van der Smagt

Learning from multiple sources of information is an important problem in machine-learning research.
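
The snippet gives only the motivation; one standard way to fuse per-source Gaussian posteriors, which I believe is among the aggregation schemes considered in this line of work, is a product of experts, in which precisions add and means are precision-weighted:

```python
import numpy as np

def poe_gaussian_fusion(mus, sigmas):
    """Fuse per-source posteriors q_i(z) = N(mu_i, diag(sigma_i^2)) by a
    product of experts (illustrative, not necessarily the paper's scheme)."""
    prec = sum(1.0 / s**2 for s in sigmas)                   # precisions add
    mu = sum(m / s**2 for m, s in zip(mus, sigmas)) / prec   # weighted mean
    return mu, np.sqrt(1.0 / prec)
```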

Tasks: Variational Inference

Metrics for Deep Generative Models

no code implementations3 Nov 2017 Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, Patrick van der Smagt

Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source (the latent space) to samples from a more complex distribution represented by a dataset.
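
If I recall correctly, this paper measures distances via the Riemannian metric that the generator pulls back onto the latent space; a sketch of that metric tensor using autograd, with the decoder assumed to map a d_z vector to a d_x vector:

```python
import torch

def pullback_metric(decoder, z):
    """Riemannian metric induced on the latent space by the decoder
    (sketch): G(z) = J(z)^T J(z), where J is the decoder Jacobian.
    Curve lengths under G measure distance along the data manifold."""
    J = torch.autograd.functional.jacobian(decoder, z)   # shape (d_x, d_z)
    return J.T @ J                                        # shape (d_z, d_z)
```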
