Search Results for author: Maja Rudolph

Found 9 papers, 2 papers with code

Modeling Irregular Time Series with Continuous Recurrent Units

no code implementations · 22 Nov 2021 · Mona Schirmer, Mazin Eltayeb, Stefan Lessmann, Maja Rudolph

In an empirical study, we show that the CRU can better interpolate irregular time series than neural ordinary differential equation (neural ODE)-based models.

Irregular Time Series, Time Series

Switching Recurrent Kalman Networks

no code implementations · 16 Nov 2021 · Giao Nguyen-Quynh, Philipp Becker, Chen Qiu, Maja Rudolph, Gerhard Neumann

In addition, driving data is often multimodal in distribution: several distinct predictions are all likely, and averaging over them can hurt model performance.
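A toy numeric illustration of this point (not from the paper; the two-mode setup is an assumption for illustration): when predictions are split between two likely maneuvers, the average lands in a region where almost no actual outcome lies.

```python
import numpy as np

# Toy illustration (hypothetical setup): two equally likely maneuvers,
# e.g. "veer left" vs. "veer right", encoded as lateral offsets.
rng = np.random.default_rng(0)
left = rng.normal(-2.0, 0.1, size=500)   # mode 1
right = rng.normal(+2.0, 0.1, size=500)  # mode 2
samples = np.concatenate([left, right])

mean_prediction = samples.mean()
# The mean sits near 0, a low-density region between the two modes:
near_mean = np.abs(samples - mean_prediction) < 0.5
print(f"mean prediction: {mean_prediction:.2f}")
print(f"fraction of samples within 0.5 of the mean: {near_mean.mean():.3f}")
```

The averaged prediction is close to zero, yet essentially no sample falls near it, which is the sense in which averaging hurts a multimodal predictor.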

Autonomous Driving, Time Series

Neural Transformation Learning for Deep Anomaly Detection Beyond Images

no code implementations · 30 Mar 2021 · Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, Maja Rudolph

Data transformations (e.g., rotations, reflections, and cropping) play an important role in self-supervised learning.
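As a minimal sketch of how transformations drive self-supervision (this is the classic rotation-prediction pretext task, not the paper's method for learning transformations): a transformation is applied to the input, and its identity becomes a free label.

```python
import numpy as np

# Minimal sketch (assumed setup): predict which of four rotations was
# applied to an input; the rotation index serves as a self-supervised label.
rng = np.random.default_rng(0)
image = rng.random((8, 8))  # stand-in for an image

def make_rotation_example(img, rng):
    """Rotate img by a random multiple of 90 degrees; the multiple is the label."""
    k = int(rng.integers(0, 4))       # 0, 90, 180, or 270 degrees
    return np.rot90(img, k=k), k      # (transformed input, pretext label)

x, y = make_rotation_example(image, rng)
print(x.shape, y)
```

A model trained to recover the label from the transformed input must learn orientation-sensitive features, which is what makes such transformations useful beyond hand-crafted image augmentations.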

Anomaly Detection, Self-Supervised Learning, +1

Variational Dynamic Mixtures

no code implementations · 20 Oct 2020 · Chen Qiu, Stephan Mandt, Maja Rudolph

Deep probabilistic time series forecasting models have become an integral part of machine learning.

Probabilistic Time Series Forecasting, Time Series

Word2net: Deep Representations of Language

no code implementations · ICLR 2018 · Maja Rudolph, Francisco Ruiz, David Blei

Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words.
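The log-bilinear setup mentioned above can be sketched as follows (variable names and shapes are illustrative assumptions, not the paper's notation): a word's score in a context is the dot product of its embedding with a combination of the context vectors, and a softmax over the vocabulary turns scores into probabilities.

```python
import numpy as np

# Hedged sketch of a generic log-bilinear word model (not word2net itself):
# score(word, context) = rho[word] . sum(alpha[context]),
# with probabilities from a softmax over the vocabulary.
rng = np.random.default_rng(0)
vocab_size, dim = 10, 4
rho = rng.normal(size=(vocab_size, dim))    # target-word embeddings
alpha = rng.normal(size=(vocab_size, dim))  # context embeddings

def p_word_given_context(context_ids):
    context = alpha[context_ids].sum(axis=0)  # combine context vectors
    scores = rho @ context                    # log-bilinear scores
    scores -= scores.max()                    # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

probs = p_word_given_context([1, 2, 3])
print(probs.sum())  # sums to 1 over the vocabulary
```

Word2net's departure, per the title, is to replace these per-word vectors with deeper representations; the sketch only shows the log-bilinear baseline it moves beyond.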

Word Embeddings

Structured Embedding Models for Grouped Data

1 code implementation · NeurIPS 2017 · Maja Rudolph, Francisco Ruiz, Susan Athey, David Blei

Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data.
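A simplified sketch of this idea (the sharing scheme and shapes here are assumptions for illustration, not the paper's exact parameterization): each group keeps its own embedding vectors while context vectors are shared, so the learned representation of a word can vary across related groups of data.

```python
import numpy as np

# Illustrative sketch: per-group embeddings with shared context vectors,
# scored with the usual log-bilinear form.
rng = np.random.default_rng(0)
vocab_size, dim, n_groups = 10, 4, 3
rho = rng.normal(size=(n_groups, vocab_size, dim))  # group-specific embeddings
alpha = rng.normal(size=(vocab_size, dim))          # shared context vectors

def score(group, word, context_ids):
    """Log-bilinear score of a word in a context, for one group."""
    return rho[group, word] @ alpha[context_ids].sum(axis=0)

# The same word in the same context scores differently across groups:
print(score(0, 5, [1, 2]), score(1, 5, [1, 2]))
```

Sharing the context vectors is what ties the groups together; without it, each group would amount to an independent embedding model.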

Word Embeddings

Dynamic Bernoulli Embeddings for Language Evolution

1 code implementation · 23 Mar 2017 · Maja Rudolph, David Blei

Word embeddings are a powerful approach for unsupervised analysis of language.

Word Embeddings
