Search Results for author: Maja Rudolph

Found 22 papers, 11 papers with code

Neural Transformation Learning for Deep Anomaly Detection Beyond Images

3 code implementations · 30 Mar 2021 · Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, Maja Rudolph

Data transformations (e.g., rotations, reflections, and cropping) play an important role in self-supervised learning.

Anomaly Detection, Self-Supervised Learning, +2
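
Below is a minimal PyTorch sketch of the idea behind this paper: instead of relying on hand-crafted augmentations such as rotations or crops, a set of learnable (neural) transformations is trained jointly with an encoder under a contrastive-style objective, and the resulting per-sample loss doubles as an anomaly score. The layer sizes and the exact form of the loss are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_transforms = 32, 4

# Assumption: each learnable "transformation" is a small MLP acting on feature vectors.
transforms = nn.ModuleList([
    nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    for _ in range(n_transforms)
])
encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 16))

def transformation_score(x):
    """Contrastive-style per-sample loss: each transformed view should stay close to
    the untransformed embedding and far from the other views. It serves as the
    training loss on normal data and as the anomaly score at test time."""
    z = F.normalize(encoder(x), dim=-1)
    views = [F.normalize(encoder(t(x)), dim=-1) for t in transforms]
    loss = torch.zeros(x.shape[0])
    for k, zk in enumerate(views):
        pos = torch.exp((zk * z).sum(-1))
        neg = sum(torch.exp((zk * zj).sum(-1)) for j, zj in enumerate(views) if j != k)
        loss = loss - torch.log(pos / (pos + neg))
    return loss

print(transformation_score(torch.randn(8, dim)).shape)  # torch.Size([8])
```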

Structured Embedding Models for Grouped Data

1 code implementation · NeurIPS 2017 · Maja Rudolph, Francisco Ruiz, Susan Athey, David Blei

Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data.

Word Embeddings
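
One way to read "embeddings that vary across related groups of data" is a hierarchical construction in which context vectors are shared across groups while each group draws its own embedding vector around a global one; the notation below is an illustrative sketch, not necessarily the paper's only variant.

```latex
% alpha_w   : context vector of term w, shared across groups (assumption)
% rho_v^(0) : global embedding of term v
% rho_v^(s) : embedding of term v specific to group s
\[
\rho_v^{(s)} \sim \mathcal{N}\!\big(\rho_v^{(0)},\, \sigma^2 I\big),
\qquad
\eta_{iv} = \rho_v^{(s_i)\,\top} \sum_{j \in c_i} \alpha_{x_j},
\]
```

where \eta_{iv} is the natural parameter of the exponential family distribution over observation x_{iv} given its context c_i.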

Complex-Valued Autoencoders for Object Discovery

1 code implementation · 5 Apr 2022 · Sindy Löwe, Phillip Lippe, Maja Rudolph, Max Welling

Object-centric representations form the basis of human perception, and enable us to reason about the world and to systematically generalize to new settings.

Object, Object Discovery

Latent Outlier Exposure for Anomaly Detection with Contaminated Data

1 code implementation · 16 Feb 2022 · Chen Qiu, Aodong Li, Marius Kloft, Maja Rudolph, Stephan Mandt

We propose a strategy for training an anomaly detector in the presence of unlabeled anomalies that is compatible with a broad class of models.

Anomaly Detection, Video Anomaly Detection
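
A schematic sketch of this kind of training loop: treat the unknown anomaly labels of the contaminated training data as latent variables, alternately infer them from the current scores under an assumed contamination ratio, and update the detector with opposite objectives for the two groups. The concrete losses and the model interface are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def contaminated_training_step(model, optimizer, x, contamination=0.1):
    """One alternating update on an unlabeled batch; model(x) returns one score per
    sample, with higher meaning more anomalous."""
    scores = model(x)
    k = max(1, int(contamination * len(x)))

    # Step 1: infer latent labels by flagging the top-scoring fraction as anomalies.
    flagged = torch.zeros(len(x), dtype=torch.bool)
    flagged[scores.topk(k).indices] = True

    # Step 2: opposite objectives, pull presumed-normal scores down and push
    # presumed-anomalous scores up (illustrative losses).
    loss = scores[~flagged].mean() - scores[flagged].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a linear scoring model on random data:
model = torch.nn.Sequential(torch.nn.Linear(10, 1), torch.nn.Flatten(0))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(contaminated_training_step(model, opt, torch.randn(32, 10)))
```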

Efficient Integrators for Diffusion Generative Models

1 code implementation · 11 Oct 2023 · Kushagra Pandey, Maja Rudolph, Stephan Mandt

We propose two complementary frameworks for accelerating sample generation in pre-trained models: Conjugate Integrators and Splitting Integrators.

Raising the Bar in Graph-level Anomaly Detection

1 code implementation · 27 May 2022 · Chen Qiu, Marius Kloft, Stephan Mandt, Maja Rudolph

Graph-level anomaly detection has become a critical topic in diverse areas, such as financial fraud detection and detecting anomalous activities in social networks.

Anomaly Detection, Fraud Detection, +1

Dynamic Bernoulli Embeddings for Language Evolution

1 code implementation · 23 Mar 2017 · Maja Rudolph, David Blei

Word embeddings are a powerful approach for unsupervised analysis of language.

Word Embeddings
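
The core idea, roughly: each term keeps a sequence of embedding vectors over time slices, tied together by a Gaussian random walk so that word meanings can drift smoothly, while the probability of a word given its context keeps the usual embedding form. The notation below (shared context vectors, logistic link) is assumed for illustration.

```latex
% rho_v^(t) : embedding of term v in time slice t (drifts over time)
% alpha_w   : context vector of term w (shared across time slices, by assumption)
\[
\rho_v^{(t)} \sim \mathcal{N}\!\big(\rho_v^{(t-1)},\, \sigma^2 I\big),
\qquad
x_{iv} \mid \mathbf{x}_{c_i} \sim
\mathrm{Bernoulli}\!\Big(\mathrm{sigmoid}\big(\rho_v^{(t_i)\,\top} \textstyle\sum_{j \in c_i} \alpha_{x_j}\big)\Big)
\]
```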

Detecting Anomalies within Time Series using Local Neural Transformations

1 code implementation · 8 Feb 2022 · Tim Schneider, Chen Qiu, Marius Kloft, Decky Aspandi Latif, Steffen Staab, Stephan Mandt, Maja Rudolph

We develop a new method to detect anomalies within time series, which is essential in many application domains, ranging from self-driving cars, finance, and marketing to medical diagnosis and epidemiology.

Anomaly Detection, Epidemiology, +5

Deep Anomaly Detection under Labeling Budget Constraints

1 code implementation · 15 Feb 2023 · Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Stephan Mandt, Maja Rudolph

Selecting informative data points for expert feedback can significantly improve the performance of anomaly detection (AD) in various contexts, such as medical diagnostics or fraud detection.

Anomaly Detection, Fraud Detection

Word2net: Deep Representations of Language

no code implementations · ICLR 2018 · Maja Rudolph, Francisco Ruiz, David Blei

Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words.

Word Embeddings
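
For reference, a standard instance of the log-bilinear form the abstract refers to: the probability that word v appears in position i is driven by the inner product between its embedding and the sum of its context's vectors (Bernoulli-embedding style notation, assumed here). As the title suggests, word2net departs from this by replacing the fixed inner product with deeper, per-word representations.

```latex
\[
p\big(x_{iv} = 1 \mid \mathbf{x}_{c_i}\big)
  = \mathrm{sigmoid}\!\Big(\rho_v^{\top} \textstyle\sum_{j \in c_i} \alpha_{x_j}\Big),
\]
```

where \rho_v is the embedding vector of word v and \alpha_w are the context vectors.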

Extending Machine Language Models toward Human-Level Language Understanding

no code implementations · 12 Dec 2019 · James L. McClelland, Felix Hill, Maja Rudolph, Jason Baldridge, Hinrich Schütze

We take language to be a part of a system for understanding and communicating about situations.

Variational Dynamic Mixtures

no code implementations · 20 Oct 2020 · Chen Qiu, Stephan Mandt, Maja Rudolph

Deep probabilistic time series forecasting models have become an integral part of machine learning.

Probabilistic Time Series Forecasting, Time Series

Switching Recurrent Kalman Networks

no code implementations · 16 Nov 2021 · Giao Nguyen-Quynh, Philipp Becker, Chen Qiu, Maja Rudolph, Gerhard Neumann

In addition, driving data can often be multimodal in distribution, meaning that several distinct predictions are plausible, and averaging over them can hurt model performance.

Autonomous Driving, Time Series, +1
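
A one-line worked example of the averaging failure mentioned above: if an upcoming maneuver is equally likely to be a left turn (lateral offset y = -1) or a right turn (y = +1), a unimodal model that predicts the conditional mean outputs

```latex
\[
\hat{y} \;=\; 0.5 \cdot (-1) \;+\; 0.5 \cdot (+1) \;=\; 0,
\]
```

i.e. "go straight", which is far from both plausible outcomes; a switching formulation can instead keep the two modes separate and pick between them.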

LoRA ensembles for large language model fine-tuning

no code implementations · 29 Sep 2023 · Xi Wang, Laurence Aitchison, Maja Rudolph

To address these issues, we propose an ensemble approach using Low-Rank Adapters (LoRA), a parameter-efficient fine-tuning technique.

Language Modelling, Large Language Model, +1
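
A minimal, self-contained sketch of the ensembling idea (a toy classifier and hand-rolled low-rank adapters are assumptions; the paper targets fine-tuning of large language models): train several low-rank adapters independently on top of the same frozen base weights and average their predictive distributions at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update: W x + B A x."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # base stays frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ (self.B @ self.A).T

base = nn.Linear(16, 3)                                  # shared, frozen "backbone"
members = [LoRALinear(base) for _ in range(5)]           # one adapter per ensemble member
# ... each member would be fine-tuned independently (e.g. with different seeds) ...

x = torch.randn(2, 16)
probs = torch.stack([F.softmax(m(x), dim=-1) for m in members]).mean(0)
print(probs.sum(-1))  # each row of the averaged predictive distribution sums to 1
```

Because only the low-rank factors A and B are trained, keeping several ensemble members is far cheaper than maintaining several full fine-tunes, which is the practical appeal of combining ensembles with parameter-efficient fine-tuning.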

Model Selection of Zero-shot Anomaly Detectors in the Absence of Labeled Validation Data

no code implementations · 16 Oct 2023 · Clement Fung, Chen Qiu, Aodong Li, Maja Rudolph

In this work, we propose SWSA (Selection With Synthetic Anomalies): a general-purpose framework to select image-based anomaly detectors with a generated synthetic validation set.

Model Selection, Unsupervised Anomaly Detection, +1
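
A rough sketch of the selection recipe: build a small validation set from held-out normal images plus synthetically corrupted copies, then keep whichever candidate detector separates the two groups best (AUROC below). The patch-shuffling corruption and the helper names are illustrative assumptions, not necessarily the augmentations SWSA uses.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def make_synthetic_anomaly(img, patch=8):
    """Corrupt a normal image by shuffling the pixels inside a random patch (assumption)."""
    out = img.copy()
    h, w = img.shape[:2]
    y, x = np.random.randint(0, h - patch), np.random.randint(0, w - patch)
    region = out[y:y + patch, x:x + patch]
    out[y:y + patch, x:x + patch] = np.random.permutation(
        region.reshape(-1, *img.shape[2:])
    ).reshape(region.shape)
    return out

def select_detector(detectors, normal_imgs):
    """detectors: callables mapping an image array to a scalar anomaly score."""
    synthetic = [make_synthetic_anomaly(im) for im in normal_imgs]
    labels = np.array([0] * len(normal_imgs) + [1] * len(synthetic))
    return max(detectors,
               key=lambda d: roc_auc_score(labels, [d(im) for im in normal_imgs + synthetic]))
```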

Hybrid Modeling Design Patterns

no code implementations · 29 Dec 2023 · Maja Rudolph, Stefan Kurz, Barbara Rakitsch

In this paper, we provide four base patterns that can serve as blueprints for combining data-driven components with domain knowledge into a hybrid approach.
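
As one concrete illustration of a data-driven component combined with domain knowledge, here is a residual-style hybrid (often called a delta model) in which a neural network learns to correct a first-principles prediction; whether and how this maps onto the paper's four base patterns is not claimed here.

```python
import torch
import torch.nn as nn

def physics_model(x):
    """Placeholder domain-knowledge model, e.g. a known first-principles relationship."""
    return 2.0 * x

# Data-driven correction term, trained on the residual between data and physics output.
correction = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def hybrid(x):
    return physics_model(x) + correction(x)   # domain knowledge + learned residual

x = torch.randn(4, 1)
print(hybrid(x).shape)  # torch.Size([4, 1])
```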

Towards Fast Stochastic Sampling in Diffusion Generative Models

no code implementations · 11 Feb 2024 · Kushagra Pandey, Maja Rudolph, Stephan Mandt

We propose Splitting Integrators for fast stochastic sampling in pre-trained diffusion models in augmented spaces.
