Search Results for author: Stephan Mandt

Found 85 papers, 40 papers with code

Towards Fast Stochastic Sampling in Diffusion Generative Models

no code implementations11 Feb 2024 Kushagra Pandey, Maja Rudolph, Stephan Mandt

We propose Splitting Integrators for fast stochastic sampling in pre-trained diffusion models in augmented spaces.

Precipitation Downscaling with Spatiotemporal Video Diffusion

no code implementations11 Dec 2023 Prakhar Srivastava, Ruihan Yang, Gavin Kerrigan, Gideon Dresdner, Jeremy McGibbon, Christopher Bretherton, Stephan Mandt

In climate science and meteorology, high-resolution local precipitation (rain and snowfall) predictions are limited by the computational costs of simulation-based methods.

Optical Flow Estimation Super-Resolution

Understanding and Visualizing Droplet Distributions in Simulations of Shallow Clouds

no code implementations31 Oct 2023 Justus C. Will, Andrea M. Jenney, Kara D. Lamb, Michael S. Pritchard, Colleen Kaul, Po-Lun Ma, Kyle Pressel, Jacob Shpund, Marcus van Lier-Walqui, Stephan Mandt

Thorough analysis of local droplet-level interactions is crucial to better understand the microphysical processes in clouds and their effect on the global climate.

Efficient Integrators for Diffusion Generative Models

1 code implementation11 Oct 2023 Kushagra Pandey, Maja Rudolph, Stephan Mandt

We propose two complementary frameworks for accelerating sample generation in pre-trained models: Conjugate Integrators and Splitting Integrators.

Understanding Pathologies of Deep Heteroskedastic Regression

no code implementations29 Jun 2023 Eliot Wong-Toi, Alex Boyd, Vincent Fortuin, Stephan Mandt

Deep, overparameterized regression models are notorious for their tendency to overfit.

regression

Computationally-Efficient Neural Image Compression with Shallow Decoders

1 code implementation ICCV 2023 Yibo Yang, Stephan Mandt

We theoretically formalize the intuition behind this approach, and our experimental results establish a new frontier in the trade-off between rate-distortion performance and decoding complexity for neural image compression.

Image Compression

A Complete Recipe for Diffusion Generative Models

1 code implementation ICCV 2023 Kushagra Pandey, Stephan Mandt

Score-based Generative Models (SGMs) have demonstrated exceptional synthesis outcomes across various tasks.

Image Generation

Deep Anomaly Detection under Labeling Budget Constraints

1 code implementation15 Feb 2023 Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Stephan Mandt, Maja Rudolph

Selecting informative data points for expert feedback can significantly improve the performance of anomaly detection (AD) in various contexts, such as medical diagnostics or fraud detection.

Anomaly Detection Fraud Detection
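A minimal sketch of the querying idea in the entry above, under my own assumption (not necessarily the paper's strategy) that the most informative points to send to an expert are those whose anomaly scores fall closest to the decision threshold; `query_for_labels`, the scores, and the threshold are all hypothetical:

```python
import numpy as np

def query_for_labels(scores, threshold, budget):
    """Pick the `budget` points whose anomaly scores lie closest to the
    decision threshold: a simple uncertainty-style query strategy."""
    uncertainty = -np.abs(scores - threshold)  # higher = more uncertain
    return np.argsort(uncertainty)[-budget:]

scores = np.array([0.1, 0.48, 0.9, 0.52, 0.05])
picked = query_for_labels(scores, threshold=0.5, budget=2)  # indices 1 and 3
```

Points far from the threshold (clearly normal or clearly anomalous) are left unlabeled, so the budget is spent where expert feedback changes the decision.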

Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes

no code implementations9 Feb 2023 Ba-Hien Tran, Babak Shahbaba, Stephan Mandt, Maurizio Filippone

Autoencoders and their variants are among the most widely used models in representation learning and generative modeling.

Gaussian Processes Representation Learning

Probabilistic Querying of Continuous-Time Event Sequences

no code implementations15 Nov 2022 Alex Boyd, Yuxin Chang, Stephan Mandt, Padhraic Smyth

Continuous-time event sequences, i.e., sequences consisting of continuous time stamps and associated event types ("marks"), are an important type of sequential data with many applications, e.g., in clinical medicine or user behavior modeling.

Vocal Bursts Type Prediction

Predictive Querying for Autoregressive Neural Sequence Models

1 code implementation12 Oct 2022 Alex Boyd, Sam Showalter, Stephan Mandt, Padhraic Smyth

In reasoning about sequential events it is natural to pose probabilistic queries such as "when will event A occur next" or "what is the probability of A occurring before B", with applications in areas such as user modeling, medicine, and finance.

Language Modelling
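Queries such as "what is the probability of A occurring before B" can always be estimated by rolling out the sequence model; the sketch below does this for a toy Markov next-event model with made-up transition probabilities. This is a naive Monte Carlo baseline, not the paper's query algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy next-event model: P(next event | last event).
# These transition probabilities are invented purely for illustration.
P = {"A": {"A": 0.2, "B": 0.5, "C": 0.3},
     "B": {"A": 0.6, "B": 0.1, "C": 0.3},
     "C": {"A": 0.3, "B": 0.3, "C": 0.4}}

def prob_A_before_B(start, n_samples=20_000, max_len=100):
    """Monte Carlo estimate of P(A occurs before B) by rolling out the model."""
    hits = 0
    for _ in range(n_samples):
        state = start
        for _ in range(max_len):
            events, probs = zip(*P[state].items())
            state = rng.choice(events, p=probs)
            if state == "A":
                hits += 1
                break
            if state == "B":
                break
    return hits / n_samples

estimate = prob_A_before_B("C")  # symmetric transitions from C, so close to 0.5
```

The cost of such rollouts over long horizons is exactly what motivates more efficient query estimators.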

Lossy Image Compression with Conditional Diffusion Models

1 code implementation NeurIPS 2023 Ruihan Yang, Stephan Mandt

This paper outlines an end-to-end optimized lossy image compression framework using diffusion generative models.

Image Compression Image Quality Assessment

Raising the Bar in Graph-level Anomaly Detection

1 code implementation27 May 2022 Chen Qiu, Marius Kloft, Stephan Mandt, Maja Rudolph

Graph-level anomaly detection has become a critical topic in diverse areas, such as financial fraud detection and detecting anomalous activities in social networks.

Anomaly Detection Fraud Detection +1

Diffusion Probabilistic Modeling for Video Generation

1 code implementation16 Mar 2022 Ruihan Yang, Prakhar Srivastava, Stephan Mandt

Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation.

Denoising Image Generation +2

SC2 Benchmark: Supervised Compression for Split Computing

1 code implementation16 Mar 2022 Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt

With the increasing demand for deep learning models on mobile devices, splitting neural network computation between the device and a more powerful edge server has become an attractive solution.

Data Compression Edge-computing +2

Hybridizing Physical and Data-driven Prediction Methods for Physicochemical Properties

no code implementations17 Feb 2022 Fabian Jirasek, Robert Bamler, Stephan Mandt

We apply the new approach to predict activity coefficients at infinite dilution and obtain significant improvements compared to the data-driven and physical baselines and established ensemble methods from the machine learning literature.

Bayesian Inference BIG-bench Machine Learning

Latent Outlier Exposure for Anomaly Detection with Contaminated Data

1 code implementation16 Feb 2022 Chen Qiu, Aodong Li, Marius Kloft, Maja Rudolph, Stephan Mandt

We propose a strategy for training an anomaly detector in the presence of unlabeled anomalies that is compatible with a broad class of models.

Anomaly Detection Video Anomaly Detection

An Introduction to Neural Data Compression

2 code implementations14 Feb 2022 Yibo Yang, Stephan Mandt, Lucas Theis

Neural compression is the application of neural networks and other machine learning methods to data compression.

BIG-bench Machine Learning Data Compression +1

Detecting Anomalies within Time Series using Local Neural Transformations

1 code implementation8 Feb 2022 Tim Schneider, Chen Qiu, Marius Kloft, Decky Aspandi Latif, Steffen Staab, Stephan Mandt, Maja Rudolph

We develop a new method to detect anomalies within time series, which is essential in many application domains, ranging from self-driving cars, finance, and marketing to medical diagnosis and epidemiology.

Anomaly Detection Epidemiology +5

Analyzing High-Resolution Clouds and Convection using Multi-Channel VAEs

no code implementations1 Dec 2021 Harshini Mangipudi, Griffin Mooers, Mike Pritchard, Tom Beucler, Stephan Mandt

Understanding the details of small-scale convection and storm formation is crucial to accurately represent the larger-scale planetary dynamics.

Vocal Bursts Intensity Prediction

Lossless Compression with Probabilistic Circuits

1 code implementation ICLR 2022 Anji Liu, Stephan Mandt, Guy Van den Broeck

To overcome such problems, we establish a new class of tractable lossless compression models that permit efficient encoding and decoding: Probabilistic Circuits (PCs).

Data Compression Image Generation

Towards Empirical Sandwich Bounds on the Rate-Distortion Function

2 code implementations ICLR 2022 Yibo Yang, Stephan Mandt

By contrast, this paper makes the first attempt at an algorithm for sandwiching the R-D function of a general (not necessarily discrete) source requiring only i.i.d.

Data Compression Image Compression

Supervised Compression for Resource-Constrained Edge Computing Systems

2 code implementations21 Aug 2021 Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt

There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.

Data Compression Edge-computing +2

Insights from Generative Modeling for Neural Video Compression

1 code implementation28 Jul 2021 Ruihan Yang, Yibo Yang, Joseph Marino, Stephan Mandt

While recent machine learning research has revealed connections between deep generative models such as VAEs and rate-distortion losses used in learned compression, most of this work has focused on images.

Video Compression

Structured Stochastic Gradient MCMC

1 code implementation19 Jul 2021 Antonios Alexos, Alex Boyd, Stephan Mandt

Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option.

Bayesian Inference Variational Inference

Neural Transformation Learning for Deep Anomaly Detection Beyond Images

3 code implementations30 Mar 2021 Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, Maja Rudolph

Data transformations (e.g., rotations, reflections, and cropping) play an important role in self-supervised learning.

Anomaly Detection Self-Supervised Learning +2

Scale Space Flow with Autoregressive Priors

no code implementations ICLR Workshop Neural_Compression 2021 Ruihan Yang, Yibo Yang, Joseph Marino, Stephan Mandt

There has been a recent surge of interest in neural video compression models that combine data-driven dimensionality reduction with learned entropy coding.

Dimensionality Reduction Open-Ended Question Answering +1

Lower Bounding Rate-Distortion From Samples

no code implementations ICLR Workshop Neural_Compression 2021 Yibo Yang, Stephan Mandt

The rate-distortion function tells us the minimal number of bits on average to compress a random object within a given distortion tolerance.

Stochastic Optimization
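The informal description above corresponds to Shannon's rate-distortion function, which for a source $X$, reconstruction $\hat{X}$, and distortion measure $d$ reads:

```latex
R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\le D} \; I(X;\hat{X})
```

Estimating bounds on $R(D)$ from samples alone, without knowing the source distribution, is what makes the problem addressed here difficult.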

Detecting and Adapting to Irregular Distribution Shifts in Bayesian Online Learning

1 code implementation NeurIPS 2021 Aodong Li, Alex Boyd, Padhraic Smyth, Stephan Mandt

We consider the problem of online learning in the presence of distribution shifts that occur at an unknown rate and of unknown intensity.

Autonomous Navigation Change Point Detection

Variational Beam Search for Novelty Detection

no code implementations Approximate Inference (AABI) Symposium 2021 Aodong Li, Alex James Boyd, Padhraic Smyth, Stephan Mandt

We consider the problem of online learning in the presence of sudden distribution shifts, which may be hard to detect and can lead to a slow but steady degradation in model performance.

Novelty Detection

Generative Video Compression as Hierarchical Variational Inference

no code implementations Approximate Inference (AABI) Symposium 2021 Ruihan Yang, Yibo Yang, Joseph Marino, Stephan Mandt

Recent work by Marino et al. (2020) showed improved performance in sequential density estimation by combining masked autoregressive flows with hierarchical latent variable models.

Density Estimation Variational Inference +1

User-Dependent Neural Sequence Models for Continuous-Time Event Data

1 code implementation NeurIPS 2020 Alex Boyd, Robert Bamler, Stephan Mandt, Padhraic Smyth

Modeling such data can be very challenging, in particular for applications with many different types of events, since it requires a model to predict the event types as well as the time of occurrence.

Variational Inference

Scalable Gaussian Process Variational Autoencoders

1 code implementation26 Oct 2020 Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch

Conventional variational autoencoders fail in modeling correlations between data points due to their use of factorized priors.

Variational Dynamic Mixtures

no code implementations20 Oct 2020 Chen Qiu, Stephan Mandt, Maja Rudolph

Deep probabilistic time series forecasting models have become an integral part of machine learning.

Probabilistic Time Series Forecasting Time Series

Hierarchical Autoregressive Modeling for Neural Video Compression

3 code implementations ICLR 2021 Ruihan Yang, Yibo Yang, Joseph Marino, Stephan Mandt

Recent work by Marino et al. (2020) showed improved performance in sequential density estimation by combining masked autoregressive flows with hierarchical latent variable models.

Density Estimation Video Compression

Improving Sequential Latent Variable Models with Autoregressive Flows

no code implementations7 Oct 2020 Joseph Marino, Lei Chen, JiaWei He, Stephan Mandt

We propose an approach for improving sequence modeling based on autoregressive normalizing flows.

Generative Modeling for Atmospheric Convection

no code implementations3 Jul 2020 Griffin Mooers, Jens Tuyls, Stephan Mandt, Michael Pritchard, Tom Beucler

While cloud-resolving models can explicitly simulate the details of small-scale storm formation and morphology, these details are often ignored by climate models for lack of computational resources.

Clustering Dimensionality Reduction +1

Variational Bayesian Quantization

2 code implementations ICML 2020 Yibo Yang, Robert Bamler, Stephan Mandt

Our experimental results demonstrate the importance of taking into account posterior uncertainties, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE.

Image Compression Model Compression +2
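As a toy illustration of why posterior uncertainty matters for quantization (a made-up scheme of mine, not the paper's actual algorithm): weights whose posterior standard deviation is large can tolerate a coarser quantization grid without hurting the model.

```python
import numpy as np

def uncertainty_aware_quantize(mu, sigma, base_step=0.5):
    """Round each posterior mean to a grid whose spacing grows with the
    posterior standard deviation: uncertain weights get coarser grids."""
    step = base_step * sigma
    return np.round(mu / step) * step

mu = np.array([1.03, -0.2])      # posterior means
sigma = np.array([0.1, 1.0])     # posterior standard deviations
q = uncertainty_aware_quantize(mu, sigma)
# first weight is kept precisely; second collapses to 0 because it is uncertain
```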

Extreme Classification via Adversarial Softmax Approximation

1 code implementation ICLR 2020 Robert Bamler, Stephan Mandt

Training a classifier over a large number of classes, known as 'extreme classification', has become a topic of major interest with applications in technology, science, and e-commerce.

Classification General Classification

How Good is the Bayes Posterior in Deep Neural Networks Really?

1 code implementation ICML 2020 Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin

In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD.

Bayesian Inference Uncertainty Quantification

Machine Learning in Thermodynamics: Prediction of Activity Coefficients by Matrix Completion

no code implementations29 Jan 2020 Fabian Jirasek, Rodrigo A. S. Alves, Julie Damay, Robert A. Vandermeulen, Robert Bamler, Michael Bortz, Stephan Mandt, Marius Kloft, Hans Hasse

Activity coefficients, which are a measure of the non-ideality of liquid mixtures, are a key property in chemical engineering with relevance to modeling chemical and phase equilibria as well as transport processes.

BIG-bench Machine Learning Matrix Completion

Autoregressive Text Generation Beyond Feedback Loops

1 code implementation IJCNLP 2019 Florian Schmidt, Stephan Mandt, Thomas Hofmann

Autoregressive state transitions, where predictions are conditioned on past predictions, are the predominant choice for both deterministic and stochastic sequential models.

Sentence Text Generation

GP-VAE: Deep Probabilistic Time Series Imputation

2 code implementations9 Jul 2019 Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt

Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.

Dimensionality Reduction Multivariate Time Series Imputation +2

A Quantum Field Theory of Representation Learning

no code implementations4 Jul 2019 Robert Bamler, Stephan Mandt

Continuous symmetries and their breaking play a prominent role in contemporary physics.

Representation Learning Time Series +1

Augmenting and Tuning Knowledge Graph Embeddings

1 code implementation1 Jul 2019 Robert Bamler, Farnood Salehi, Stephan Mandt

Knowledge graph embeddings rank among the most successful methods for link prediction in knowledge graphs, i.e., the task of completing an incomplete collection of relational facts.

Knowledge Graph Embeddings Knowledge Graphs +1

Deep Generative Video Compression

no code implementations NeurIPS 2019 Jun Han, Salvator Lombardo, Christopher Schroers, Stephan Mandt

The usage of deep generative models for image compression has led to impressive performance gains over classical codecs while neural video compression is still in its infancy.

Image Compression Temporal Sequences +1

Probabilistic Knowledge Graph Embeddings

no code implementations27 Sep 2018 Farnood Salehi, Robert Bamler, Stephan Mandt

We develop a probabilistic extension of state-of-the-art embedding models for link prediction in relational knowledge graphs.

Knowledge Graph Embeddings Knowledge Graphs +2

Iterative Amortized Inference

1 code implementation ICML 2018 Joseph Marino, Yisong Yue, Stephan Mandt

The failure of these models to reach fully optimized approximate posterior estimates results in an amortization gap.

Inference Optimization Variational Inference

Quasi-Monte Carlo Variational Inference

no code implementations ICML 2018 Alexander Buchholz, Florian Wenzel, Stephan Mandt

We also propose a new algorithm for Monte Carlo objectives, where we operate with a constant learning rate and increase the number of QMC samples per iteration.

Variational Inference

Improving Optimization in Models With Continuous Symmetry Breaking

no code implementations ICML 2018 Robert Bamler, Stephan Mandt

We show that representation learning models for time series possess an approximate continuous symmetry that leads to slow convergence of gradient descent.

Representation Learning Time Series +2

Active Mini-Batch Sampling using Repulsive Point Processes

1 code implementation8 Apr 2018 Cheng Zhang, Cengiz Öztireli, Stephan Mandt, Giampiero Salvi

We first show that the phenomenon of variance reduction by diversified sampling generalizes in particular to non-stationary point processes.

Point Processes

Scalable Generalized Dynamic Topic Models

1 code implementation21 Mar 2018 Patrick Jähnichen, Florian Wenzel, Marius Kloft, Stephan Mandt

First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs).

Event Detection Gaussian Processes +2

Improving Optimization for Models With Continuous Symmetry Breaking

no code implementations8 Mar 2018 Robert Bamler, Stephan Mandt

We show that representation learning models for time series possess an approximate continuous symmetry that leads to slow convergence of gradient descent.

Representation Learning Time Series +2

Disentangled Sequential Autoencoder

3 code implementations ICML 2018 Yingzhen Li, Stephan Mandt

This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features.

Video Compression

Learning to Infer

no code implementations ICLR 2018 Joseph Marino, Yisong Yue, Stephan Mandt

Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders (VAEs).

Inference Optimization

Anomaly Detection with Generative Adversarial Networks

no code implementations ICLR 2018 Lucas Deecke, Robert Vandermeulen, Lukas Ruff, Stephan Mandt, Marius Kloft

Many anomaly detection methods exist that perform well on low-dimensional problems however there is a notable lack of effective methods for high-dimensional spaces, such as images.

Anomaly Detection

Advances in Variational Inference

no code implementations15 Nov 2017 Cheng Zhang, Judith Butepage, Hedvig Kjellstrom, Stephan Mandt

Many modern unsupervised or semi-supervised machine learning algorithms rely on Bayesian probabilistic models.

Variational Inference

Bayesian Paragraph Vectors

no code implementations10 Nov 2017 Geng Ji, Robert Bamler, Erik B. Sudderth, Stephan Mandt

Word2vec (Mikolov et al., 2013) has proven to be successful in natural language processing by capturing the semantic relationships between different words.

Sentiment Analysis Word Embeddings

Perturbative Black Box Variational Inference

no code implementations NeurIPS 2017 Robert Bamler, Cheng Zhang, Manfred Opper, Stephan Mandt

Black box variational inference (BBVI) with reparameterization gradients triggered the exploration of divergence measures other than the Kullback-Leibler (KL) divergence, such as alpha divergences.

Gaussian Processes Variational Inference

Structured Black Box Variational Inference for Latent Time Series Models

no code implementations4 Jul 2017 Robert Bamler, Stephan Mandt

Black box variational inference with reparameterization gradients (BBVI) allows us to explore a rich new class of Bayesian non-conjugate latent time series models; however, a naive application of BBVI to such structured variational models would scale quadratically in the number of time steps.

Collaborative Filtering Time Series +3

Factorized Variational Autoencoders for Modeling Audience Reactions to Movies

no code implementations CVPR 2017 Zhiwei Deng, Rajitha Navarathna, Peter Carr, Stephan Mandt, Yisong Yue, Iain Matthews, Greg Mori

Matrix and tensor factorization methods are often used for finding underlying low-dimensional patterns from noisy data.

Determinantal Point Processes for Mini-Batch Diversification

no code implementations1 May 2017 Cheng Zhang, Hedvig Kjellstrom, Stephan Mandt

The DPP relies on a similarity measure between data points and gives low probabilities to mini-batches which contain redundant data, and higher probabilities to mini-batches with more diverse data.

Point Processes
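A cheap stand-in for DPP sampling with the same qualitative effect (preferring mini-batches of mutually dissimilar points) is greedy farthest-point selection; the sketch below is illustrative only and is not the paper's method:

```python
import numpy as np

def diverse_minibatch(X, k):
    """Greedily pick k points, each time adding the point farthest from
    everything already chosen, so redundant near-duplicates are avoided."""
    chosen = [0]
    for _ in range(k - 1):
        dists = np.min(
            np.linalg.norm(X[:, None] - X[chosen][None], axis=-1), axis=1
        )
        chosen.append(int(np.argmax(dists)))
    return chosen

# two tight clusters; a diverse batch of 2 should span both of them
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
batch = diverse_minibatch(X, 2)
```

A true DPP additionally randomizes the selection, which matters for unbiasedness of the resulting gradient estimates.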

Stochastic Gradient Descent as Approximate Bayesian Inference

1 code implementation13 Apr 2017 Stephan Mandt, Matthew D. Hoffman, David M. Blei

Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions.

Bayesian Inference
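The core idea above can be sketched in a few lines: run SGD with a constant learning rate and treat the post-burn-in iterates as draws from a stationary distribution near the posterior. The model, data, and hyperparameters below are toy choices of mine, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)

# Constant-learning-rate SGD on a Gaussian mean: mini-batch noise keeps the
# iterates bouncing around the optimum instead of converging to a point.
theta, lr, samples = 0.0, 0.05, []
for step in range(2000):
    batch = rng.choice(data, size=10)
    grad = theta - batch.mean()   # gradient of the average 0.5*(theta - x)^2
    theta -= lr * grad
    if step >= 500:               # discard burn-in, keep the stationary phase
        samples.append(theta)

posterior_mean_estimate = np.mean(samples)  # close to the data mean of ~2.0
```

The paper's contribution is showing how the learning rate and batch size control the spread of this stationary distribution, so they can be tuned to match a target posterior.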

Dynamic Word Embeddings

1 code implementation ICML 2017 Robert Bamler, Stephan Mandt

We present a probabilistic language model for time-stamped text data which tracks the semantic evolution of individual words over time.

Language Modelling Variational Inference +1

Exponential Family Embeddings

no code implementations NeurIPS 2016 Maja R. Rudolph, Francisco J. R. Ruiz, Stephan Mandt, David M. Blei

In this paper, we develop exponential family embeddings, a class of methods that extends the idea of word embeddings to other types of high-dimensional data.

Dimensionality Reduction Movie Recommendation +3

A Variational Analysis of Stochastic Gradient Algorithms

no code implementations8 Feb 2016 Stephan Mandt, Matthew D. Hoffman, David M. Blei

With constant learning rates, it is a stochastic process that, after an initial phase of convergence, generates samples from a stationary distribution.

Variational Inference

Sparse Probit Linear Mixed Model

no code implementations16 Jul 2015 Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham, Christoph Lippert, Marius Kloft

Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes.

feature selection

Variational Tempering

no code implementations7 Nov 2014 Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, David Blei

Lastly, we develop local variational tempering, which assigns a latent temperature to each data point; this allows for dynamic annealing that varies across data.

Variational Inference

Smoothed Gradients for Stochastic Variational Inference

no code implementations NeurIPS 2014 Stephan Mandt, David Blei

It uses stochastic optimization to fit a variational distribution, following easy-to-compute noisy natural gradients.

Stochastic Optimization Variational Inference
