Search Results for author: Gunnar Rätsch

Found 44 papers, 15 papers with code

Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization

1 code implementation 26 Feb 2022 Gideon Dresdner, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello, Volkan Cevher, Alp Yurtsever

We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.

Matrix Completion
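
The tag above hints at the setting: conditional gradient (Frank-Wolfe) methods handle constraints such as a nuclear-norm ball through a linear minimization oracle rather than a projection. As a hedged sketch, and not the paper's one-sample stochastic variant, here is a minimal deterministic Frank-Wolfe loop for matrix completion over a nuclear-norm ball; all names and parameters are illustrative:

```python
import numpy as np

def frank_wolfe_matrix_completion(M_obs, mask, radius, steps=200):
    """Deterministic Frank-Wolfe on f(X) = 0.5*||mask*(X - M_obs)||_F^2
    over the nuclear-norm ball {X : ||X||_* <= radius}."""
    X = np.zeros_like(M_obs)
    for t in range(steps):
        grad = mask * (X - M_obs)              # gradient of the masked loss
        # LMO for the nuclear-norm ball: top singular pair of -grad
        u, s, vt = np.linalg.svd(-grad, full_matrices=False)
        S = radius * np.outer(u[:, 0], vt[0])  # rank-1 extreme point
        gamma = 2.0 / (t + 2.0)                # standard FW step size
        X = (1 - gamma) * X + gamma * S
    return X

# Rank-1 ground truth with roughly half the entries observed
rng = np.random.default_rng(0)
a, b = rng.normal(size=(20, 1)), rng.normal(size=(1, 20))
M = a @ b
mask = (rng.random(M.shape) < 0.5).astype(float)
X = frank_wolfe_matrix_completion(mask * M, mask,
                                  radius=np.linalg.norm(M, 'nuc'))
```

The LMO needs only the leading singular pair, which is what makes conditional gradient methods attractive for matrix completion; the paper's contribution is a cheaper one-sample stochastic gradient estimator inside this template.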

Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations

no code implementations 22 Feb 2022 Alexander Immer, Tycho F. A. van der Ouderaa, Vincent Fortuin, Gunnar Rätsch, Mark van der Wilk

Data augmentation is commonly applied to improve performance of deep learning by enforcing the knowledge that certain transformations on the input preserve the output.

Data Augmentation Gaussian Processes +1

HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data

1 code implementation NeurIPS Datasets and Benchmarks 2021 Hugo Yèche, Rita Kuznetsova, Marc Zimmermann, Matthias Hüser, Xinrui Lyu, Martin Faltys, Gunnar Rätsch

The recent success of machine learning methods applied to time series collected from Intensive Care Units (ICU) exposes the lack of standardized machine learning benchmarks for developing and comparing such methods.

Circulatory Failure ICU Mortality +5

Neighborhood Contrastive Learning Applied to Online Patient Monitoring

1 code implementation 9 Jun 2021 Hugo Yèche, Gideon Dresdner, Francesco Locatello, Matthias Hüser, Gunnar Rätsch

Intensive care units (ICUs) are increasingly looking to machine learning for methods that provide online monitoring of critically ill patients.

Contrastive Learning Data Augmentation +1

Boosting Variational Inference With Locally Adaptive Step-Sizes

no code implementations 19 May 2021 Gideon Dresdner, Saurav Shekhar, Fabian Pedregosa, Francesco Locatello, Gunnar Rätsch

Variational Inference makes a trade-off between the capacity of the variational family and the tractability of finding an approximate posterior distribution.

Variational Inference

Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning

1 code implementation 11 Apr 2021 Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan

Marginal-likelihood-based model selection, though promising, is rarely used in deep learning because of estimation difficulties.

Image Classification Model Selection +1
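
For intuition on what a Laplace estimate of the log marginal likelihood looks like, here is a hedged toy sketch (not the paper's scalable estimator): in a conjugate Gaussian linear model the Laplace approximation is exact, so it can be checked against the closed form. All variable names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)
x = rng.normal(size=30)
sigma2, alpha = 0.5, 1.0          # noise variance, prior precision
y = 0.7 * x + rng.normal(scale=np.sqrt(sigma2), size=30)

# MAP estimate and Hessian of the negative log joint (quadratic, so exact)
H = x @ x / sigma2 + alpha
w_map = (x @ y / sigma2) / H

# Laplace: log p(y) ~ log p(y, w_MAP) + (d/2) log(2*pi) - (1/2) log det H
log_joint = (norm.logpdf(y, loc=w_map * x, scale=np.sqrt(sigma2)).sum()
             + norm.logpdf(w_map, scale=1 / np.sqrt(alpha)))
laplace_lml = log_joint + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(H)

# Exact log marginal likelihood of the Gaussian-linear model for comparison
cov = sigma2 * np.eye(30) + np.outer(x, x) / alpha
exact_lml = multivariate_normal.logpdf(y, mean=np.zeros(30), cov=cov)
```

In deep models the Hessian term is the expensive part; the paper's point is making that estimate scalable enough to drive model selection.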

On Disentanglement in Gaussian Process Variational Autoencoders

no code implementations AABI Symposium 2022 Simon Bing, Vincent Fortuin, Gunnar Rätsch

While many models have been introduced to learn such disentangled representations, only few attempt to explicitly exploit the structure of sequential data.

Disentanglement Time Series

WRSE -- a non-parametric weighted-resolution ensemble for predicting individual survival distributions in the ICU

no code implementations 2 Nov 2020 Jonathan Heitz, Joanna Ficek, Martin Faltys, Tobias M. Merz, Gunnar Rätsch, Matthias Hüser

Dynamic assessment of mortality risk in the intensive care unit (ICU) can be used to stratify patients, inform about treatment effectiveness or serve as part of an early-warning system.

A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation

no code implementations 27 Oct 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The idea behind the \emph{unsupervised} learning of \emph{disentangled} representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Scalable Gaussian Process Variational Autoencoders

1 code implementation 26 Oct 2020 Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch

Conventional variational autoencoders fail in modeling correlations between data points due to their use of factorized priors.

A Commentary on the Unsupervised Learning of Disentangled Representations

no code implementations 28 Jul 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.

Disentangling Factors of Variations Using Few Labels

no code implementations ICLR Workshop LLD 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow one to consistently learn disentangled representations.

Disentanglement Model Selection

Weakly-Supervised Disentanglement Without Compromises

2 code implementations ICML 2020 Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen

Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.

Disentanglement Fairness

DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps

2 code implementations 3 Oct 2019 Laura Manduchi, Matthias Hüser, Julia Vogt, Gunnar Rätsch, Vincent Fortuin

We show that DPSOM achieves superior clustering performance compared to current deep clustering methods on MNIST/Fashion-MNIST, while maintaining the favourable visualization properties of SOMs.

Deep Clustering Representation Learning +3
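
As a rough illustration of the SOM side of DPSOM (this is a plain self-organizing map, not the paper's probabilistic deep model), here is a minimal 1-D SOM training loop; all parameters are illustrative:

```python
import numpy as np

def train_som(data, n_nodes=10, epochs=40, lr=0.5, sigma=2.0):
    """Minimal 1-D self-organizing map: pull the best-matching unit (BMU)
    and its grid neighbours toward each sample, shrinking the learning
    rate and neighbourhood width over time."""
    rng = np.random.default_rng(0)
    nodes = rng.normal(size=(n_nodes, data.shape[1]))
    grid = np.arange(n_nodes)
    for e in range(epochs):
        decay = 1.0 - e / epochs
        for x in data:
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))
            width = max(sigma * decay, 0.3)
            h = np.exp(-0.5 * ((grid - bmu) / width) ** 2)  # neighbourhood
            nodes += lr * decay * h[:, None] * (x - nodes)
    return nodes

# Three tight clusters in 2-D
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(loc=c, scale=0.1, size=(50, 2))
                       for c in (-3.0, 0.0, 3.0)])
nodes = train_som(data)
# quantization error: mean distance from each sample to its nearest node
qe = np.mean([np.min(np.linalg.norm(nodes - x, axis=1)) for x in data])
```

The neighbourhood term is what gives SOMs their interpretable topology; DPSOM replaces the hard BMU assignment with a probabilistic clustering objective learned jointly with a VAE.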

Variational pSOM: Deep Probabilistic Clustering with Self-Organizing Maps

no code implementations 25 Sep 2019 Laura Manduchi, Matthias Hüser, Gunnar Rätsch, Vincent Fortuin

On the one hand there are highly performant deep clustering models; on the other, interpretable representation learning techniques that often rely on latent topological structures such as self-organizing maps.

Deep Clustering Representation Learning +1

GP-VAE: Deep Probabilistic Time Series Imputation

1 code implementation 9 Jul 2019 Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt

Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.

Dimensionality Reduction Multivariate Time Series Imputation +1
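
The core idea, stripped of the deep part: a temporal prior lets observed timestamps inform missing ones. A hedged, minimal stand-in using plain GP regression (scikit-learn assumed available; this is not the GP-VAE model itself):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)
signal = np.sin(t)                      # ground-truth series
observed = rng.random(100) > 0.3        # roughly 30% of points missing

# Fit a GP on the observed timestamps only
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)
                              + WhiteKernel(noise_level=1e-3))
gp.fit(t[observed, None], signal[observed])

imputed = gp.predict(t[:, None])        # smooth reconstruction everywhere
```

GP-VAE applies the same smoothness assumption in a learned latent space rather than directly on the observations, which is what lets it scale to high-dimensional medical or financial series.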

Disentangling Factors of Variation Using Few Labels

no code implementations 3 May 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow one to consistently learn disentangled representations.

Disentanglement Model Selection

Unsupervised Extraction of Phenotypes from Cancer Clinical Notes for Association Studies

no code implementations 29 Apr 2019 Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch

To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.

Meta-Learning Mean Functions for Gaussian Processes

no code implementations 23 Jan 2019 Vincent Fortuin, Heiko Strathmann, Gunnar Rätsch

When it comes to meta-learning in Gaussian process models, approaches in this setting have mostly focused on learning the kernel function of the prior, but not on learning its mean function.

Gaussian Processes Meta-Learning
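
A toy illustration of why the mean function matters (an ordinary least-squares mean stands in for the paper's meta-learned one): far from the data, a zero-mean GP falls back toward zero, while a GP with a fitted mean keeps extrapolating the trend. Names and numbers are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(40, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=40)   # strong linear trend

# Fitted (here: least-squares) mean function; the GP models the residuals
slope = np.linalg.lstsq(X, y, rcond=None)[0][0]
gp = GaussianProcessRegressor(kernel=RBF(), alpha=0.01)
gp.fit(X, y - slope * X[:, 0])

X_far = np.array([[20.0]])                      # far outside the data range
pred_with_mean = slope * X_far[0, 0] + gp.predict(X_far)[0]
pred_zero_mean = (GaussianProcessRegressor(kernel=RBF(), alpha=0.01)
                  .fit(X, y).predict(X_far)[0])
```

The zero-mean GP reverts to its prior mean of zero away from the training inputs, which is exactly the failure mode a learned mean function is meant to fix.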

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

6 code implementations ICML 2019 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Scalable Gaussian Processes on Discrete Domains

no code implementations 24 Oct 2018 Vincent Fortuin, Gideon Dresdner, Heiko Strathmann, Gunnar Rätsch

We explore different techniques for selecting inducing points on discrete domains, including greedy selection, determinantal point processes, and simulated annealing.

Gaussian Processes Point Processes
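
As a hedged sketch of one of the listed techniques: greedy selection is implemented below as repeatedly picking the domain point with the highest GP posterior variance given the points chosen so far (a common variance-greedy heuristic; not necessarily the exact criterion used in the paper):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """RBF kernel between two 1-D point sets."""
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-0.5 * (d / ls) ** 2)

def greedy_inducing_points(domain, m, noise=1e-6):
    """Greedily pick the point with the largest posterior variance
    given the inducing points selected so far."""
    chosen = []
    for _ in range(m):
        if not chosen:
            var = np.ones_like(domain)          # prior variance k(x, x) = 1
        else:
            Z = domain[chosen]
            Kzz = rbf(Z, Z) + noise * np.eye(len(Z))
            Kxz = rbf(domain, Z)
            # posterior variance: k(x,x) - k(x,Z) Kzz^{-1} k(Z,x)
            var = 1.0 - np.einsum('ij,ij->i', Kxz @ np.linalg.inv(Kzz), Kxz)
        chosen.append(int(np.argmax(var)))
    return np.array(chosen)

domain = np.linspace(0, 10, 101)                # a discrete 1-D domain
idx = greedy_inducing_points(domain, m=5)
```

On a discrete domain this loop only ever evaluates the kernel at the given points, which is what makes such selection strategies applicable where gradient-based inducing-point optimization is not.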

Boosting Black Box Variational Inference

1 code implementation NeurIPS 2018 Francesco Locatello, Gideon Dresdner, Rajiv Khanna, Isabel Valera, Gunnar Rätsch

Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.

Variational Inference

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

6 code implementations ICLR 2019 Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch

We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.

Dimensionality Reduction Representation Learning +2

Competitive Training of Mixtures of Independent Deep Generative Models

no code implementations 30 Apr 2018 Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.

On Matching Pursuit and Coordinate Descent

no code implementations ICML 2018 Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi

Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
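
The connection can be made concrete in a few lines: over a dictionary of unit-norm columns, matching pursuit's atom-selection rule (largest correlation with the residual) coincides with the Gauss-Southwell rule of steepest coordinate descent on the least-squares objective. A hedged sketch with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
A /= np.linalg.norm(A, axis=0)       # unit-norm atoms (columns)
y = rng.normal(size=50)

x = np.zeros(10)
for _ in range(2000):
    residual = y - A @ x
    corr = A.T @ residual            # MP: correlations with the residual
    i = np.argmax(np.abs(corr))      # == Gauss-Southwell coordinate choice
    x[i] += corr[i]                  # exact line search (atoms unit-norm)
```

Because the update is an exact coordinate minimization of 0.5*||Ax - y||^2, the iterates converge to the least-squares solution; the paper's unified analysis gives the corresponding sublinear and linear rates.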

Boosting Variational Inference: an Optimization Perspective

no code implementations 5 Aug 2017 Francesco Locatello, Rajiv Khanna, Joydeep Ghosh, Gunnar Rätsch

Variational inference is a popular technique to approximate a possibly intractable Bayesian posterior with a more tractable one.

Variational Inference

Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs

6 code implementations ICLR 2018 Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch

We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset and evaluate, on a real test set, the performance of a model trained on the synthetic data, and vice versa.

Time Series

Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

no code implementations NeurIPS 2017 Francesco Locatello, Michael Tschannen, Gunnar Rätsch, Martin Jaggi

Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees.

Learning Unitary Operators with Help From u(n)

1 code implementation 17 Jul 2016 Stephanie L. Hyland, Gunnar Rätsch

A major challenge in the training of recurrent neural networks is the so-called vanishing or exploding gradient problem.
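
The mechanism behind the title: u(n), the Lie algebra of skew-Hermitian matrices, maps onto the unitary group under the matrix exponential, and a unitary transition matrix preserves norms, so repeated application neither shrinks nor blows up the hidden state. A hedged numerical sketch:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
A = B - B.conj().T                 # skew-Hermitian: A^H = -A, so A is in u(n)
U = expm(A)                        # exp of skew-Hermitian => unitary

# Applying U repeatedly leaves the vector norm unchanged
h = rng.normal(size=8) + 1j * rng.normal(size=8)
norms = [np.linalg.norm(np.linalg.matrix_power(U, k) @ h) for k in range(50)]
```

Parameterizing the transition matrix through the Lie-algebra coordinates keeps the unitarity constraint satisfied by construction while remaining amenable to gradient-based training.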

A Generative Model of Words and Relationships from Multiple Sources

no code implementations 1 Oct 2015 Stephanie L. Hyland, Theofanis Karaletsos, Gunnar Rätsch

We propose a generative model which integrates evidence from diverse data sources, enabling the sharing of semantic information.

Link Prediction

Framework for Multi-task Multiple Kernel Learning and Applications in Genome Analysis

no code implementations 30 Jun 2015 Christian Widmer, Marius Kloft, Vipin T Sreedharan, Gunnar Rätsch

We present a general regularization-based framework for Multi-task learning (MTL), in which the similarity between tasks can be learned or refined using $\ell_p$-norm Multiple Kernel learning (MKL).

Multi-Task Learning

Bayesian representation learning with oracle constraints

no code implementations 16 Jun 2015 Theofanis Karaletsos, Serge Belongie, Gunnar Rätsch

Representation learning systems typically rely on massive amounts of labeled data in order to be trained to high accuracy.

Metric Learning Representation Learning

Automatic Relevance Determination For Deep Generative Models

no code implementations 28 May 2015 Theofanis Karaletsos, Gunnar Rätsch

A recurring problem when building probabilistic latent variable models is regularization and model selection, for instance, the choice of the dimensionality of the latent space.

Model Selection Variational Inference

Probabilistic Clustering of Time-Evolving Distance Data

no code implementations 14 Apr 2015 Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch

We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.

Hierarchical Multitask Structured Output Learning for Large-scale Sequence Segmentation

no code implementations NeurIPS 2011 Nico Goernitz, Christian Widmer, Georg Zeller, Andre Kahles, Gunnar Rätsch, Sören Sonnenburg

We present a novel regularization-based Multitask Learning (MTL) formulation for Structured Output (SO) prediction for the case of hierarchical task relations.
