Search Results for author: Gunnar Rätsch

Found 57 papers, 22 papers with code

Multi-Modal Contrastive Learning for Online Clinical Time-Series Applications

no code implementations27 Mar 2024 Fabian Baldenweg, Manuel Burger, Gunnar Rätsch, Rita Kuznetsova

Electronic Health Record (EHR) datasets from Intensive Care Units (ICU) contain a diverse set of data modalities.

Contrastive Learning Time Series

Dynamic Survival Analysis for Early Event Prediction

no code implementations19 Mar 2024 Hugo Yèche, Manuel Burger, Dinara Veshchezerova, Gunnar Rätsch

This study advances Early Event Prediction (EEP) in healthcare through Dynamic Survival Analysis (DSA), offering a novel approach by integrating risk localization into alarm policies to enhance clinical event metrics.

Management Survival Analysis
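
As a rough illustration of the DSA idea (not the paper's exact formulation), the sketch below converts per-step hazard estimates into a fixed-horizon alarm policy; the helper name, horizon, and threshold are hypothetical:

```python
import numpy as np

def alarm_from_hazards(hazards, horizon, threshold=0.5):
    """Turn per-step hazard estimates h_t = P(event at t | survived to t)
    into an alarm that fires when the probability of an event within the
    next `horizon` steps exceeds `threshold`:

        P(event in (t, t+horizon]) = 1 - prod_{k=1..horizon} (1 - h_{t+k})
    """
    hazards = np.asarray(hazards, dtype=float)
    alarms = []
    for t in range(len(hazards) - horizon):
        window = hazards[t + 1 : t + 1 + horizon]
        p_event = 1.0 - np.prod(1.0 - window)
        alarms.append(p_event > threshold)
    return np.array(alarms)

# Example: hazards rising towards an event trigger the alarm early.
hazards = np.concatenate([np.full(20, 0.01), np.linspace(0.01, 0.4, 10)])
print(alarm_from_hazards(hazards, horizon=5).astype(int))
```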

On the Importance of Step-wise Embeddings for Heterogeneous Clinical Time-Series

no code implementations15 Nov 2023 Rita Kuznetsova, Alizée Pace, Manuel Burger, Hugo Yèche, Gunnar Rätsch

Recent deep-learning methods for tabular data now surpass classical approaches by better handling the severe heterogeneity of input features.

Time Series

Knowledge Graph Representations to enhance Intensive Care Time-Series Predictions

no code implementations13 Nov 2023 Samyak Jain, Manuel Burger, Gunnar Rätsch, Rita Kuznetsova

Intensive Care Units (ICU) require comprehensive patient data integration for enhanced clinical outcome predictions, crucial for assessing patient conditions.

Data Integration Knowledge Graphs +1

Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion

1 code implementation3 Oct 2023 Alexandru Meterez, Amir Joudaki, Francesco Orabona, Alexander Immer, Gunnar Rätsch, Hadi Daneshmand

We answer this question in the affirmative by giving a particular construction of a multi-layer perceptron (MLP) with linear activations and batch normalization that provably has bounded gradients at any depth.
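
A minimal sketch of the phenomenon under study, assuming PyTorch: stack linear layers with batch normalization and probe how the input-gradient norm behaves as depth grows. This only probes gradients empirically; it is not the paper's specific provable construction:

```python
import torch
import torch.nn as nn

def linear_bn_mlp(width, depth):
    # Linear activations + batch normalization, repeated `depth` times.
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width, bias=False), nn.BatchNorm1d(width)]
    return nn.Sequential(*layers)

torch.manual_seed(0)
x = torch.randn(256, 64, requires_grad=True)
for depth in (1, 10, 100):
    net = linear_bn_mlp(64, depth)
    # Gradient of a scalar loss w.r.t. the network input, at this depth.
    grad, = torch.autograd.grad(net(x).square().mean(), x)
    print(f"depth={depth:4d}  input-grad norm={grad.norm():.3f}")
```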

Multi-modal Graph Learning over UMLS Knowledge Graphs

1 code implementation10 Jul 2023 Manuel Burger, Gunnar Rätsch, Rita Kuznetsova

The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge.

Graph Learning Knowledge Graphs +1

Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels

1 code implementation6 Jun 2023 Alexander Immer, Tycho F. A. van der Ouderaa, Mark van der Wilk, Gunnar Rätsch, Bernhard Schölkopf

Recent works show that Bayesian model selection with Laplace approximations makes it possible to optimize such hyperparameters just like standard neural network parameters, using gradients and the training data.

Hyperparameter Optimization Model Selection
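
For intuition, a toy sketch of Laplace-approximated marginal-likelihood gradients, assuming PyTorch: for ridge regression the log joint is quadratic, so the Laplace approximation is exact and a prior-precision hyperparameter can be tuned by gradient ascent on the marginal likelihood. The paper's actual contribution, stochastic NTK-based estimators that scale this to large networks, is not reproduced here:

```python
import torch

torch.manual_seed(0)
X = torch.randn(100, 5)
y = X @ torch.randn(5) + 0.1 * torch.randn(100)
noise_var = 0.01

# Log prior precision is the hyperparameter tuned by gradient ascent
# on the (here exact) Laplace log marginal likelihood.
log_prec = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([log_prec], lr=0.1)

for _ in range(200):
    prec = log_prec.exp()
    # MAP weights of ridge regression (closed form for this toy model).
    A = X.T @ X / noise_var + prec * torch.eye(5)
    w_map = torch.linalg.solve(A, X.T @ y / noise_var)
    # Laplace approximation: log lik + log prior at the MAP,
    # minus half the log determinant of the Hessian A.
    log_lik = -0.5 * ((y - X @ w_map) ** 2).sum() / noise_var
    log_prior = 2.5 * log_prec - 0.5 * prec * (w_map @ w_map)
    lml = log_lik + log_prior - 0.5 * torch.logdet(A)
    opt.zero_grad()
    (-lml).backward()
    opt.step()

print("learned prior precision:", log_prec.exp().item())
```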

Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding

no code implementations1 Jun 2023 Alizée Pace, Hugo Yèche, Bernhard Schölkopf, Gunnar Rätsch, Guy Tennenholtz

A prominent challenge of offline reinforcement learning (RL) is the issue of hidden confounding: unobserved variables may influence both the actions taken by the agent and the observed outcomes.

Management Offline RL +2

Improving Neural Additive Models with Bayesian Principles

no code implementations26 May 2023 Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin

Neural additive models (NAMs) enhance the transparency of deep neural networks by handling input features in separate additive sub-networks.

Additive models Bayesian Inference +1

On the Importance of Clinical Notes in Multi-modal Learning for EHR Data

no code implementations6 Dec 2022 Severin Husmann, Hugo Yèche, Gunnar Rätsch, Rita Kuznetsova

Understanding deep learning model behavior is critical to accepting machine learning-based decision support systems in the medical community.

Descriptive

Temporal Label Smoothing for Early Event Prediction

1 code implementation29 Aug 2022 Hugo Yèche, Alizée Pace, Gunnar Rätsch, Rita Kuznetsova

Temporal label smoothing (TLS) reduces the number of missed events by up to a factor of two compared with previously used approaches to early event prediction.

Binary Classification Circulatory Failure +3
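
A hedged sketch of the general idea in NumPy: replace the hard 0/1 "event within horizon" label with a target that decays smoothly with time to event. The sigmoid parameterization and the `sharpness` knob below are illustrative choices, not the paper's exact smoothing scheme:

```python
import numpy as np

def temporal_label_smoothing(time_to_event, horizon, sharpness=4.0):
    """Soft target for 'event within `horizon` steps': close to 1 just
    before the event, 0.5 at the horizon boundary, decaying beyond it.
    (Hypothetical parameterization for illustration.)"""
    t = np.asarray(time_to_event, dtype=float)
    return 1.0 / (1.0 + np.exp(sharpness * (t - horizon) / horizon))

# Targets for samples 1, 6, 12, 24, and 48 steps before the event.
print(temporal_label_smoothing([1, 6, 12, 24, 48], horizon=12).round(3))
```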

Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization

1 code implementation26 Feb 2022 Gideon Dresdner, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello, Volkan Cevher, Alp Yurtsever

We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.

Clustering Matrix Completion
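
A sketch of the one-sample template this line of work builds on, in NumPy: a Frank-Wolfe (conditional gradient) loop that draws a single summand per iteration and averages gradients with a decaying weight to control variance. The $\ell_1$-ball oracle and step sizes are standard textbook choices, not the paper's exact algorithm, which additionally handles a non-smooth term:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 200, 20
A = rng.normal(size=(n, dim))
b = rng.normal(size=n)

def lmo_l1(grad, radius=1.0):
    # Linear minimization oracle for the l1 ball: best signed vertex.
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -radius * np.sign(grad[i])
    return v

x = np.zeros(dim)
d = np.zeros(dim)                      # averaged one-sample gradient estimate
for t in range(1, 2001):
    i = rng.integers(n)                # one sample per iteration
    g = (A[i] @ x - b[i]) * A[i]       # grad of 0.5 * (a_i x - b_i)^2
    rho = 1.0 / t ** 0.5
    d = (1 - rho) * d + rho * g        # variance-reduced direction estimate
    x = x + (2.0 / (t + 2)) * (lmo_l1(d) - x)

print("objective:", 0.5 * np.mean((A @ x - b) ** 2))
```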

Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations

1 code implementation22 Feb 2022 Alexander Immer, Tycho F. A. van der Ouderaa, Gunnar Rätsch, Vincent Fortuin, Mark van der Wilk

We develop a convenient gradient-based method for selecting the data augmentation without validation data during training of a deep neural network.

Data Augmentation Gaussian Processes +1

HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data

1 code implementation NeurIPS Datasets and Benchmarks 2021 Hugo Yèche, Rita Kuznetsova, Marc Zimmermann, Matthias Hüser, Xinrui Lyu, Martin Faltys, Gunnar Rätsch

The recent success of machine learning methods applied to time series collected from Intensive Care Units (ICU) exposes the lack of standardized machine learning benchmarks for developing and comparing such methods.

BIG-bench Machine Learning Circulatory Failure +7

Neighborhood Contrastive Learning Applied to Online Patient Monitoring

1 code implementation9 Jun 2021 Hugo Yèche, Gideon Dresdner, Francesco Locatello, Matthias Hüser, Gunnar Rätsch

Intensive care units (ICU) are increasingly looking towards machine learning for methods to provide online monitoring of critically ill patients.

BIG-bench Machine Learning Contrastive Learning +3

Boosting Variational Inference With Locally Adaptive Step-Sizes

no code implementations19 May 2021 Gideon Dresdner, Saurav Shekhar, Fabian Pedregosa, Francesco Locatello, Gunnar Rätsch

Variational Inference makes a trade-off between the capacity of the variational family and the tractability of finding an approximate posterior distribution.

Variational Inference

Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning

1 code implementation11 Apr 2021 Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan

Marginal-likelihood-based model selection, though promising, is rarely used in deep learning due to estimation difficulties.

Image Classification Model Selection +2

On Disentanglement in Gaussian Process Variational Autoencoders

no code implementations AABI Symposium 2022 Simon Bing, Vincent Fortuin, Gunnar Rätsch

While many models have been introduced to learn such disentangled representations, only few attempt to explicitly exploit the structure of sequential data.

Disentanglement Time Series +1

WRSE -- a non-parametric weighted-resolution ensemble for predicting individual survival distributions in the ICU

no code implementations2 Nov 2020 Jonathan Heitz, Joanna Ficek, Martin Faltys, Tobias M. Merz, Gunnar Rätsch, Matthias Hüser

Dynamic assessment of mortality risk in the intensive care unit (ICU) can be used to stratify patients, inform about treatment effectiveness or serve as part of an early-warning system.

A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation

no code implementations27 Oct 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The idea behind the \emph{unsupervised} learning of \emph{disentangled} representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Scalable Gaussian Process Variational Autoencoders

1 code implementation26 Oct 2020 Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch

Conventional variational autoencoders fail in modeling correlations between data points due to their use of factorized priors.
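
To see why a factorized prior loses temporal correlations, a small NumPy sketch comparing latent trajectories drawn from a GP prior versus i.i.d. Gaussians; the kernel and lengthscale are arbitrary illustrative choices:

```python
import numpy as np

def rbf_kernel(ts, lengthscale=5.0):
    d = ts[:, None] - ts[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

T, latent_dim = 100, 3
ts = np.arange(T, dtype=float)
K = rbf_kernel(ts) + 1e-6 * np.eye(T)   # jitter for numerical stability
L = np.linalg.cholesky(K)

rng = np.random.default_rng(0)
z_gp = L @ rng.normal(size=(T, latent_dim))      # correlated across time
z_factorized = rng.normal(size=(T, latent_dim))  # independent per step

# Lag-1 autocorrelation: high for the GP prior, near zero when factorized.
print(np.corrcoef(z_gp[:-1, 0], z_gp[1:, 0])[0, 1],
      np.corrcoef(z_factorized[:-1, 0], z_factorized[1:, 0])[0, 1])
```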

A Commentary on the Unsupervised Learning of Disentangled Representations

no code implementations28 Jul 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.

Disentangling Factors of Variation Using Few Labels

no code implementations ICLR Workshop LLD 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not enable consistent learning of disentangled representations.

Disentanglement Model Selection

Weakly-Supervised Disentanglement Without Compromises

3 code implementations ICML 2020 Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen

Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.

Disentanglement Fairness

DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps

2 code implementations3 Oct 2019 Laura Manduchi, Matthias Hüser, Julia Vogt, Gunnar Rätsch, Vincent Fortuin

We show that DPSOM achieves superior clustering performance compared to current deep clustering methods on MNIST/Fashion-MNIST, while maintaining the favourable visualization properties of SOMs.

Clustering Deep Clustering +4

Variational pSOM: Deep Probabilistic Clustering with Self-Organizing Maps

no code implementations25 Sep 2019 Laura Manduchi, Matthias Hüser, Gunnar Rätsch, Vincent Fortuin

On the one hand, there are highly performant deep clustering models; on the other, there are interpretable representation learning techniques, which often rely on latent topological structures such as self-organizing maps.

Clustering Deep Clustering +3

GP-VAE: Deep Probabilistic Time Series Imputation

2 code implementations9 Jul 2019 Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt

Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.

Dimensionality Reduction Multivariate Time Series Imputation +2
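
A stripped-down NumPy illustration of why temporal correlations help imputation: condition a GP on the observed steps and predict the missing ones. The full model amortizes this with a deep encoder; the kernel, noise level, and missingness rate below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 60
ts = np.arange(T, dtype=float)
signal = np.sin(ts / 6.0) + 0.05 * rng.normal(size=T)
observed = rng.random(T) > 0.4          # roughly 60% of steps observed

def rbf(a, b, ls=6.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# GP posterior mean at the missing steps, conditioned on observed ones.
K_oo = rbf(ts[observed], ts[observed]) + 0.05 * np.eye(observed.sum())
K_mo = rbf(ts[~observed], ts[observed])
imputed = K_mo @ np.linalg.solve(K_oo, signal[observed])

print("imputation MAE:", np.abs(imputed - signal[~observed]).mean())
```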

Disentangling Factors of Variation Using Few Labels

no code implementations3 May 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not enable consistent learning of disentangled representations.

Disentanglement Model Selection

Unsupervised Extraction of Phenotypes from Cancer Clinical Notes for Association Studies

no code implementations29 Apr 2019 Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch

To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.

Clustering

Meta-Learning Mean Functions for Gaussian Processes

no code implementations23 Jan 2019 Vincent Fortuin, Heiko Strathmann, Gunnar Rätsch

In meta-learning for Gaussian process models, approaches have mostly focused on learning the kernel function of the prior, but not on learning its mean function.

Gaussian Processes Meta-Learning

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

8 code implementations ICML 2019 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Scalable Gaussian Processes on Discrete Domains

no code implementations24 Oct 2018 Vincent Fortuin, Gideon Dresdner, Heiko Strathmann, Gunnar Rätsch

We explore different techniques for selecting inducing points on discrete domains, including greedy selection, determinantal point processes, and simulated annealing.

Gaussian Processes Point Processes
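
As one concrete example of the strategies compared, a NumPy sketch of greedy inducing-point selection (pivoted-Cholesky-style variance reduction) under a simple Hamming kernel on sequences; the kernel and scoring rule are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def hamming_kernel(X, Y, gamma=0.3):
    # A simple kernel on a discrete domain (sequences over a finite alphabet).
    d = (X[:, None, :] != Y[None, :, :]).sum(-1)
    return np.exp(-gamma * d)

def greedy_inducing_points(X, m):
    """Greedily pick inducing points that maximize the reduction in total
    residual prior variance, a standard greedy heuristic."""
    chosen = []
    resid = hamming_kernel(X, X)
    for _ in range(m):
        # Score each candidate by the trace reduction it would achieve.
        scores = (resid ** 2).sum(0) / np.maximum(np.diag(resid), 1e-12)
        j = int(np.argmax(scores))
        chosen.append(j)
        # Rank-one downdate of the residual kernel (pivoted Cholesky step).
        c = resid[:, j] / np.sqrt(resid[j, j])
        resid = resid - np.outer(c, c)
    return chosen

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(50, 12))   # 50 sequences of length 12
print(greedy_inducing_points(X, m=5))
```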

Boosting Black Box Variational Inference

1 code implementation NeurIPS 2018 Francesco Locatello, Gideon Dresdner, Rajiv Khanna, Isabel Valera, Gunnar Rätsch

Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.

Variational Inference

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

6 code implementations ICLR 2019 Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch

We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.

Clustering Dimensionality Reduction +3

Competitive Training of Mixtures of Independent Deep Generative Models

no code implementations30 Apr 2018 Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.

Clustering

On Matching Pursuit and Coordinate Descent

no code implementations ICML 2018 Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi

Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
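
For reference, a plain NumPy implementation of matching pursuit; the connection the paper exploits is that steepest coordinate descent is recovered as the special case where the dictionary is the standard basis:

```python
import numpy as np

def matching_pursuit(D, y, steps=50):
    """Greedily select the dictionary atom most correlated with the
    residual and take an exact line-search step along it."""
    resid = y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(steps):
        corr = D.T @ resid
        j = int(np.argmax(np.abs(corr)))
        step = corr[j] / (D[:, j] @ D[:, j])
        coef[j] += step
        resid -= step * D[:, j]
    return coef, resid

rng = np.random.default_rng(0)
D = rng.normal(size=(100, 30))
y = D[:, [2, 7]] @ np.array([1.5, -2.0]) + 0.01 * rng.normal(size=100)
coef, resid = matching_pursuit(D, y)
print("residual norm:", np.linalg.norm(resid))
```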

Boosting Variational Inference: an Optimization Perspective

no code implementations5 Aug 2017 Francesco Locatello, Rajiv Khanna, Joydeep Ghosh, Gunnar Rätsch

Variational inference is a popular technique to approximate a possibly intractable Bayesian posterior with a more tractable one.

Variational Inference

Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs

6 code implementations ICLR 2018 Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch

We also describe novel evaluation methods for GANs: we generate a synthetic labelled training dataset, train a model on the synthetic data and evaluate it on a real test set, and vice versa.

Time Series Time Series Analysis +1
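
A hedged sketch of the "train on synthetic, test on real" (TSTR) protocol using scikit-learn; the toy arrays stand in for a generator's output, and the classifier choice is arbitrary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def tstr_score(synthetic, real, clf=None):
    """Train on Synthetic, Test on Real (TSTR); the TRTS variant simply
    swaps the two datasets."""
    X_syn, y_syn = synthetic
    X_real, y_real = real
    clf = clf or LogisticRegression(max_iter=1000)
    clf.fit(X_syn, y_syn)
    return roc_auc_score(y_real, clf.predict_proba(X_real)[:, 1])

# Toy stand-in data: a GAN's samples would replace X_syn / y_syn.
rng = np.random.default_rng(0)
X_real = rng.normal(size=(500, 10))
y_real = (X_real[:, 0] > 0).astype(int)
X_syn = rng.normal(size=(500, 10))
y_syn = (X_syn[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)
print("TSTR AUROC:", tstr_score((X_syn, y_syn), (X_real, y_real)))
```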

Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

no code implementations NeurIPS 2017 Francesco Locatello, Michael Tschannen, Gunnar Rätsch, Martin Jaggi

Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees.

Learning Unitary Operators with Help From u(n)

1 code implementation17 Jul 2016 Stephanie L. Hyland, Gunnar Rätsch

A major challenge in the training of recurrent neural networks is the so-called vanishing or exploding gradient problem.
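
A small PyTorch sketch of the underlying parameterization idea: build a skew-Hermitian matrix, i.e. an element of the Lie algebra u(n), from free real parameters and exponentiate it to obtain a unitary matrix. The specific basis used in the paper differs from this naive construction:

```python
import torch

def unitary_from_params(theta, n):
    """U = exp(L) with L skew-Hermitian (L^H = -L), so U is unitary."""
    A = theta[: n * n].reshape(n, n)     # parameters for the real part
    B = theta[n * n :].reshape(n, n)     # parameters for the imaginary part
    L = (A - A.T) + 1j * (B + B.T)       # skew-Hermitian by construction
    return torch.matrix_exp(L)

n = 4
theta = torch.randn(2 * n * n)
U = unitary_from_params(theta, n)
# Verify unitarity: U U^H should equal the identity.
print(torch.allclose(U @ U.conj().T, torch.eye(n, dtype=U.dtype), atol=1e-5))
```

Because the map from `theta` to `U` is differentiable, a recurrent network using `U` as its transition matrix keeps gradient norms from vanishing or exploding while remaining trainable by ordinary backpropagation.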

A Generative Model of Words and Relationships from Multiple Sources

no code implementations1 Oct 2015 Stephanie L. Hyland, Theofanis Karaletsos, Gunnar Rätsch

We propose a generative model which integrates evidence from diverse data sources, enabling the sharing of semantic information.

Link Prediction

Framework for Multi-task Multiple Kernel Learning and Applications in Genome Analysis

no code implementations30 Jun 2015 Christian Widmer, Marius Kloft, Vipin T Sreedharan, Gunnar Rätsch

We present a general regularization-based framework for Multi-task learning (MTL), in which the similarity between tasks can be learned or refined using $\ell_p$-norm Multiple Kernel learning (MKL).

Multi-Task Learning

Bayesian representation learning with oracle constraints

no code implementations16 Jun 2015 Theofanis Karaletsos, Serge Belongie, Gunnar Rätsch

Representation learning systems typically rely on massive amounts of labeled data in order to be trained to high accuracy.

Metric Learning Representation Learning

Automatic Relevance Determination For Deep Generative Models

no code implementations28 May 2015 Theofanis Karaletsos, Gunnar Rätsch

A recurring problem when building probabilistic latent variable models is regularization and model selection, for instance, the choice of the dimensionality of the latent space.

Model Selection Variational Inference

Probabilistic Clustering of Time-Evolving Distance Data

no code implementations14 Apr 2015 Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch

We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.

Clustering

Hierarchical Multitask Structured Output Learning for Large-scale Sequence Segmentation

no code implementations NeurIPS 2011 Nico Goernitz, Christian Widmer, Georg Zeller, Andre Kahles, Gunnar Rätsch, Sören Sonnenburg

We present a novel regularization-based Multitask Learning (MTL) formulation for Structured Output (SO) prediction for the case of hierarchical task relations.
