no code implementations • 25 Nov 2024 • Manuel Burger, Fedor Sergeev, Malte Londschien, Daphné Chopard, Hugo Yèche, Eike Gerdes, Polina Leshetkina, Alexander Morgenroth, Zeynep Babür, Jasmina Bogojeska, Martin Faltys, Rita Kuznetsova, Gunnar Rätsch
This work aims to establish a foundation for training large-scale multi-variate time series models on critical care data and to provide a benchmark for machine learning models in transfer learning across hospitals to study and address distribution shift challenges.
no code implementations • 7 Nov 2024 • Boqi Chen, Yuanzhi Zhu, Yunke Ao, Sebastiano Caprara, Reto Sutter, Gunnar Rätsch, Ender Konukoglu, Anna Susmelj
Single-source domain generalization (SDG) aims to learn a model from a single source domain that can generalize well on unseen target domains.
no code implementations • 26 Oct 2024 • Sam Houliston, Alizée Pace, Alexander Immer, Gunnar Rätsch
Aligning Large Language Models (LLMs) to human preferences in content, style, and presentation is challenging, in part because preferences are varied, context-dependent, and sometimes inherently ambiguous.
no code implementations • 18 Jul 2024 • Fedor Sergeev, Paola Malsot, Gunnar Rätsch, Vincent Fortuin
Knowing which features of a multivariate time series to measure and when is a key task in medicine, wearables, and robotics.
no code implementations • 26 Jun 2024 • Alizée Pace, Bernhard Schölkopf, Gunnar Rätsch, Giorgia Ramponi
Drawing on insights from both the offline RL and the preference-based RL literature, our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy.
no code implementations • 27 Mar 2024 • Fabian Baldenweg, Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
Electronic Health Record (EHR) datasets from Intensive Care Units (ICU) contain a diverse set of data modalities.
no code implementations • 19 Mar 2024 • Hugo Yèche, Manuel Burger, Dinara Veshchezerova, Gunnar Rätsch
This study advances Early Event Prediction (EEP) in healthcare through Dynamic Survival Analysis (DSA), offering a novel approach by integrating risk localization into alarm policies to enhance clinical event metrics.
1 code implementation • 6 Dec 2023 • Kacper Kapuśniak, Manuel Burger, Gunnar Rätsch, Amir Joudaki
The rapid expansion of genomic sequence data calls for new methods to achieve robust sequence representations.
no code implementations • 15 Nov 2023 • Rita Kuznetsova, Alizée Pace, Manuel Burger, Hugo Yèche, Gunnar Rätsch
Recent advances in deep learning for tabular data now surpass these classical methods by better handling the severe heterogeneity of input features.
no code implementations • 13 Nov 2023 • Samyak Jain, Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
Intensive Care Units (ICU) require comprehensive patient data integration for enhanced clinical outcome predictions, crucial for assessing patient conditions.
1 code implementation • 1 Nov 2023 • Yurong Hu, Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
In research areas with scarce data, representation learning plays a significant role.
1 code implementation • 3 Oct 2023 • Alexandru Meterez, Amir Joudaki, Francesco Orabona, Alexander Immer, Gunnar Rätsch, Hadi Daneshmand
We answer this question in the affirmative by giving a particular construction of a Multi-Layer Perceptron (MLP) with linear activations and batch normalization that provably has bounded gradients at any depth.
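The role batch normalization plays at depth can be illustrated with a toy forward pass. This is a sketch only, not the paper's construction, and it shows activation-scale stabilization rather than the gradient bound itself; all sizes and initializations below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_norm(h, eps=1e-5):
    # Normalize each feature over the batch to zero mean, unit variance.
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def forward(x, depth, width, use_bn):
    h = x
    for _ in range(depth):
        W = rng.normal(scale=2.0 / np.sqrt(width), size=(width, width))
        h = h @ W                      # linear activation: no nonlinearity
        if use_bn:
            h = batch_norm(h)
    return h

x = rng.normal(size=(64, 32))          # batch of 64, width 32
plain = forward(x, depth=50, width=32, use_bn=False)
bn = forward(x, depth=50, width=32, use_bn=True)

print(np.linalg.norm(plain))           # explodes with depth
print(np.linalg.norm(bn))              # stays at roughly sqrt(batch * width)
```

Without normalization, each layer here multiplies the activation norm by about two, so fifty layers blow it up by many orders of magnitude; batch normalization resets every feature to unit variance after each layer, keeping the scale fixed at any depth.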
1 code implementation • 10 Jul 2023 • Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge.
1 code implementation • 6 Jun 2023 • Alexander Immer, Tycho F. A. van der Ouderaa, Mark van der Wilk, Gunnar Rätsch, Bernhard Schölkopf
Recent works show that Bayesian model selection with Laplace approximations allows such hyperparameters to be optimized just like standard neural network parameters, using gradients and only the training data.
no code implementations • 1 Jun 2023 • Alizée Pace, Hugo Yèche, Bernhard Schölkopf, Gunnar Rätsch, Guy Tennenholtz
A prominent challenge of offline reinforcement learning (RL) is the issue of hidden confounding: unobserved variables may influence both the actions taken by the agent and the observed outcomes.
1 code implementation • 26 May 2023 • Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin
Neural additive models (NAMs) enhance the transparency of deep neural networks by handling input features in separate additive sub-networks.
no code implementations • 6 Dec 2022 • Severin Husmann, Hugo Yèche, Gunnar Rätsch, Rita Kuznetsova
Understanding deep learning model behavior is critical to accepting machine learning-based decision support systems in the medical community.
1 code implementation • 29 Aug 2022 • Hugo Yèche, Alizée Pace, Gunnar Rätsch, Rita Kuznetsova
TLS reduces the number of missed events by up to a factor of two over previously used approaches in early event prediction.
Ranked #1 on Respiratory Failure on HiRID
1 code implementation • 26 Feb 2022 • Gideon Dresdner, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello, Volkan Cevher, Alp Yurtsever
We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
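For intuition, the classical (deterministic, smooth) conditional gradient step over the probability simplex can be sketched as follows. The stochastic and non-smooth extensions of the paper are not shown, and the problem instance is made up.

```python
import numpy as np

# Minimize f(x) = 0.5 * ||A x - b||^2 over the probability simplex
# with the conditional gradient (Frank-Wolfe) method.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

x = np.ones(5) / 5                     # feasible start: uniform point
for t in range(200):
    grad = A.T @ (A @ x - b)           # gradient of the smooth objective
    s = np.zeros(5)
    s[np.argmin(grad)] = 1.0           # linear minimization oracle: a simplex vertex
    gamma = 2.0 / (t + 2)              # standard step-size schedule
    x = (1 - gamma) * x + gamma * s    # convex combination stays feasible

print(x.sum())                         # ~1.0: the iterate never leaves the simplex
```

The appeal of CGM is visible here: each step only needs a linear minimization over the constraint set (picking a vertex), never a projection, and feasibility is preserved by construction.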
1 code implementation • 22 Feb 2022 • Alexander Immer, Tycho F. A. van der Ouderaa, Gunnar Rätsch, Vincent Fortuin, Mark van der Wilk
We develop a convenient gradient-based method for selecting the data augmentation without validation data during training of a deep neural network.
1 code implementation • NeurIPS Datasets and Benchmarks 2021 • Hugo Yèche, Rita Kuznetsova, Marc Zimmermann, Matthias Hüser, Xinrui Lyu, Martin Faltys, Gunnar Rätsch
The recent success of machine learning methods applied to time series collected from Intensive Care Units (ICU) exposes the lack of standardized machine learning benchmarks for developing and comparing such methods.
Ranked #1 on Remaining Length of Stay on HiRID
1 code implementation • 9 Jun 2021 • Hugo Yèche, Gideon Dresdner, Francesco Locatello, Matthias Hüser, Gunnar Rätsch
Intensive care units (ICU) are increasingly looking towards machine learning for methods to provide online monitoring of critically ill patients.
no code implementations • 19 May 2021 • Gideon Dresdner, Saurav Shekhar, Fabian Pedregosa, Francesco Locatello, Gunnar Rätsch
Variational Inference makes a trade-off between the capacity of the variational family and the tractability of finding an approximate posterior distribution.
no code implementations • 12 May 2021 • Matthias Hüser, Martin Faltys, Xinrui Lyu, Chris Barber, Stephanie L. Hyland, Tobias M. Merz, Gunnar Rätsch
The development of respiratory failure is common among patients in intensive care units (ICU).
1 code implementation • 11 Apr 2021 • Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan
Marginal-likelihood-based model selection, though promising, is rarely used in deep learning due to estimation difficulties.
1 code implementation • NeurIPS Workshop ICBINB 2020 • Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference.
no code implementations • Approximate Inference AABI Symposium 2022 • Simon Bing, Vincent Fortuin, Gunnar Rätsch
While many models have been introduced to learn such disentangled representations, only few attempt to explicitly exploit the structure of sequential data.
no code implementations • 2 Nov 2020 • Jonathan Heitz, Joanna Ficek, Martin Faltys, Tobias M. Merz, Gunnar Rätsch, Matthias Hüser
Dynamic assessment of mortality risk in the intensive care unit (ICU) can be used to stratify patients, inform about treatment effectiveness or serve as part of an early-warning system.
no code implementations • 27 Oct 2020 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The idea behind the \emph{unsupervised} learning of \emph{disentangled} representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.
1 code implementation • 26 Oct 2020 • Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch
Conventional variational autoencoders fail in modeling correlations between data points due to their use of factorized priors.
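The contrast between a factorized prior and a prior that couples data points can be sketched by sampling latents both ways. This is a toy illustration with an assumed RBF kernel, not the paper's model; the lengthscale and grid are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)

# Factorized prior: every latent drawn independently -> no correlation
z_indep = rng.normal(size=50)

# GP prior: an RBF kernel over the index set couples nearby data points
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.1 ** 2)
L = np.linalg.cholesky(K + 1e-6 * np.eye(50))
z_gp = L @ rng.normal(size=50)

# Nearby latents under the GP prior move together; independent ones do not.
print(np.corrcoef(z_gp[:-1], z_gp[1:])[0, 1])      # close to 1
print(np.corrcoef(z_indep[:-1], z_indep[1:])[0, 1])  # close to 0
```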
no code implementations • 28 Jul 2020 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.
no code implementations • ICLR Workshop LLD 2019 • Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem
Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow disentangled representations to be learned consistently.
3 code implementations • ICML 2020 • Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen
Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.
no code implementations • Approximate Inference AABI Symposium 2019 • Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt
Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.
2 code implementations • 3 Oct 2019 • Laura Manduchi, Matthias Hüser, Julia Vogt, Gunnar Rätsch, Vincent Fortuin
We show that DPSOM achieves superior clustering performance compared to current deep clustering methods on MNIST/Fashion-MNIST, while maintaining the favourable visualization properties of SOMs.
1 code implementation • 28 Sep 2019 • Andreas Georgiou, Vincent Fortuin, Harun Mustafa, Gunnar Rätsch
We therefore aim to develop a more memory-efficient technique for taxonomic classification.
no code implementations • 25 Sep 2019 • Laura Manduchi, Matthias Hüser, Gunnar Rätsch, Vincent Fortuin
On the one hand, there are highly performant deep clustering models; on the other, there are interpretable representation learning techniques that often rely on latent topological structures such as self-organizing maps.
no code implementations • 25 Sep 2019 • Andreas Georgiou, Vincent Fortuin, Harun Mustafa, Gunnar Rätsch
Of particular interest is the determination of the distribution of the taxa of microbes in metagenomic samples.
3 code implementations • 9 Jul 2019 • Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt
Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.
no code implementations • 3 May 2019 • Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem
Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow disentangled representations to be learned consistently.
no code implementations • 29 Apr 2019 • Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch
To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.
no code implementations • 16 Apr 2019 • Stephanie L. Hyland, Martin Faltys, Matthias Hüser, Xinrui Lyu, Thomas Gumbsch, Cristóbal Esteban, Christian Bock, Max Horn, Michael Moor, Bastian Rieck, Marc Zimmermann, Dean Bodenham, Karsten Borgwardt, Gunnar Rätsch, Tobias M. Merz
Intensive care clinicians are presented with large quantities of patient information and measurements from a multitude of monitoring systems.
no code implementations • 23 Jan 2019 • Vincent Fortuin, Heiko Strathmann, Gunnar Rätsch
In meta-learning for Gaussian process models, approaches have mostly focused on learning the kernel function of the prior, but not on learning its mean function.
8 code implementations • ICML 2019 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.
no code implementations • 24 Oct 2018 • Vincent Fortuin, Gideon Dresdner, Heiko Strathmann, Gunnar Rätsch
We explore different techniques for selecting inducing points on discrete domains, including greedy selection, determinantal point processes, and simulated annealing.
1 code implementation • NeurIPS 2018 • Francesco Locatello, Gideon Dresdner, Rajiv Khanna, Isabel Valera, Gunnar Rätsch
Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.
6 code implementations • ICLR 2019 • Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch
We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.
no code implementations • 30 Apr 2018 • Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf
A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.
no code implementations • ICML 2018 • Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi
Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
no code implementations • 5 Aug 2017 • Francesco Locatello, Rajiv Khanna, Joydeep Ghosh, Gunnar Rätsch
Variational inference is a popular technique to approximate a possibly intractable Bayesian posterior with a more tractable one.
6 code implementations • ICLR 2018 • Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch
We also describe novel evaluation methods for GANs, in which we generate a synthetic labelled training dataset, evaluate the performance of a model trained on the synthetic data on a real test set, and vice versa.
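The train-on-synthetic, test-on-real protocol (and its test-on-synthetic counterpart) can be sketched with stand-in pieces. The toy data generator and the nearest-centroid classifier below are illustrative assumptions, not the paper's GAN or downstream model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift):
    # Two-class toy data; `shift` separates the class means.
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + shift * y[:, None]
    return X, y

def nearest_centroid_acc(X_train, y_train, X_test, y_test):
    # Minimal classifier: assign each point to the closer class centroid.
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    pred = (np.linalg.norm(X_test - c1, axis=1)
            < np.linalg.norm(X_test - c0, axis=1)).astype(int)
    return (pred == y_test).mean()

real_X, real_y = make_data(500, shift=3.0)    # stands in for real data
synth_X, synth_y = make_data(500, shift=3.0)  # stands in for generated samples

# TSTR: train on synthetic, test on real; TRTS: the reverse.
tstr = nearest_centroid_acc(synth_X, synth_y, real_X, real_y)
trts = nearest_centroid_acc(real_X, real_y, synth_X, synth_y)
print(tstr, trts)   # high accuracy suggests the synthetic data is useful
```

If the generated distribution matches the real one, a model trained on either side transfers to the other; a large gap between the two accuracies flags distributional mismatch.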
no code implementations • NeurIPS 2017 • Francesco Locatello, Michael Tschannen, Gunnar Rätsch, Martin Jaggi
Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees.
1 code implementation • 17 Jul 2016 • Stephanie L. Hyland, Gunnar Rätsch
A major challenge in the training of recurrent neural networks is the so-called vanishing or exploding gradient problem.
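The problem is easy to see in a linear recurrence (a toy sketch, not the paper's method): backpropagated gradients are repeatedly multiplied by the transpose of the recurrent matrix, so their norm decays or explodes geometrically unless the matrix is norm-preserving, e.g. orthogonal or unitary. The matrix scales below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32

# A random recurrent matrix with spectral radius well below one (roughly
# 0.5 here by the circular law) shrinks gradients at every time step; an
# orthogonal matrix preserves their norm exactly.
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # orthogonal factor from QR

g = rng.normal(size=n)                          # gradient at the last step
g_plain, g_orth = g.copy(), g.copy()
for _ in range(100):                            # backprop through 100 steps
    g_plain = W.T @ g_plain
    g_orth = Q.T @ g_orth

print(np.linalg.norm(g_plain))   # shrinks toward zero: vanishing gradient
print(np.linalg.norm(g_orth))    # unchanged: orthogonal maps preserve norm
```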
no code implementations • 10 Feb 2016 • Stephanie L. Hyland, Theofanis Karaletsos, Gunnar Rätsch
Identifying relationships between concepts is a key aspect of scientific knowledge synthesis.
no code implementations • 1 Oct 2015 • Stephanie L. Hyland, Theofanis Karaletsos, Gunnar Rätsch
We propose a generative model which integrates evidence from diverse data sources, enabling the sharing of semantic information.
no code implementations • 30 Jun 2015 • Christian Widmer, Marius Kloft, Vipin T Sreedharan, Gunnar Rätsch
We present a general regularization-based framework for multi-task learning (MTL), in which the similarity between tasks can be learned or refined using $\ell_p$-norm multiple kernel learning (MKL).
no code implementations • 16 Jun 2015 • Theofanis Karaletsos, Serge Belongie, Gunnar Rätsch
Representation learning systems typically rely on massive amounts of labeled data in order to be trained to high accuracy.
no code implementations • 28 May 2015 • Theofanis Karaletsos, Gunnar Rätsch
A recurring problem when building probabilistic latent variable models is regularization and model selection, for instance, the choice of the dimensionality of the latent space.
no code implementations • 14 Apr 2015 • Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch
We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.
no code implementations • 17 Sep 2013 • Christian Widmer, Philipp Drewe, Xinghua Lou, Shefali Umrania, Stephanie Heinrich, Gunnar Rätsch
Analysis of microscopy images can provide insight into many biological processes.
no code implementations • NeurIPS 2011 • Nico Goernitz, Christian Widmer, Georg Zeller, Andre Kahles, Gunnar Rätsch, Sören Sonnenburg
We present a novel regularization-based Multitask Learning (MTL) formulation for Structured Output (SO) prediction for the case of hierarchical task relations.
no code implementations • NeurIPS 2008 • Gabriele Schweikert, Gunnar Rätsch, Christian Widmer, Bernhard Schölkopf
We study the problem of domain transfer for a supervised classification task in mRNA splicing.