1 code implementation • 9 May 2025 • Moritz Vandenhirtz, Julia E. Vogt
Understanding the decision-making process of machine learning models provides valuable insights into the task, the data, and the reasons behind a model's failures.
no code implementations • 24 Mar 2025 • Boqi Chen, Cédric Vincent-Cuaz, Lydia A. Schoenpflug, Manuel Madeira, Lisa Fournier, Vaishnavi Subramanian, Sonali Andani, Samuel Ruiperez-Campillo, Julia E. Vogt, Raphaëlle Luisier, Dorina Thanou, Viktor H. Koelzer, Pascal Frossard, Gabriele Campanella, Gunnar Rätsch
Vision foundation models (FMs) are accelerating the development of digital pathology algorithms and transforming biomedical research.
1 code implementation • 10 Feb 2025 • Alice Bizeul, Thomas Sutter, Alain Ryser, Bernhard Schölkopf, Julius von Kügelgen, Julia E. Vogt
We thus posit that predicting masked from visible components involves more high-level features, allowing our masking strategy to extract more useful representations.
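A minimal sketch of the underlying idea, assuming the masking happens in a PCA basis rather than in pixel space; the function below is illustrative and not the paper's API:

```python
import numpy as np

def mask_components(X, mask_ratio=0.75, rng=None):
    """Split a centered data matrix into visible and masked PCA components.

    Returns the projection onto visible components (the model input) and
    the projection onto masked components (the prediction target).
    """
    rng = np.random.default_rng(rng)
    # PCA via SVD: rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    k = Vt.shape[0]
    masked = rng.choice(k, size=int(mask_ratio * k), replace=False)
    visible = np.setdiff1d(np.arange(k), masked)
    X_visible = X @ Vt[visible].T @ Vt[visible]
    X_masked = X @ Vt[masked].T @ Vt[masked]
    return X_visible, X_masked

X = np.random.randn(256, 64)
X -= X.mean(axis=0)          # center before PCA
inp, target = mask_components(X)
```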
1 code implementation • 15 Nov 2024 • Andrea Agostini, Daphné Chopard, Yang Meng, Norbert Fortin, Babak Shahbaba, Stephan Mandt, Thomas M. Sutter, Julia E. Vogt
Multimodal data integration and label scarcity pose significant challenges for machine learning in medical settings.
1 code implementation • 14 Nov 2024 • Daphné Chopard, Sonia Laguna, Kieran Chin-Cheong, Annika Dietz, Anna Badura, Sven Wellmann, Julia E. Vogt
General movements (GMs) are spontaneous, coordinated body movements in infants that offer valuable insights into the developing nervous system.
no code implementations • 29 Oct 2024 • Patrik Reizinger, Alice Bizeul, Attila Juhos, Julia E. Vogt, Randall Balestriero, Wieland Brendel, David Klindt
Second, we show that on DisLib, a widely-used disentanglement benchmark, simple classification tasks recover latent structures up to linear transformations.
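One way to check recovery up to linear transformations, as a hedged sketch: fit a linear map from the learned representations to the ground-truth latents and report the R² score.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def linear_recovery_score(z_learned, z_true):
    """R^2 of the best linear map from learned to true latents; a score
    near 1 means the latents are recovered up to a linear transformation."""
    reg = LinearRegression().fit(z_learned, z_true)
    return r2_score(z_true, reg.predict(z_learned))

# Sanity check: a random rotation of the true latents scores ~1.0.
rng = np.random.default_rng(0)
z_true = rng.normal(size=(1000, 5))
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
print(linear_recovery_score(z_true @ Q, z_true))
```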
1 code implementation • 24 Oct 2024 • Alba Carballo-Castro, Sonia Laguna, Moritz Vandenhirtz, Julia E. Vogt
Concept-based machine learning methods have increasingly gained importance due to the growing interest in making neural networks interpretable.
1 code implementation • 22 Oct 2024 • Jorge da Silva Goncalves, Laura Manduchi, Moritz Vandenhirtz, Julia E. Vogt
This paper addresses this gap by introducing TreeDiffusion, a deep generative model that conditions Diffusion Models on hierarchical clusters to obtain high-quality, cluster-specific generations.
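A toy sketch of cluster-conditional denoising, assuming the cluster assignment from a learned hierarchy enters the denoiser as an embedding; the architecture below is a placeholder, not the paper's:

```python
import torch
import torch.nn as nn

class ClusterConditionedDenoiser(nn.Module):
    """Toy denoiser eps_theta(x_t, t, c) conditioned on a cluster id c."""

    def __init__(self, dim, n_clusters, hidden=128):
        super().__init__()
        self.cluster_emb = nn.Embedding(n_clusters, hidden)
        self.time_emb = nn.Linear(1, hidden)
        self.net = nn.Sequential(
            nn.Linear(dim + 2 * hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t, cluster_id):
        cond = torch.cat(
            [x_t, self.time_emb(t[:, None].float()), self.cluster_emb(cluster_id)],
            dim=-1,
        )
        return self.net(cond)  # predicted noise for this cluster's samples

model = ClusterConditionedDenoiser(dim=32, n_clusters=10)
x_t = torch.randn(8, 32)
t = torch.randint(0, 1000, (8,))
c = torch.randint(0, 10, (8,))
noise_pred = model(x_t, t, c)
```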
no code implementations • 10 Oct 2024 • Emanuele Palumbo, Moritz Vandenhirtz, Alain Ryser, Imant Daunhawer, Julia E. Vogt
The structure of many real-world datasets is intrinsically hierarchical, making the modeling of such hierarchies a critical objective in both unsupervised and supervised machine learning.
1 code implementation • 8 Jul 2024 • Jorge da Silva Goncalves, Laura Manduchi, Moritz Vandenhirtz, Julia E. Vogt
This paper introduces Diffuse-TreeVAE, a deep generative model that integrates hierarchical clustering into the framework of Denoising Diffusion Probabilistic Models (DDPMs).
1 code implementation • 27 Jun 2024 • Moritz Vandenhirtz, Sonia Laguna, Ričards Marcinkevičs, Julia E. Vogt
Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input.
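The bottleneck structure can be summarized in a few lines; this is a generic CBM sketch, not the paper's specific variant:

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """x -> concepts -> y: the label head sees only predicted concepts."""

    def __init__(self, in_dim, n_concepts, n_classes):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        self.label_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        c_logits = self.concept_net(x)                   # concept predictions
        y_logits = self.label_head(torch.sigmoid(c_logits))
        return c_logits, y_logits

# Training combines a concept loss on c_logits with a label loss on
# y_logits, so each bottleneck unit is tied to a human-understandable concept.
```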
1 code implementation • 27 Jun 2024 • Moritz Vandenhirtz, Florian Barkmann, Laura Manduchi, Julia E. Vogt, Valentina Boeva
We propose a novel method, scTree, for single-cell Tree Variational Autoencoders, extending a hierarchical clustering approach to single-cell RNA sequencing data.
no code implementations • 29 May 2024 • Alain Ryser, Thomas M. Sutter, Alexander Marx, Julia E. Vogt
At test time, representations of anomalies that do not adhere to the invariances of normal data then deviate from their respective context cluster.
Tasks: Self-Supervised Anomaly Detection, Self-Supervised Learning
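A hedged sketch of the scoring idea: measure a test representation's distance to the nearest cluster of normal-data representations. The k-means step below is illustrative; the paper's context clusters may be obtained differently:

```python
import numpy as np
from sklearn.cluster import KMeans

# Fit context clusters on representations of normal training data.
z_train = np.random.randn(500, 16)              # stand-in for learned features
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(z_train)

def anomaly_score(z_test):
    """Distance to the nearest context cluster; large = anomalous."""
    d = kmeans.transform(z_test)                # (n, n_clusters) distances
    return d.min(axis=1)

scores = anomaly_score(np.random.randn(10, 16))
```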
no code implementations • 19 Mar 2024 • Kacper Sokol, Julia E. Vogt
Despite significant progress, evaluation of explainable artificial intelligence remains elusive and challenging.
4 code implementations • 8 Mar 2024 • Thomas M. Sutter, Yang Meng, Andrea Agostini, Daphné Chopard, Norbert Fortin, Julia E. Vogt, Babak Shahbaba, Stephan Mandt
Such architectures impose hard constraints on the model.
1 code implementation • 24 Jan 2024 • Sonia Laguna, Ričards Marcinkevičs, Moritz Vandenhirtz, Julia E. Vogt
Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs).
1 code implementation • 24 Jan 2024 • Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer
Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.
1 code implementation • 25 Oct 2023 • Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt
Prototype learning, a popular machine learning method designed for inherently interpretable decisions, leverages similarities to learned prototypes for classifying new data.
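A generic sketch of a prototype-based classifier, where logits are a linear function of similarities to learned prototypes; the paper's architecture will differ in its details:

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Classify by similarity to learned prototypes."""

    def __init__(self, feat_dim, n_prototypes, n_classes):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))
        self.head = nn.Linear(n_prototypes, n_classes)

    def forward(self, z):
        # Similarity = negative squared distance to each prototype.
        sim = -torch.cdist(z, self.prototypes) ** 2
        return self.head(sim)   # logits are linear in prototype similarities

# Each prediction can be explained by which prototypes the input resembles.
```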
1 code implementation • 16 Oct 2023 • Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx
The pointwise mutual information profile, or simply profile, is the distribution of pointwise mutual information for a given pair of random variables.
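For a bivariate Gaussian, the profile can be estimated directly by sampling; this illustrative snippet also checks that the profile's mean equals the closed-form mutual information:

```python
import numpy as np
from scipy import stats

rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
joint = stats.multivariate_normal(mean=[0, 0], cov=cov)
xy = joint.rvs(size=100_000, random_state=0)

# Pointwise mutual information at each sample:
# log p(x, y) - log p(x) - log p(y), with standard normal marginals.
pmi = (
    joint.logpdf(xy)
    - stats.norm.logpdf(xy[:, 0])
    - stats.norm.logpdf(xy[:, 1])
)
# The distribution of `pmi` is the profile; its mean is the mutual information.
print(pmi.mean(), -0.5 * np.log(1 - rho**2))  # should roughly agree
```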
1 code implementation • 7 Sep 2023 • Ece Ozkan, Thomas M. Sutter, Yurong Hu, Sebastian Balzer, Julia E. Vogt
Early detection of cardiac dysfunction through routine screening is vital for diagnosing cardiovascular diseases.
2 code implementations • NeurIPS 2023 • Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx
Mutual information is a general statistical dependency measure which has found applications in representation learning, causality, domain generalization and computational biology.
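As a small illustration of how ground-truth MI can be known exactly: start from a Gaussian pair with closed-form MI and apply invertible transformations to each variable separately, which leaves the MI unchanged. The specific transforms here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7
true_mi = -0.5 * np.log(1 - rho**2)   # closed form for a Gaussian pair

# Sample the Gaussian pair, then apply invertible maps to each variable.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=50_000)
x = np.sinh(z[:, 0])                  # invertible transform of X
y = z[:, 1] ** 3                      # invertible transform of Y

# I(x; y) still equals true_mi: MI is invariant under invertible marginal
# reparametrizations, so estimators can be tested on non-Gaussian pairs
# whose ground-truth MI is known.
```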
no code implementations • 4 Jun 2023 • Kacper Sokol, Julia E. Vogt
Ante-hoc interpretability has become the holy grail of explainable artificial intelligence for high-stakes domains such as healthcare; however, this notion is elusive, lacks a widely accepted definition, and depends on the operational context.
1 code implementation • 31 May 2023 • Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs, Julia E. Vogt
We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss.
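A sketch of focal-style reweighting in this two-classifier setup, assuming the biased classifier's confidence is used to upweight bias-conflicting samples; this shows the flavor of the loss, not the paper's exact scheme:

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits_unbiased, logits_biased, y, gamma=2.0):
    """Upweight samples the biased classifier finds hard (focal-loss style)."""
    with torch.no_grad():
        p_biased = F.softmax(logits_biased, dim=-1).gather(1, y[:, None]).squeeze(1)
        w = (1.0 - p_biased) ** gamma   # low biased confidence -> high weight
    ce = F.cross_entropy(logits_unbiased, y, reduction="none")
    return (w * ce).mean()
```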
1 code implementation • NeurIPS 2023 • Thomas M. Sutter, Alain Ryser, Joram Liebeskind, Julia E. Vogt
Partitioning a set of elements into an unknown number of mutually exclusive subsets is essential in many machine learning problems.
1 code implementation • 16 Mar 2023 • Imant Daunhawer, Alice Bizeul, Emanuele Palumbo, Alexander Marx, Julia E. Vogt
Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables.
1 code implementation • 28 Feb 2023 • Ričards Marcinkevičs, Patricia Reis Wolfertstetter, Ugne Klimiene, Kieran Chin-Cheong, Alyssia Paschke, Julia Zerres, Markus Denzinger, David Niederberger, Sven Wellmann, Ece Ozkan, Christian Knorr, Julia E. Vogt
Appendicitis is among the most frequent reasons for pediatric abdominal surgeries.
no code implementations • 23 Dec 2022 • Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt
Many modern research fields increasingly rely on collecting and analysing massive, often unstructured, and unwieldy datasets.
1 code implementation • 13 Oct 2022 • Alexander Immer, Christoph Schultheiss, Julia E. Vogt, Bernhard Schölkopf, Peter Bühlmann, Alexander Marx
We study the class of location-scale or heteroscedastic noise models (LSNMs), in which the effect $Y$ can be written as a function of the cause $X$ and a noise source $N$ independent of $X$, which may be scaled by a positive function $g$ of the cause, i.e., $Y = f(X) + g(X)N$.
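A short simulation of an LSNM under this definition; the choices of $f$ and $g$ are arbitrary, and cause-effect inference would compare how well such a model fits in each direction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
x = rng.normal(size=n)                 # cause X
noise = rng.normal(size=n)             # N, independent of X

f = lambda x: np.tanh(2 * x)           # mean function f
g = lambda x: 0.5 + x**2               # positive scale function g

y = f(x) + g(x) * noise                # LSNM: Y = f(X) + g(X) N

# Direction inference then asks whether an LSNM fits better from X to Y
# than from Y to X, e.g. by comparing heteroscedastic regression likelihoods.
```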
1 code implementation • 26 Jul 2022 • Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt
In addition, we compare several intra- and post-processing approaches applied to debiasing deep chest X-ray classifiers.
1 code implementation • 30 Jun 2022 • Alain Ryser, Laura Manduchi, Fabian Laumer, Holger Michel, Sven Wellmann, Julia E. Vogt
The introduced method takes advantage of the periodic nature of the heart cycle to learn three variants of a variational latent trajectory model (TVAE).
no code implementations • 17 Jun 2022 • Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal
As such, there is a lack of insight into the robustness of representations learned by unsupervised methods, such as self-supervised learning (SSL) and autoencoder-based algorithms (AEs), to distribution shift.
1 code implementation • 3 Mar 2022 • Thomas M. Sutter, Laura Manduchi, Alain Ryser, Julia E. Vogt
We introduce reparameterizable gradients to learn the importance between groups and highlight the advantage of explicitly learning the size of subsets in two typical applications: weakly-supervised learning and clustering.
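A minimal sketch of reparameterizable subset sampling via the binary Concrete (Gumbel-sigmoid) relaxation; the paper's construction for learning subset sizes is more involved, so this only illustrates how gradients can flow through discrete membership:

```python
import torch

def relaxed_bernoulli_mask(logits, temperature=0.5):
    """Binary Concrete relaxation: differentiable soft subset membership."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log1p(-u)
    return torch.sigmoid((logits + logistic_noise) / temperature)

logits = torch.zeros(10, requires_grad=True)   # learnable inclusion scores
mask = relaxed_bernoulli_mask(logits)          # soft element memberships
(mask.sum() - 3.0).pow(2).backward()           # e.g. encourage subsets of size 3
```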
no code implementations • NeurIPS Workshop ICBINB 2021 • Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, Julia E. Vogt
Multimodal variational autoencoders (VAEs) have shown promise as efficient generative models for weakly-supervised data.
1 code implementation • NeurIPS 2021 • Laura Manduchi, Kieran Chin-Cheong, Holger Michel, Sven Wellmann, Julia E. Vogt
Constrained clustering has gained significant attention in the field of machine learning as it can leverage prior information on a growing amount of only partially labeled data.
1 code implementation • ICLR 2022 • Laura Manduchi, Ričards Marcinkevičs, Michela C. Massi, Thomas Weikert, Alexander Sauter, Verena Gotta, Timothy Müller, Flavio Vasella, Marian C. Neidert, Marc Pfister, Bram Stieltjes, Julia E. Vogt
In this work, we study the problem of clustering survival data, a challenging and so far under-explored task.
1 code implementation • ICLR 2021 • Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt
Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research.
1 code implementation • ICLR 2021 • Ričards Marcinkevičs, Julia E. Vogt
Exploratory analysis of time series data can yield a better understanding of complex dynamical systems.
no code implementations • 3 Dec 2020 • Ričards Marcinkevičs, Julia E. Vogt
In this review, we examine the problem of designing interpretable and explainable machine learning models.
Tasks: Counterfactual Explanation, Explainable Artificial Intelligence
1 code implementation • NeurIPS 2020 • Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt
Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena.
no code implementations • 5 Jun 2020 • Kieran Chin-Cheong, Thomas Sutter, Julia E. Vogt
In this work, we explore using Generative Adversarial Networks to generate synthetic, heterogeneous EHRs with the goal of using these synthetic records in place of existing data sets for downstream classification tasks.
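A minimal GAN training step for flattened tabular records, shown only to illustrate the setup; the architectures, sizes, and losses here are placeholders rather than the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 32, 64   # data_dim = width of one flattened record

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

def gan_step(real, opt_G, opt_D):
    # Train the discriminator to separate real from synthetic records.
    fake = G(torch.randn(real.size(0), latent_dim))
    loss_D = (
        F.binary_cross_entropy_with_logits(D(real), torch.ones(real.size(0), 1))
        + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(real.size(0), 1))
    )
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    # Train the generator to fool the discriminator.
    loss_G = F.binary_cross_entropy_with_logits(D(fake), torch.ones(real.size(0), 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```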
no code implementations • 29 Apr 2019 • Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch
To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.
no code implementations • 14 Apr 2015 • Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch
We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.