Search Results for author: Julia E. Vogt

Found 42 papers, 29 papers with code

From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection

1 code implementation · 9 May 2025 · Moritz Vandenhirtz, Julia E. Vogt

Understanding the decision-making process of machine learning models provides valuable insights into the task, the data, and the reasons behind a model's failures.

Decision Making · feature selection

From Pixels to Components: Eigenvector Masking for Visual Representation Learning

1 code implementation · 10 Feb 2025 · Alice Bizeul, Thomas Sutter, Alain Ryser, Bernhard Schölkopf, Julius von Kügelgen, Julia E. Vogt

We thus posit that predicting masked from visible components involves more high-level features, allowing our masking strategy to extract more useful representations.

image-classification · Image Classification +1
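
A sketch of the component-masking idea on placeholder data (not the authors' implementation): compute a PCA basis, hide a random subset of components, and treat the hidden components as the prediction target.

```python
# Illustrative sketch (not the paper's code): mask principal components of an
# image instead of pixels; the self-supervised task is then to predict the
# masked components from the visible ones.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.normal(size=(1000, 64))          # stand-in for flattened images
pca = PCA(n_components=64).fit(images)
components = pca.transform(images)            # per-image eigenvector coefficients

mask = rng.random(components.shape[1]) < 0.5  # randomly mask half of the components
visible = np.where(mask, 0.0, components)     # what an encoder would see
target = components[:, mask]                  # what it would be trained to predict

# Reconstruct the "visible-only" view of the first image in pixel space.
visible_image = pca.inverse_transform(visible[:1])
print(visible.shape, target.shape, visible_image.shape)
```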

Automatic Classification of General Movements in Newborns

1 code implementation · 14 Nov 2024 · Daphné Chopard, Sonia Laguna, Kieran Chin-Cheong, Annika Dietz, Anna Badura, Sven Wellmann, Julia E. Vogt

General movements (GMs) are spontaneous, coordinated body movements in infants that offer valuable insights into the developing nervous system.

Classification

Cross-Entropy Is All You Need To Invert the Data Generating Process

no code implementations · 29 Oct 2024 · Patrik Reizinger, Alice Bizeul, Attila Juhos, Julia E. Vogt, Randall Balestriero, Wieland Brendel, David Klindt

Second, we show that on DisLib, a widely-used disentanglement benchmark, simple classification tasks recover latent structures up to linear transformations.

All · Disentanglement +1
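
Recovery "up to linear transformations" is typically quantified by fitting a linear map from the learned representation to the ground-truth latents; below is a synthetic-data sketch of that check, not the paper's experimental code.

```python
# Illustrative check: regress ground-truth latents on a learned representation
# and report R^2; values near 1.0 indicate recovery up to a linear map.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
z_true = rng.normal(size=(2000, 10))                          # synthetic ground-truth latents
A = rng.normal(size=(10, 10))                                 # unknown invertible mixing
z_learned = z_true @ A + 0.01 * rng.normal(size=(2000, 10))   # stand-in for model representations

r2 = LinearRegression().fit(z_learned, z_true).score(z_learned, z_true)
print(f"R^2 of linear map from learned to true latents: {r2:.3f}")
```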

Exploiting Interpretable Capabilities with Concept-Enhanced Diffusion and Prototype Networks

1 code implementation · 24 Oct 2024 · Alba Carballo-Castro, Sonia Laguna, Moritz Vandenhirtz, Julia E. Vogt

Concept-based machine learning methods have increasingly gained importance due to the growing interest in making neural networks interpretable.

Hierarchical Clustering for Conditional Diffusion in Image Generation

1 code implementation · 22 Oct 2024 · Jorge da Silva Goncalves, Laura Manduchi, Moritz Vandenhirtz, Julia E. Vogt

This paper addresses this gap by introducing TreeDiffusion, a deep generative model that conditions Diffusion Models on hierarchical clusters to obtain high-quality, cluster-specific generations.

Clustering · Image Generation

From Logits to Hierarchies: Hierarchical Clustering made Simple

no code implementations · 10 Oct 2024 · Emanuele Palumbo, Moritz Vandenhirtz, Alain Ryser, Imant Daunhawer, Julia E. Vogt

The structure of many real-world datasets is intrinsically hierarchical, making the modeling of such hierarchies a critical objective in both unsupervised and supervised machine learning.

Clustering

Structured Generations: Using Hierarchical Clusters to guide Diffusion Models

1 code implementation · 8 Jul 2024 · Jorge da Silva Goncalves, Laura Manduchi, Moritz Vandenhirtz, Julia E. Vogt

This paper introduces Diffuse-TreeVAE, a deep generative model that integrates hierarchical clustering into the framework of Denoising Diffusion Probabilistic Models (DDPMs).

Clustering · Denoising

Stochastic Concept Bottleneck Models

1 code implementation · 27 Jun 2024 · Moritz Vandenhirtz, Sonia Laguna, Ričards Marcinkevičs, Julia E. Vogt

Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input.
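
A minimal concept-bottleneck sketch with placeholder dimensions: the label head sees only the predicted concepts, never the raw input. The stochastic variant proposed in the paper additionally models dependencies among concepts, which is not shown here.

```python
# Minimal concept-bottleneck sketch (illustrative, not the paper's SCBM code):
# the label is predicted only from intermediate, human-interpretable concepts.
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, n_features=128, n_concepts=10, n_classes=5):
        super().__init__()
        self.concept_net = nn.Linear(n_features, n_concepts)  # x -> concept logits
        self.label_net = nn.Linear(n_concepts, n_classes)     # concepts -> label logits

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_net(x))  # interpretable bottleneck
        return concepts, self.label_net(concepts)

model = ConceptBottleneck()
concepts, logits = model(torch.randn(4, 128))
print(concepts.shape, logits.shape)  # (4, 10) concept scores, (4, 5) class logits
```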

scTree: Discovering Cellular Hierarchies in the Presence of Batch Effects in scRNA-seq Data

1 code implementation · 27 Jun 2024 · Moritz Vandenhirtz, Florian Barkmann, Laura Manduchi, Julia E. Vogt, Valentina Boeva

We propose a novel method, scTree, for single-cell Tree Variational Autoencoders, extending a hierarchical clustering approach to single-cell RNA sequencing data.

Clustering

Anomaly Detection by Context Contrasting

no code implementations · 29 May 2024 · Alain Ryser, Thomas M. Sutter, Alexander Marx, Julia E. Vogt

At test time, representations of anomalies that do not adhere to the invariances of normal data then deviate from their respective context cluster.

Self-Supervised Anomaly Detection · Self-Supervised Learning +1
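
A simplified distance-to-cluster anomaly score in the spirit of this description (cluster representations of normal data, flag points far from every cluster); this is an illustration only, not the paper's contrastive training objective.

```python
# Generic distance-to-cluster anomaly score: anomalies deviate from the
# clusters formed by representations of normal data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal_repr = rng.normal(size=(500, 16))            # representations of normal data
test_repr = np.vstack([rng.normal(size=(5, 16)),    # normal-looking samples
                       rng.normal(loc=6.0, size=(5, 16))])  # far-away "anomalies"

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit(normal_repr)
scores = clusters.transform(test_repr).min(axis=1)  # distance to nearest cluster
print(np.round(scores, 2))                          # larger score = more anomalous
```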

Benchmarking the Fairness of Image Upsampling Methods

1 code implementation · 24 Jan 2024 · Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer

Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.

Benchmarking · Diversity +1

This Reads Like That: Deep Learning for Interpretable Natural Language Processing

1 code implementation · 25 Oct 2023 · Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt

Prototype learning, a popular machine learning method designed for inherently interpretable decisions, leverages similarities to learned prototypes for classifying new data.

Deep Learning · Sentence +1
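
A minimal sketch of prototype-based classification, where inputs are classified from their similarities to learned prototype vectors. The dimensions are placeholders, and the paper's NLP-specific components (sentence-level embeddings, faithfulness of the similarity scores) are not shown.

```python
# Prototype-similarity classifier sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, dim=64, n_prototypes=8, n_classes=2):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))  # learned prototypes
        self.head = nn.Linear(n_prototypes, n_classes)

    def forward(self, embeddings):
        sims = torch.nn.functional.cosine_similarity(
            embeddings.unsqueeze(1), self.prototypes.unsqueeze(0), dim=-1
        )                       # similarity of each input to each prototype
        return self.head(sims)  # classify from prototype similarities only

logits = PrototypeClassifier()(torch.randn(4, 64))
print(logits.shape)  # (4, 2)
```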

On the Properties and Estimation of Pointwise Mutual Information Profiles

1 code implementation · 16 Oct 2023 · Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx

The pointwise mutual information profile, or simply profile, is the distribution of pointwise mutual information for a given pair of random variables.

Mutual Information Estimation · Uncertainty Quantification
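
For a correlated bivariate Gaussian, the PMI profile can be sampled directly from the closed-form densities. The sketch below only illustrates the definition; it is not one of the estimators studied in the paper.

```python
# Monte Carlo sketch of a pointwise mutual information (PMI) profile for a
# correlated bivariate Gaussian.
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
rng = np.random.default_rng(0)
xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)

# pmi(x, y) = log p(x, y) - log p(x) - log p(y); its distribution is the profile.
pmi = (multivariate_normal(mean=[0, 0], cov=cov).logpdf(xy)
       - norm.logpdf(xy[:, 0]) - norm.logpdf(xy[:, 1]))

print("mean PMI (= mutual information):", pmi.mean())
print("closed-form MI:", -0.5 * np.log(1 - rho**2))
print("profile quantiles:", np.quantile(pmi, [0.05, 0.5, 0.95]).round(3))
```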

M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms

1 code implementation · 7 Sep 2023 · Ece Ozkan, Thomas M. Sutter, Yurong Hu, Sebastian Balzer, Julia E. Vogt

Early detection of cardiac dysfunction through routine screening is vital for diagnosing cardiovascular diseases.

Contrastive Learning · Diagnostic

Beyond Normal: On the Evaluation of Mutual Information Estimators

2 code implementations · NeurIPS 2023 · Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx

Mutual information is a general statistical dependency measure which has found applications in representation learning, causality, domain generalization and computational biology.

Benchmarking · Domain Generalization +1

(Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Comprehensibility

no code implementations · 4 Jun 2023 · Kacper Sokol, Julia E. Vogt

Ante-hoc interpretability has become the holy grail of explainable artificial intelligence for high-stakes domains such as healthcare; however, this notion is elusive, lacks a widely-accepted definition and depends on the operational context.

Explainable artificial intelligence · Navigate

Signal Is Harder To Learn Than Bias: Debiasing with Focal Loss

1 code implementation · 31 May 2023 · Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs, Julia E. Vogt

We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss.

Decision Making · Domain Generalization +1
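
The reweighting scheme is "inspired by the focal loss", which down-weights well-classified samples via a (1 - p_t)^gamma factor. The sketch below shows only the standard binary focal loss for reference, not the paper's biased/unbiased VAE pipeline.

```python
# Standard binary focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t).
import torch

def focal_loss(logits, targets, gamma=2.0):
    # targets in {0, 1}; p_t is the predicted probability of the true class.
    p = torch.sigmoid(logits)
    p_t = torch.where(targets == 1, p, 1 - p)
    return (-(1 - p_t) ** gamma * torch.log(p_t.clamp_min(1e-8))).mean()

logits = torch.tensor([2.5, -1.0, 0.3])
targets = torch.tensor([1.0, 0.0, 1.0])
print(focal_loss(logits, targets))  # confidently correct examples contribute little
```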

Differentiable Random Partition Models

1 code implementation · NeurIPS 2023 · Thomas M. Sutter, Alain Ryser, Joram Liebeskind, Julia E. Vogt

Partitioning a set of elements into an unknown number of mutually exclusive subsets is essential in many machine learning problems.

Variational Inference

Identifiability Results for Multimodal Contrastive Learning

1 code implementation · 16 Mar 2023 · Imant Daunhawer, Alice Bizeul, Emanuele Palumbo, Alexander Marx, Julia E. Vogt

Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables.

Contrastive Learning · Representation Learning

Introduction to Machine Learning for Physicians: A Survival Guide for Data Deluge

no code implementations · 23 Dec 2022 · Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt

Many modern research fields increasingly rely on collecting and analysing massive, often unstructured, and unwieldy datasets.

On the Identifiability and Estimation of Causal Location-Scale Noise Models

1 code implementation · 13 Oct 2022 · Alexander Immer, Christoph Schultheiss, Julia E. Vogt, Bernhard Schölkopf, Peter Bühlmann, Alexander Marx

We study the class of location-scale or heteroscedastic noise models (LSNMs), in which the effect $Y$ can be written as a function of the cause $X$ and a noise source $N$ independent of $X$, which may be scaled by a positive function $g$ over the cause, i.e., $Y = f(X) + g(X)N$.

Causal Discovery · Causal Inference
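
A short simulation of the LSNM class defined above, with arbitrary illustrative choices of f and g (not taken from the paper), making the input-dependent noise scale visible.

```python
# Simulating a location-scale noise model Y = f(X) + g(X) * N with noise N
# independent of X; f and g below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.uniform(-2, 2, size=n)
N = rng.normal(size=n)                 # noise independent of X

f = lambda x: np.sin(2 * x)            # location (mean) function
g = lambda x: 0.2 + 0.5 * x**2         # positive scale function

Y = f(X) + g(X) * N

# The conditional variance of Y changes with X, which is what distinguishes
# LSNMs from additive noise models with a constant noise scale.
for lo, hi in [(-2, -1), (-0.5, 0.5), (1, 2)]:
    m = (X >= lo) & (X < hi)
    print(f"Var(Y | X in [{lo}, {hi}]) = {Y[m].var():.2f}")
```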

Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods

1 code implementation · 26 Jul 2022 · Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt

In addition, we compare several intra- and post-processing approaches applied to debiasing deep chest X-ray classifiers.

Attribute · Decision Making +1

Anomaly Detection in Echocardiograms with Dynamic Variational Trajectory Models

1 code implementation · 30 Jun 2022 · Alain Ryser, Laura Manduchi, Fabian Laumer, Holger Michel, Sven Wellmann, Julia E. Vogt

The introduced method takes advantage of the periodic nature of the heart cycle to learn three variants of a variational latent trajectory model (TVAE).

Anomaly Detection

How Robust is Unsupervised Representation Learning to Distribution Shift?

no code implementations · 17 Jun 2022 · Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal

As such, there is a lack of insight into the robustness of the representations learned from unsupervised methods, such as self-supervised learning (SSL) and auto-encoder based algorithms (AE), to distribution shift.

Representation Learning · Self-Supervised Learning

Learning Group Importance using the Differentiable Hypergeometric Distribution

1 code implementation · 3 Mar 2022 · Thomas M. Sutter, Laura Manduchi, Alain Ryser, Julia E. Vogt

We introduce reparameterizable gradients to learn the importance between groups and highlight the advantage of explicitly learning the size of subsets in two typical applications: weakly-supervised learning and clustering.

Clustering · Selection bias +1

On the Limitations of Multimodal VAEs

no code implementations · NeurIPS Workshop ICBINB 2021 · Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, Julia E. Vogt

Multimodal variational autoencoders (VAEs) have shown promise as efficient generative models for weakly-supervised data.

Deep Conditional Gaussian Mixture Model for Constrained Clustering

1 code implementation · NeurIPS 2021 · Laura Manduchi, Kieran Chin-Cheong, Holger Michel, Sven Wellmann, Julia E. Vogt

Constrained clustering has gained significant attention in the field of machine learning as it can leverage prior information on a growing amount of only partially labeled data.

Constrained Clustering · model +1

Generalized Multimodal ELBO

1 code implementation · ICLR 2021 · Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt

Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research.

Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence

1 code implementation · NeurIPS 2020 · Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt

Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena.

Generation of Differentially Private Heterogeneous Electronic Health Records

no code implementations · 5 Jun 2020 · Kieran Chin-Cheong, Thomas Sutter, Julia E. Vogt

In this work, we explore using Generative Adversarial Networks to generate synthetic, heterogeneous EHRs with the goal of using these synthetic records in place of existing data sets for downstream classification tasks.

BIG-bench Machine Learning · Binary Classification +2

Unsupervised Extraction of Phenotypes from Cancer Clinical Notes for Association Studies

no code implementations · 29 Apr 2019 · Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch

To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.

Clustering

Probabilistic Clustering of Time-Evolving Distance Data

no code implementations · 14 Apr 2015 · Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch

We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.

Clustering
