Search Results for author: Julia E. Vogt

Found 30 papers, 19 papers with code

Anomaly Detection by Context Contrasting

no code implementations · 29 May 2024 · Alain Ryser, Thomas M. Sutter, Alexander Marx, Julia E. Vogt

In many real-world applications, however, we do not know what to expect from unseen data and can only leverage knowledge about normal data.

Self-Supervised Anomaly Detection · Self-Supervised Learning · +1

Benchmarking the Fairness of Image Upsampling Methods

1 code implementation · 24 Jan 2024 · Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer

Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.

Benchmarking · Fairness

This Reads Like That: Deep Learning for Interpretable Natural Language Processing

1 code implementation · 25 Oct 2023 · Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt

Prototype learning, a popular machine learning method designed for inherently interpretable decisions, leverages similarities to learned prototypes for classifying new data.

Sentence · Sentence Embeddings
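At inference time, the prototype learning described above reduces to nearest-prototype classification: a new input is assigned the class of the most similar learned prototype, which is what makes the decision inherently interpretable. A toy sketch with made-up prototype vectors (not the paper's model):

```python
import numpy as np

# Hypothetical learned prototypes: one vector per class.
prototypes = np.array([[1.0, 0.0],    # class 0 prototype
                       [0.0, 1.0]])   # class 1 prototype

def classify(x, prototypes):
    """Assign x to the class of its nearest prototype (Euclidean
    distance); the decision can be explained as 'x resembles this
    prototype'."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists))

print(classify(np.array([0.9, 0.2]), prototypes))  # closest to class 0
```

Real prototype networks learn the prototype vectors jointly with an encoder, but the classification rule has this simple form.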

On the Properties and Estimation of Pointwise Mutual Information Profiles

1 code implementation · 16 Oct 2023 · Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx

The pointwise mutual information profile, or simply profile, is the distribution of pointwise mutual information for a given pair of random variables.

Mutual Information Estimation · Uncertainty Quantification
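For discrete variables, the profile described in this abstract has a simple form: it is the distribution of pmi(x, y) = log p(x, y) / (p(x) p(y)) under the joint, and its mean is the mutual information I(X; Y). A minimal NumPy sketch with an illustrative joint distribution (not taken from the paper):

```python
import numpy as np

# Hypothetical discrete joint distribution p(x, y) over a 2x2 alphabet.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)

# Pointwise mutual information for every outcome pair (x, y).
pmi = np.log(p_xy / (p_x * p_y))

# The profile is the distribution of these PMI values under p(x, y);
# averaging it recovers the mutual information I(X; Y) in nats.
mi = float((p_xy * pmi).sum())
print(mi)
```

The paper studies this object in far more general (including continuous) settings; the sketch only illustrates the definition.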

M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms

1 code implementation · 7 Sep 2023 · Ece Ozkan, Thomas M. Sutter, Yurong Hu, Sebastian Balzer, Julia E. Vogt

Early detection of cardiac dysfunction through routine screening is vital for diagnosing cardiovascular diseases.

Contrastive Learning

Beyond Normal: On the Evaluation of Mutual Information Estimators

2 code implementations · NeurIPS 2023 · Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx

Mutual information is a general statistical dependency measure which has found applications in representation learning, causality, domain generalization and computational biology.

Benchmarking · Domain Generalization · +1

(Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Comprehensibility

no code implementations · 4 Jun 2023 · Kacper Sokol, Julia E. Vogt

Ante-hoc interpretability has become the holy grail of explainable artificial intelligence for high-stakes domains such as healthcare; however, this notion is elusive, lacks a widely accepted definition and depends on the operational context.

Explainable Artificial Intelligence · Navigate

Signal Is Harder To Learn Than Bias: Debiasing with Focal Loss

1 code implementation · 31 May 2023 · Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs, Julia E. Vogt

We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss.

Decision Making · Domain Generalization · +1
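The focal loss that inspires the reweighting scheme above is a standard objective (Lin et al.): it down-weights well-classified examples by a factor (1 - p_t)^γ so that training concentrates on hard ones. A minimal sketch of its binary form, independent of the authors' implementation:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t),
    where p is the predicted probability of the positive class and
    y is the label in {0, 1}."""
    p_t = np.where(y == 1, p, 1.0 - p)   # probability of the true class
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

# A confident correct prediction contributes almost nothing...
print(focal_loss(np.array([0.95]), np.array([1])))
# ...while a hard (misclassified) example keeps a large loss.
print(focal_loss(np.array([0.1]), np.array([1])))
```

In the debiasing setting, this weighting amplifies exactly the samples a biased classifier struggles with, which is the intuition the abstract alludes to.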

Differentiable Random Partition Models

1 code implementation · NeurIPS 2023 · Thomas M. Sutter, Alain Ryser, Joram Liebeskind, Julia E. Vogt

Partitioning a set of elements into an unknown number of mutually exclusive subsets is essential in many machine learning problems.

Variational Inference

Identifiability Results for Multimodal Contrastive Learning

1 code implementation · 16 Mar 2023 · Imant Daunhawer, Alice Bizeul, Emanuele Palumbo, Alexander Marx, Julia E. Vogt

Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables.

Contrastive Learning · Representation Learning

Introduction to Machine Learning for Physicians: A Survival Guide for Data Deluge

no code implementations · 23 Dec 2022 · Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt

Many modern research fields increasingly rely on collecting and analysing massive, often unstructured, and unwieldy datasets.

On the Identifiability and Estimation of Causal Location-Scale Noise Models

1 code implementation · 13 Oct 2022 · Alexander Immer, Christoph Schultheiss, Julia E. Vogt, Bernhard Schölkopf, Peter Bühlmann, Alexander Marx

We study the class of location-scale or heteroscedastic noise models (LSNMs), in which the effect $Y$ can be written as a function of the cause $X$ and a noise source $N$ independent of $X$, which may be scaled by a positive function $g$ of the cause, i.e., $Y = f(X) + g(X)N$.

Causal Discovery · Causal Inference
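The generative equation $Y = f(X) + g(X)N$ can be simulated directly, which makes the heteroscedasticity concrete: the noise scale varies with the cause. A sketch with illustrative choices of $f$ and $g$ (not the ones studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative LSNM components (hypothetical, for demonstration only):
f = lambda x: np.sin(x)           # mean function f
g = lambda x: 0.1 + 0.5 * x**2    # positive scale function g

x = rng.normal(size=1000)         # cause X
n = rng.normal(size=1000)         # noise N, independent of X
y = f(x) + g(x) * n               # effect: Y = f(X) + g(X) N

# Heteroscedasticity: the spread of Y grows with |x|, unlike an
# additive noise model where g is constant.
print(np.std(y[np.abs(x) > 1]), np.std(y[np.abs(x) < 0.5]))
```

Identifiability here means that, under suitable conditions, only the causal direction admits such a factorization with $N$ independent of the cause.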

Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods

1 code implementation · 26 Jul 2022 · Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt

In addition, we compare several intra- and post-processing approaches applied to debiasing deep chest X-ray classifiers.

Attribute · Decision Making · +1

Anomaly Detection in Echocardiograms with Dynamic Variational Trajectory Models

1 code implementation · 30 Jun 2022 · Alain Ryser, Laura Manduchi, Fabian Laumer, Holger Michel, Sven Wellmann, Julia E. Vogt

The introduced method takes advantage of the periodic nature of the heart cycle to learn three variants of a variational latent trajectory model (TVAE).

Anomaly Detection

How Robust is Unsupervised Representation Learning to Distribution Shift?

no code implementations · 17 Jun 2022 · Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal

As such, there is a lack of insight into the robustness of the representations learned by unsupervised methods, such as self-supervised learning (SSL) and auto-encoder based algorithms (AE), under distribution shift.

Representation Learning · Self-Supervised Learning

Learning Group Importance using the Differentiable Hypergeometric Distribution

1 code implementation · 3 Mar 2022 · Thomas M. Sutter, Laura Manduchi, Alain Ryser, Julia E. Vogt

We introduce reparameterizable gradients to learn the importance between groups and highlight the advantage of explicitly learning the size of subsets in two typical applications: weakly-supervised learning and clustering.

Clustering · Selection Bias · +1

On the Limitations of Multimodal VAEs

no code implementations · NeurIPS Workshop ICBINB 2021 · Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, Julia E. Vogt

Multimodal variational autoencoders (VAEs) have shown promise as efficient generative models for weakly-supervised data.

Deep Conditional Gaussian Mixture Model for Constrained Clustering

1 code implementation · NeurIPS 2021 · Laura Manduchi, Kieran Chin-Cheong, Holger Michel, Sven Wellmann, Julia E. Vogt

Constrained clustering has gained significant attention in the field of machine learning as it can leverage prior information on a growing amount of only partially labeled data.

Constrained Clustering · Variational Inference

Generalized Multimodal ELBO

1 code implementation · ICLR 2021 · Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt

Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research.

Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence

1 code implementation · NeurIPS 2020 · Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt

Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena.

Generation of Differentially Private Heterogeneous Electronic Health Records

no code implementations · 5 Jun 2020 · Kieran Chin-Cheong, Thomas Sutter, Julia E. Vogt

In this work, we explore using Generative Adversarial Networks to generate synthetic, heterogeneous EHRs with the goal of using these synthetic records in place of existing data sets for downstream classification tasks.

BIG-bench Machine Learning · Binary Classification · +2

Unsupervised Extraction of Phenotypes from Cancer Clinical Notes for Association Studies

no code implementations · 29 Apr 2019 · Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch

To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.


Probabilistic Clustering of Time-Evolving Distance Data

no code implementations · 14 Apr 2015 · Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch

We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.

