Search Results for author: Imant Daunhawer

Found 6 papers, 3 papers with code

Benchmarking the Fairness of Image Upsampling Methods

no code implementations · 24 Jan 2024 · Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer

Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.

Benchmarking · Fairness

Identifiability Results for Multimodal Contrastive Learning

1 code implementation · 16 Mar 2023 · Imant Daunhawer, Alice Bizeul, Emanuele Palumbo, Alexander Marx, Julia E. Vogt

Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables.

Contrastive Learning · Representation Learning

How Robust is Unsupervised Representation Learning to Distribution Shift?

no code implementations · 17 Jun 2022 · Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal

As such, there is a lack of insight into the robustness of representations learned by unsupervised methods, such as self-supervised learning (SSL) and autoencoder-based (AE) algorithms, to distribution shift.

Representation Learning · Self-Supervised Learning

On the Limitations of Multimodal VAEs

no code implementations · NeurIPS Workshop ICBINB 2021 · Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, Julia E. Vogt

Multimodal variational autoencoders (VAEs) have shown promise as efficient generative models for weakly-supervised data.

Generalized Multimodal ELBO

1 code implementation · ICLR 2021 · Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt

Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research.

Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence

1 code implementation · NeurIPS 2020 · Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt

Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena.