Search Results for author: Maximilian Seitzer

Found 9 papers, 8 papers with code

On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks

2 code implementations • ICLR 2022 • Maximilian Seitzer, Arash Tavakoli, Dimitrije Antic, Georg Martius

In this work, we examine this approach and identify potential hazards associated with the use of log-likelihood in conjunction with gradient-based optimizers.
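The hazard this abstract refers to arises when a network predicts both a mean and a variance per input and is trained by minimizing the Gaussian negative log-likelihood: the 1/σ² factor on the squared error suppresses the mean's gradient precisely for the points that are still fit poorly. Below is a minimal PyTorch sketch of that setup, plus a β-weighted variant in the spirit of the paper's proposed fix; it is an illustration, not the authors' released code, and all module names are made up.

```python
# Minimal sketch (illustrative, not the authors' code) of heteroscedastic
# regression trained with the Gaussian negative log-likelihood.
import torch
import torch.nn as nn


class MeanVarianceNet(nn.Module):
    """Predicts a heteroscedastic Gaussian: mean mu(x) and log-variance."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)


def gaussian_nll(mean, logvar, target):
    # Per-sample negative log-likelihood (up to a constant). Note the
    # 1/sigma^2 factor on the squared error: once the predicted variance of a
    # badly fit point grows, the gradient for its mean shrinks, which is the
    # kind of optimization hazard the paper analyzes.
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp())


def beta_nll(mean, logvar, target, beta: float = 0.5):
    # Beta-weighted variant in the spirit of the paper's proposal (as sketched
    # here, an assumption about the exact form): rescale each sample's NLL by a
    # stop-gradient of sigma^(2*beta), interpolating between the standard NLL
    # (beta=0) and an MSE-like loss for the mean (beta=1).
    weight = (beta * logvar).exp().detach()
    return weight * gaussian_nll(mean, logvar, target)


if __name__ == "__main__":
    torch.manual_seed(0)
    net = MeanVarianceNet(in_dim=3)
    x, y = torch.randn(16, 3), torch.randn(16, 1)
    mean, logvar = net(x)
    print("NLL:", gaussian_nll(mean, logvar, y).mean().item())
    print("beta-NLL:", beta_nll(mean, logvar, y).mean().item())
```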

Adversarial and Perceptual Refinement for Compressed Sensing MRI Reconstruction

1 code implementation • 28 Jun 2018 • Maximilian Seitzer, Guang Yang, Jo Schlemper, Ozan Oktay, Tobias Würfl, Vincent Christlein, Tom Wong, Raad Mohiaddin, David Firmin, Jennifer Keegan, Daniel Rueckert, Andreas Maier

In addition, we introduce a semantic interpretability score, measuring the visibility of the region of interest in both ground truth and reconstructed images, which allows us to objectively quantify the usefulness of the image quality for image post-processing and analysis.

MRI Reconstruction • Open-Ended Question Answering
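The semantic interpretability score is only described at a high level in the snippet above; one plausible instantiation (an assumption for illustration, not the paper's exact definition) is to segment the region of interest in the ground-truth and the reconstructed image with the same pre-trained model and report the overlap between the two masks:

```python
# Hedged sketch of a visibility/interpretability-style score based on ROI
# segmentation overlap. The `segment` callable is a hypothetical stand-in for a
# pre-trained segmentation network.
import numpy as np


def dice(mask_a: np.ndarray, mask_b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return float(2.0 * inter / (mask_a.sum() + mask_b.sum() + eps))


def semantic_interpretability(gt_img, recon_img, segment) -> float:
    """Overlap of the ROI found in ground truth vs. reconstruction."""
    return dice(segment(gt_img), segment(recon_img))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt, recon = rng.random((128, 128)), rng.random((128, 128))
    # Toy "segmenter": a simple threshold standing in for a trained model.
    score = semantic_interpretability(gt, recon, lambda im: im > 0.5)
    print(f"toy semantic interpretability score: {score:.3f}")
```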

Self-supervised Visual Reinforcement Learning with Object-centric Representations

1 code implementation • ICLR 2021 • Andrii Zadaianchuk, Maximilian Seitzer, Georg Martius

We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills.

Object • reinforcement-learning +1
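A hedged sketch of what a goal-conditioned attention policy over object-centric (slot) representations could look like follows; shapes and module names are illustrative assumptions, not the released agent.

```python
# Illustrative goal-conditioned attention policy over slots (not the paper's code).
import torch
import torch.nn as nn


class GoalConditionedAttentionPolicy(nn.Module):
    """Attends over object slots using a goal embedding as the query, then maps
    the attended object representation (plus the goal) to an action."""

    def __init__(self, slot_dim: int, goal_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.query = nn.Linear(goal_dim, slot_dim)
        self.policy = nn.Sequential(
            nn.Linear(slot_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, slots, goal):  # slots: (B, K, slot_dim), goal: (B, goal_dim)
        q = self.query(goal).unsqueeze(1)                         # (B, 1, slot_dim)
        attn = (q * slots).sum(-1).softmax(dim=-1).unsqueeze(-1)  # (B, K, 1)
        attended = (attn * slots).sum(dim=1)                      # (B, slot_dim)
        return self.policy(torch.cat([attended, goal], dim=-1))


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = GoalConditionedAttentionPolicy(slot_dim=32, goal_dim=32, action_dim=4)
    slots, goal = torch.randn(2, 5, 32), torch.randn(2, 32)
    print(policy(slots, goal).shape)  # torch.Size([2, 4])
```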

Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities

1 code implementation • NeurIPS 2023 • Andrii Zadaianchuk, Maximilian Seitzer, Georg Martius

Recently, it was shown that the reconstruction of pre-trained self-supervised features leads to object-centric representations on unconstrained real-world image datasets.

Object • Object Discovery
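The snippet above refers to reconstructing frozen self-supervised features (rather than pixels) from a slot-based grouping of the image. Below is a simplified, hedged sketch of such a feature-reconstruction objective; the module names and shapes are assumptions, and the temporal-similarity prediction that is this paper's actual contribution is not reproduced here.

```python
# Illustrative feature-reconstruction objective over slots (not the released model).
import torch
import torch.nn as nn


class SlotFeatureDecoder(nn.Module):
    """Decodes each slot to all patch features plus an alpha weight, then mixes
    the slot predictions per patch (a common broadcast-style decoder)."""

    def __init__(self, slot_dim: int, num_patches: int, feat_dim: int):
        super().__init__()
        self.num_patches, self.feat_dim = num_patches, feat_dim
        self.mlp = nn.Sequential(
            nn.Linear(slot_dim, 256), nn.ReLU(),
            nn.Linear(256, num_patches * (feat_dim + 1)),
        )

    def forward(self, slots):  # slots: (batch, num_slots, slot_dim)
        out = self.mlp(slots).view(*slots.shape[:2], self.num_patches, self.feat_dim + 1)
        feats, alpha = out[..., :-1], out[..., -1:].softmax(dim=1)
        return (alpha * feats).sum(dim=1)  # (batch, num_patches, feat_dim)


def feature_reconstruction_loss(slots, frozen_patch_feats, decoder):
    # Penalize the squared error against the frozen self-supervised patch
    # features (e.g. ViT patch tokens) instead of raw pixels.
    return ((decoder(slots) - frozen_patch_feats) ** 2).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    B, K, D_slot, P, D_feat = 2, 4, 64, 196, 384
    decoder = SlotFeatureDecoder(D_slot, P, D_feat)
    slots = torch.randn(B, K, D_slot)         # from a slot-attention grouping module
    patch_feats = torch.randn(B, P, D_feat)   # from a frozen self-supervised backbone
    print(feature_reconstruction_loss(slots, patch_feats, decoder).item())
```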

NeurIPS 2019 Disentanglement Challenge: Improved Disentanglement through Aggregated Convolutional Feature Maps

1 code implementation • 23 Feb 2020 • Maximilian Seitzer

This report accompanies our stage 1 submission to the NeurIPS 2019 disentanglement challenge and presents a simple image preprocessing method for training VAEs, leading to improved disentanglement compared to directly using the images.

Disentanglement
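The preprocessing described above replaces raw images with aggregated convolutional feature maps from a pre-trained network before training the VAE. The sketch below illustrates one way to do that; the choice of backbone, layer, and mean pooling is an assumption for illustration, not the report's exact recipe.

```python
# Illustrative preprocessing: pre-trained conv features, spatially aggregated.
import torch
from torchvision import models


def aggregated_feature_maps(images: torch.Tensor, extractor: torch.nn.Module) -> torch.Tensor:
    """images: (batch, 3, H, W) -> (batch, channels) by mean-pooling each feature map."""
    with torch.no_grad():
        feats = extractor(images)   # (batch, C, h, w) convolutional feature maps
    return feats.mean(dim=(2, 3))   # spatial aggregation -> (batch, C)


if __name__ == "__main__":
    # Truncated VGG as the (assumed) pre-trained feature extractor.
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
    x = torch.rand(4, 3, 64, 64)
    z_in = aggregated_feature_maps(x, vgg)
    print(z_in.shape)  # these vectors would be the VAE's training inputs
```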

NeurIPS 2019 Disentanglement Challenge: Improved Disentanglement through Learned Aggregation of Convolutional Feature Maps

1 code implementation • 27 Feb 2020 • Maximilian Seitzer, Andreas Foltyn, Felix P. Kemeth

This report accompanies our stage 2 submission to the NeurIPS 2019 disentanglement challenge and presents a simple image preprocessing method for learning disentangled latent factors.

Disentanglement • Inductive Bias +1
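As the stage-2 title suggests, the difference here is a learned aggregation of the feature maps rather than a fixed pooling. A hedged sketch of one such learned aggregation (attention-style weights over spatial locations, an illustrative assumption rather than the report's method):

```python
# Illustrative learned spatial aggregation of conv feature maps.
import torch
import torch.nn as nn


class LearnedSpatialAggregation(nn.Module):
    """Pools (B, C, H, W) feature maps to (B, C) with learned per-location weights."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one weight per location

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        weights = self.score(feats).flatten(2).softmax(dim=-1)  # (B, 1, H*W)
        return (feats.flatten(2) * weights).sum(dim=-1)         # (B, C)


if __name__ == "__main__":
    agg = LearnedSpatialAggregation(channels=256)
    print(agg(torch.randn(4, 256, 16, 16)).shape)  # torch.Size([4, 256])
```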

DyST: Towards Dynamic Neural Scene Representations on Real-World Videos

no code implementations • 9 Oct 2023 • Maximilian Seitzer, Sjoerd van Steenkiste, Thomas Kipf, Klaus Greff, Mehdi S. M. Sajjadi

Our Dynamic Scene Transformer (DyST) model leverages recent work in neural scene representation to learn a latent decomposition of monocular real-world videos into scene content, per-view scene dynamics, and camera pose.
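At the interface level, the decomposition described above can be pictured as three groups of latents per video. The sketch below is only an assumed illustration of that interface (field names, shapes, and the placeholder encoder are made up), not the DyST architecture.

```python
# Illustrative interface for a content / per-view dynamics / camera-pose split.
from dataclasses import dataclass

import torch


@dataclass
class SceneLatents:
    scene_content: torch.Tensor   # shared across all frames of a video
    scene_dynamics: torch.Tensor  # one latent per frame (per-view dynamics)
    camera_pose: torch.Tensor     # one latent per frame (viewpoint)


def decompose_video(frames: torch.Tensor) -> SceneLatents:
    """Placeholder encoder: frames (T, C, H, W) -> three latent groups.
    Stands in for a learned model; here it just produces random latents."""
    T = frames.shape[0]
    return SceneLatents(
        scene_content=torch.randn(1, 256),
        scene_dynamics=torch.randn(T, 32),
        camera_pose=torch.randn(T, 8),
    )


if __name__ == "__main__":
    latents = decompose_video(torch.rand(12, 3, 64, 64))
    print(latents.scene_content.shape, latents.scene_dynamics.shape, latents.camera_pose.shape)
```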
