Search Results for author: Andrea Dittadi

Found 16 papers, 6 papers with code

Inductive Biases for Object-Centric Representations in the Presence of Complex Textures

no code implementations 18 Apr 2022 Samuele Papa, Ole Winther, Andrea Dittadi

Understanding which inductive biases could be helpful for the unsupervised learning of object-centric representations of natural scenes is challenging.

Style Transfer

Conditional Generation of Medical Time Series for Extrapolation to Underrepresented Populations

1 code implementation 20 Jan 2022 Simon Bing, Andrea Dittadi, Stefan Bauer, Patrick Schwab

We demonstrate experimentally that HealthGen generates synthetic cohorts that are significantly more faithful to real patient EHRs than the current state of the art. Moreover, augmenting real data sets with conditionally generated cohorts of underrepresented patient subpopulations significantly enhances the generalisability of models derived from these data sets to different patient populations.

Time Series

The Role of Pretrained Representations for the OOD Generalization of Reinforcement Learning Agents

no code implementations ICLR 2022 Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.

Reinforcement Learning Representation Learning

Generalization and Robustness Implications in Object-Centric Learning

no code implementations 1 Jul 2021 Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello

The idea behind object-centric representation learning is that natural scenes can be better modeled as compositions of objects and their relations than as distributed representations.

Representation Learning Systematic Generalization

Planning from Pixels in Atari with Learned Symbolic Representations

1 code implementation 16 Dec 2020 Andrea Dittadi, Frederik K. Drachmann, Thomas Bolander

Width-based planning methods have been shown to yield state-of-the-art performance in the Atari 2600 domain using pixel input.
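Width-based planning prunes the search by a novelty test: a state is expanded only if it makes some feature atom true for the first time during the search. A minimal sketch of IW(1), the width-1 instance, is shown below; the function name `iw1_plan` and the toy state encoding are hypothetical illustrations, not the paper's implementation, which plans over learned symbolic representations of Atari screens.

```python
from collections import deque

def iw1_plan(initial_state, successors, is_goal):
    """Breadth-first search with IW(1) novelty pruning.

    A state (a tuple of feature values) is considered novel, and
    therefore expanded, only if some (index, value) atom appears
    for the first time in the search. `successors(state)` yields
    (action, next_state) pairs.
    """
    seen_atoms = set()

    def is_novel(state):
        # Atoms of a state are its individual (feature index, value) pairs.
        new_atoms = set(enumerate(state)) - seen_atoms
        if new_atoms:
            seen_atoms.update(new_atoms)
            return True
        return False

    is_novel(initial_state)  # seed the atom table with the initial state
    frontier = deque([(initial_state, [])])
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for action, nxt in successors(state):
            if is_novel(nxt):  # prune states that add no new atom
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable within width 1
```

On a toy 2-D grid with `right`/`up` moves, `iw1_plan((0, 0), succ, lambda s: s == (2, 0))` finds `['right', 'right']` while pruning states whose coordinates have all been seen before.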

Optimal Variance Control of the Score-Function Gradient Estimator for Importance-Weighted Bounds

1 code implementation NeurIPS 2020 Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther

Empirically, for the training of both continuous and discrete generative models, the proposed method yields superior variance reduction, resulting in an SNR for IWAE that increases with $K$ without relying on the reparameterization trick.

On the Transfer of Disentangled Representations in Realistic Settings

no code implementations ICLR 2021 Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf

Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.

Disentanglement

Optimal Variance Control of the Score Function Gradient Estimator for Importance Weighted Bounds

1 code implementation 5 Aug 2020 Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther

This paper introduces novel results for the score function gradient estimator of the importance weighted variational bound (IWAE).
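For context, the importance-weighted bound in question is the standard $K$-sample IWAE objective; the gradient identity below is the textbook score-function form of its inference-network gradient, stated here as background rather than as the paper's contribution (which concerns controlling this estimator's variance):

$$\mathcal{L}_K = \mathbb{E}_{z_{1:K}\sim q_\phi(\cdot\mid x)}\Big[\log \hat{Z}_K\Big], \qquad \hat{Z}_K = \frac{1}{K}\sum_{k=1}^{K} w_k, \qquad w_k = \frac{p_\theta(x, z_k)}{q_\phi(z_k\mid x)} .$$

Differentiating both the sampling density and the integrand with respect to $\phi$ gives

$$\nabla_\phi \mathcal{L}_K = \mathbb{E}\Big[\sum_{k=1}^{K}\big(\log \hat{Z}_K - v_k\big)\,\nabla_\phi \log q_\phi(z_k\mid x)\Big], \qquad v_k = \frac{w_k}{\sum_{j} w_j},$$

which requires no reparameterization and therefore applies to discrete latent variables, at the cost of higher variance.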

Towards Hierarchical Discrete Variational Autoencoders

no code implementations AABI Symposium 2019 Valentin Liévin, Andrea Dittadi, Lars Maaløe, Ole Winther

We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hierarchy of variational memory layers.

Semi-Supervised Variational Autoencoder for Survival Prediction

1 code implementation 10 Oct 2019 Sveinn Pálsson, Stefano Cerri, Andrea Dittadi, Koen van Leemput

In this paper we propose a semi-supervised variational autoencoder for classification of overall survival groups from tumor segmentation masks.

Brain Tumor Segmentation Survival Analysis +2

LAVAE: Disentangling Location and Appearance

no code implementations 25 Sep 2019 Andrea Dittadi, Ole Winther

We propose a probabilistic generative model for unsupervised learning of structured, interpretable, object-based representations of visual scenes.

Variational Inference
