Search Results for author: Andrea Dittadi

Found 23 papers, 11 papers with code

DiffEnc: Variational Diffusion with a Learned Encoder

1 code implementation • 30 Oct 2023 • Beatrix M. G. Nielsen, Anders Christensen, Andrea Dittadi, Ole Winther

Diffusion models may be viewed as hierarchical variational autoencoders (VAEs) with two improvements: parameter sharing for the conditional distributions in the generative process and efficient computation of the loss as independent terms over the hierarchy.
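In the standard hierarchical-VAE view this summary alludes to (a generic sketch of the diffusion ELBO, not necessarily this paper's exact notation or parameterization), the loss splits into independent per-timestep terms:

```latex
\mathcal{L}(x) =
  \underbrace{D_{\mathrm{KL}}\!\left(q(z_T \mid x)\,\|\,p(z_T)\right)}_{\text{prior term}}
  + \sum_{t=2}^{T}
    \mathbb{E}_{q}\!\left[
      D_{\mathrm{KL}}\!\left(q(z_{t-1} \mid z_t, x)\,\|\,p_\theta(z_{t-1} \mid z_t)\right)
    \right]
  \;\underbrace{-\;\mathbb{E}_{q}\!\left[\log p_\theta(x \mid z_1)\right]}_{\text{reconstruction term}}
```

Parameter sharing means a single network $p_\theta$ serves every step $t$, and because the sum decomposes term-by-term, each KL can be estimated independently by sampling a timestep uniformly at random.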

On the Generalization of Learned Structured Representations

no code implementations • 25 Apr 2023 • Andrea Dittadi

In representation learning, large datasets are leveraged to learn generic data representations that may be useful for efficient learning of arbitrary downstream tasks.

Out-of-Distribution Generalization • Representation Learning +1

Assessing Neural Network Robustness via Adversarial Pivotal Tuning

no code implementations • 17 Nov 2022 • Peter Ebert Christensen, Vésteinn Snæbjarnarson, Andrea Dittadi, Serge Belongie, Sagie Benaim

We demonstrate that APT is capable of a wide range of class-preserving semantic image manipulations that fool a variety of pretrained classifiers.


Assaying Out-Of-Distribution Generalization in Transfer Learning

1 code implementation • 19 Jul 2022 • Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello

Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs resulting in different recommendations.

Adversarial Robustness • Out-of-Distribution Generalization +1

Inductive Biases for Object-Centric Representations in the Presence of Complex Textures

no code implementations • 18 Apr 2022 • Samuele Papa, Ole Winther, Andrea Dittadi

Understanding which inductive biases could be helpful for the unsupervised learning of object-centric representations of natural scenes is challenging.

Object Segmentation +1

Conditional Generation of Medical Time Series for Extrapolation to Underrepresented Populations

1 code implementation • 20 Jan 2022 • Simon Bing, Andrea Dittadi, Stefan Bauer, Patrick Schwab

We demonstrate experimentally that HealthGen generates synthetic cohorts that are significantly more faithful to real patient EHRs than the current state-of-the-art. Moreover, augmenting real data sets with conditionally generated cohorts of underrepresented patient subpopulations significantly enhances the generalisability of models derived from these data sets to different patient populations.

Time Series • Time Series Analysis

The Role of Pretrained Representations for the OOD Generalization of Reinforcement Learning Agents

no code implementations • ICLR 2022 • Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.

Reinforcement Learning (RL) • Representation Learning

Generalization and Robustness Implications in Object-Centric Learning

1 code implementation • 1 Jul 2021 • Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello

The idea behind object-centric representation learning is that natural scenes can better be modeled as compositions of objects and their relations as opposed to distributed representations.

Inductive Bias • Object +3

Planning from Pixels in Atari with Learned Symbolic Representations

1 code implementation • 16 Dec 2020 • Andrea Dittadi, Frederik K. Drachmann, Thomas Bolander

Width-based planning methods have been shown to yield state-of-the-art performance in the Atari 2600 domain using pixel input.

Optimal Variance Control of the Score-Function Gradient Estimator for Importance-Weighted Bounds

1 code implementation • NeurIPS 2020 • Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther

Empirically, for the training of both continuous and discrete generative models, the proposed method yields superior variance reduction, resulting in an SNR for IWAE that increases with $K$ without relying on the reparameterization trick.
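A toy numerical sketch of the importance-weighted (IWAE) bound this result concerns. The Gaussian model, function names, and parameters below are illustrative assumptions, not the authors' code; the point is only that the bound tightens as the number of importance samples $K$ grows:

```python
import numpy as np

# Illustrative model (assumed, not from the paper):
#   prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1), proposal q(z|x) = N(x, 1).
rng = np.random.default_rng(0)

def iwae_bound(x, K):
    """One Monte Carlo estimate of log (1/K) sum_k p(x, z_k) / q(z_k | x)."""
    z = rng.normal(loc=x, scale=1.0, size=K)                 # z_k ~ q(z|x)
    c = 0.5 * np.log(2 * np.pi)
    log_q = -0.5 * (z - x) ** 2 - c                          # log q(z_k | x)
    log_p = (-0.5 * z ** 2 - c) + (-0.5 * (x - z) ** 2 - c)  # log p(z_k) + log p(x | z_k)
    log_w = log_p - log_q                                    # log importance weights
    return np.logaddexp.reduce(log_w) - np.log(K)            # log-mean-exp over K samples

# Averaged over many repetitions, the K = 64 bound is tighter (larger)
# than the K = 1 bound (the plain ELBO), approaching log p(x) from below.
k1 = np.mean([iwae_bound(1.0, 1) for _ in range(2000)])
k64 = np.mean([iwae_bound(1.0, 64) for _ in range(2000)])
print(k1, k64)
```

For this toy model, $\log p(x{=}1) = \log \mathcal{N}(1; 0, 2) \approx -1.52$, so the gap between the two averages shows directly how much tighter the $K=64$ bound is.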

On the Transfer of Disentangled Representations in Realistic Settings

no code implementations • ICLR 2021 • Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf

Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.


Optimal Variance Control of the Score Function Gradient Estimator for Importance Weighted Bounds

1 code implementation • 5 Aug 2020 • Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther

This paper introduces novel results for the score function gradient estimator of the importance weighted variational bound (IWAE).
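In its generic form (a standard identity, not this paper's specific estimator), the score-function gradient rewrites the derivative of an expectation under $q_\phi$ as an expectation weighted by the score:

```latex
\nabla_{\phi}\,\mathbb{E}_{q_{\phi}(z)}\!\left[f(z)\right]
  = \mathbb{E}_{q_{\phi}(z)}\!\left[f(z)\,\nabla_{\phi}\log q_{\phi}(z)\right]
```

For the IWAE bound, $f$ is the log-mean of $K$ importance weights (which itself also depends on $\phi$, contributing additional terms). This estimator needs no reparameterization but is typically high-variance, which is what motivates the variance control studied here.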

Towards Hierarchical Discrete Variational Autoencoders

no code implementations • AABI Symposium 2019 • Valentin Liévin, Andrea Dittadi, Lars Maaløe, Ole Winther

We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hierarchy of variational memory layers.

Semi-Supervised Variational Autoencoder for Survival Prediction

1 code implementation • 10 Oct 2019 • Sveinn Pálsson, Stefano Cerri, Andrea Dittadi, Koen van Leemput

In this paper we propose a semi-supervised variational autoencoder for classification of overall survival groups from tumor segmentation masks.

Brain Tumor Segmentation • Segmentation +3

LAVAE: Disentangling Location and Appearance

no code implementations • 25 Sep 2019 • Andrea Dittadi, Ole Winther

We propose a probabilistic generative model for unsupervised learning of structured, interpretable, object-based representations of visual scenes.

Object • Variational Inference
