no code implementations • 9 Nov 2023 • Amir Mohammad Karimi Mamaghan, Andrea Dittadi, Stefan Bauer, Karl Henrik Johansson, Francesco Quinzan
Causal reasoning can be considered a cornerstone of intelligent systems.
1 code implementation • 30 Oct 2023 • Beatrix M. G. Nielsen, Anders Christensen, Andrea Dittadi, Ole Winther
Diffusion models may be viewed as hierarchical variational autoencoders (VAEs) with two improvements: parameter sharing for the conditional distributions in the generative process and efficient computation of the loss as independent terms over the hierarchy.
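The two improvements mentioned above can be sketched in a few lines. The noise schedule, model, and parameter below are illustrative toy choices, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear noise schedule (illustrative values, not from the paper).
T = 10
betas = np.linspace(1e-4, 0.2, T)
alphas_bar = np.cumprod(1.0 - betas)

def shared_eps_model(x_t, t, w):
    # A single parameter `w` is reused at every timestep t: this is the
    # "parameter sharing" across the conditional distributions.
    return w * x_t + 0.01 * t

def diffusion_loss(x0, w):
    # The variational loss decomposes into independent per-timestep terms,
    # so each term can be estimated separately and cheaply.
    terms = []
    for t in range(T):
        eps = rng.standard_normal(x0.shape)
        x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
        terms.append(np.mean((shared_eps_model(x_t, t, w) - eps) ** 2))
    return float(np.mean(terms))

loss = diffusion_loss(rng.standard_normal(8), w=0.5)
print(loss > 0.0)
```

In a real diffusion model the shared network is a deep denoiser conditioned on the timestep, but the loss structure is the same: one independent regression term per level of the hierarchy.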
no code implementations • 25 Apr 2023 • Andrea Dittadi
In representation learning, large datasets are leveraged to learn generic data representations that may be useful for efficient learning of arbitrary downstream tasks.
Out-of-Distribution Generalization • Representation Learning • +1
no code implementations • 17 Nov 2022 • Peter Ebert Christensen, Vésteinn Snæbjarnarson, Andrea Dittadi, Serge Belongie, Sagie Benaim
We demonstrate that APT is capable of a wide range of class-preserving semantic image manipulations that fool a variety of pretrained classifiers.
1 code implementation • 1 Oct 2022 • Cian Eastwood, Andrei Liviu Nicolicioiu, Julius von Kügelgen, Armin Kekić, Frederik Träuble, Andrea Dittadi, Bernhard Schölkopf
In representation learning, a common approach is to seek representations which disentangle the underlying factors of variation.
1 code implementation • 19 Jul 2022 • Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied across different research programs, resulting in different recommendations.
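As an illustration of one such proxy target, here is a minimal sketch of expected calibration error (ECE) with equal-width confidence bins; the function name, binning scheme, and example values are illustrative choices, not the paper's protocol:

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    # Bin predictions by confidence, then compare each bin's accuracy
    # with its mean confidence, weighted by bin size.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.clip((confidences * n_bins).astype(int), 0, n_bins - 1)
    total = len(confidences)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.sum() / total * abs(correct[mask].mean() - confidences[mask].mean())
    return err

val = ece([0.95, 0.85, 0.65, 0.55], [1, 1, 0, 1])
print(val)
```

A perfectly calibrated classifier has ECE close to zero; larger values mean its confidence systematically over- or under-states its accuracy.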
Adversarial Robustness • Out-of-Distribution Generalization • +1
1 code implementation • 15 Jun 2022 • Tobias Höppe, Arash Mehrjou, Stefan Bauer, Didrik Nielsen, Andrea Dittadi
By varying the mask we condition on, the model is able to perform video prediction, infilling, and upsampling.
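The role of the mask can be shown schematically: a boolean vector marks which frames are observed (conditioned on), and the remaining frames are generated. The clip length and mask layouts below are hypothetical examples, not the paper's configuration:

```python
import numpy as np

T = 8  # frames in a toy clip

# True entries mark observed frames the model conditions on.
prediction = np.zeros(T, dtype=bool)
prediction[:2] = True            # condition on the first frames, predict the future

infilling = np.zeros(T, dtype=bool)
infilling[[0, T - 1]] = True     # condition on the endpoints, fill in between

upsampling = np.zeros(T, dtype=bool)
upsampling[::2] = True           # condition on every other frame, interpolate the rest

for name, mask in [("prediction", prediction), ("infilling", infilling), ("upsampling", upsampling)]:
    print(name, mask.astype(int))
```

A single trained model handles all three tasks; only the mask supplied at sampling time changes.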
Ranked #2 on Video Generation on BAIR Robot Pushing
no code implementations • 18 Apr 2022 • Samuele Papa, Ole Winther, Andrea Dittadi
Understanding which inductive biases could be helpful for the unsupervised learning of object-centric representations of natural scenes is challenging.
no code implementations • 17 Mar 2022 • Darius Chira, Ilian Haralampiev, Ole Winther, Andrea Dittadi, Valentin Liévin
Image super-resolution (SR) techniques are used to generate a high-resolution image from a low-resolution image.
1 code implementation • 20 Jan 2022 • Simon Bing, Andrea Dittadi, Stefan Bauer, Patrick Schwab
We demonstrate experimentally that HealthGen generates synthetic cohorts that are significantly more faithful to real patient EHRs than the current state of the art. Moreover, augmenting real data sets with conditionally generated cohorts of underrepresented patient subpopulations can significantly enhance the generalisability of models derived from these data sets to different patient populations.
no code implementations • NeurIPS Workshop SVRHM 2021 • Yukun Chen, Andrea Dittadi, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf
Disentanglement is hypothesized to be beneficial for a number of downstream tasks.
no code implementations • ICLR 2022 • Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.
1 code implementation • 1 Jul 2021 • Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello
The idea behind object-centric representation learning is that natural scenes are better modeled as compositions of objects and their relations than as distributed representations.
no code implementations • ICML Workshop URL 2021 • Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Vincent Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence.
Out-of-Distribution Generalization • reinforcement-learning • +2
no code implementations • ICCV 2021 • Andrea Dittadi, Sebastian Dziadzio, Darren Cosker, Ben Lundell, Thomas J. Cashman, Jamie Shotton
The increased availability and maturity of head-mounted and wearable devices open up opportunities for remote communication and collaboration.
1 code implementation • 16 Dec 2020 • Andrea Dittadi, Frederik K. Drachmann, Thomas Bolander
Width-based planning methods have been shown to yield state-of-the-art performance in the Atari 2600 domain using pixel input.
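At the core of width-based planners such as IW(1) is a simple novelty test: a state is expanded only if it makes at least one boolean feature true for the first time during the search. A minimal sketch, with hypothetical feature sets:

```python
# IW(1) novelty pruning: track which features have ever been seen,
# and prune states that contribute nothing new.
seen = set()

def is_novel(state_features):
    new = state_features - seen
    seen.update(new)
    return bool(new)

first = is_novel({"a", "b"})   # both features unseen -> novel, keep
second = is_novel({"b"})       # nothing new -> prune
print(first, second)
```

Higher-width variants IW(k) apply the same test to feature tuples of size up to k; with pixel input, the features are typically derived from screen patches.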
1 code implementation • NeurIPS 2020 • Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther
Empirically, for the training of both continuous and discrete generative models, the proposed method yields superior variance reduction, resulting in an SNR for IWAE that increases with $K$ without relying on the reparameterization trick.
no code implementations • ICLR 2021 • Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf
Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.
1 code implementation • 5 Aug 2020 • Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther
This paper introduces novel results for the score function gradient estimator of the importance weighted variational bound (IWAE).
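For context, the importance weighted bound itself can be estimated in a few lines. The Gaussian toy model and proposal below are illustrative assumptions, not the paper's experiments: latent z ~ N(0,1), likelihood x|z ~ N(z,1), proposal q(z|x) = N(x/2, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_norm_pdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def iwae_bound(x, K, n_mc=2000):
    # K importance samples from q(z|x), averaged over n_mc Monte Carlo replicates.
    z = x / 2 + rng.standard_normal((n_mc, K))
    log_w = log_norm_pdf(z, 0.0, 1.0) + log_norm_pdf(x, z, 1.0) - log_norm_pdf(z, x / 2, 1.0)
    # Numerically stable log-mean-exp over the K importance samples.
    m = log_w.max(axis=1, keepdims=True)
    return float(np.mean(m.squeeze() + np.log(np.mean(np.exp(log_w - m), axis=1))))

b1, b16 = iwae_bound(1.0, K=1), iwae_bound(1.0, K=16)
print(b16 > b1)  # the bound tightens as K grows
```

With K = 1 this reduces to the standard ELBO; the paper's contribution concerns the score function gradient of this bound, whose naive estimator degrades as K grows.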
2 code implementations • 14 Jun 2020 • Frederik Träuble, Elliot Creager, Niki Kilbertus, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer
The focus of disentanglement approaches has been on identifying independent factors of variation in data.
no code implementations • Approximate Inference AABI Symposium 2019 • Valentin Liévin, Andrea Dittadi, Lars Maaløe, Ole Winther
We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hierarchy of variational memory layers.
1 code implementation • 10 Oct 2019 • Sveinn Pálsson, Stefano Cerri, Andrea Dittadi, Koen van Leemput
In this paper we propose a semi-supervised variational autoencoder for classification of overall survival groups from tumor segmentation masks.
no code implementations • 25 Sep 2019 • Andrea Dittadi, Ole Winther
We propose a probabilistic generative model for unsupervised learning of structured, interpretable, object-based representations of visual scenes.