Search Results for author: Frederik Träuble

Found 10 papers, 2 papers with code

Compositional Multi-Object Reinforcement Learning with Linear Relation Networks

no code implementations 31 Jan 2022 Davide Mambelli, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, Francesco Locatello

Although reinforcement learning has seen remarkable progress in recent years, solving robust dexterous object-manipulation tasks in multi-object settings remains a challenge.

Reinforcement Learning

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation ICLR 2022 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

The Role of Pretrained Representations for the OOD Generalization of Reinforcement Learning Agents

no code implementations ICLR 2022 Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.

Reinforcement Learning, Representation Learning

An improved equation of state for air plasma simulations

no code implementations 8 Jan 2021 Frederik Träuble, Stephen Millmore, Nikolaos Nikiforakis

Coupled with a suitable interpolation procedure, this offers an accurate and computationally efficient technique for simulating partially ionised air plasma.

Computational Physics, Applied Physics, Fluid Dynamics, Plasma Physics

On the Transfer of Disentangled Representations in Realistic Settings

no code implementations ICLR 2021 Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf

Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.

Disentanglement

CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning

no code implementations ICLR 2021 Ossama Ahmed, Frederik Träuble, Anirudh Goyal, Alexander Neitz, Yoshua Bengio, Bernhard Schölkopf, Manuel Wüthrich, Stefan Bauer

To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment.

Transfer Learning
