no code implementations • 13 Jan 2025 • Frederik Träuble, Lachlan Stuart, Andreas Georgiou, Pascal Notin, Arash Mehrjou, Ron Schwessinger, Mathieu Chevalley, Kim Branson, Bernhard Schölkopf, Cornelia van Duijn, Debora Marks, Patrick Schwab
Here, we present Phenformer, a multi-scale genetic language model that learns, directly from DNA sequences of up to 88 million base pairs, to generate mechanistic hypotheses about how differences in genome sequence lead to disease-relevant changes in expression across cell types and tissues.
no code implementations • 26 Nov 2023 • Vedant Shah, Frederik Träuble, Ashish Malik, Hugo Larochelle, Michael Mozer, Sanjeev Arora, Yoshua Bengio, Anirudh Goyal
Machine unlearning, i.e. erasing knowledge about a designated "forget set" from a trained model, can be costly or infeasible with existing techniques.
no code implementations • 4 Nov 2022 • Nasim Rahaman, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, Bernhard Schölkopf
Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications.
1 code implementation • 1 Oct 2022 • Cian Eastwood, Andrei Liviu Nicolicioiu, Julius von Kügelgen, Armin Kekić, Frederik Träuble, Andrea Dittadi, Bernhard Schölkopf
In representation learning, a common approach is to seek representations which disentangle the underlying factors of variation.
1 code implementation • 22 Jul 2022 • Frederik Träuble, Anirudh Goyal, Nasim Rahaman, Michael Mozer, Kenji Kawaguchi, Yoshua Bengio, Bernhard Schölkopf
Deep neural networks perform well on classification tasks where data streams are i.i.d.
no code implementations • 31 Jan 2022 • Davide Mambelli, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, Francesco Locatello
Although reinforcement learning has seen remarkable progress in recent years, solving robust dexterous object-manipulation tasks in multi-object settings remains a challenge.
no code implementations • NeurIPS Workshop SVRHM 2021 • Yukun Chen, Andrea Dittadi, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf
Disentanglement is hypothesized to be beneficial for a number of downstream tasks.
1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel
An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.
no code implementations • ICLR 2022 • Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.
no code implementations • NeurIPS 2021 • Frederik Träuble, Julius von Kügelgen, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Peter Gehler
; and (ii) if the new predictions differ from the current ones, should we update?
no code implementations • ICML Workshop URL 2021 • Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Vincent Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence.
no code implementations • 8 Jan 2021 • Frederik Träuble, Stephen Millmore, Nikolaos Nikiforakis
Coupled with a suitable interpolation procedure, this offers an accurate and computationally efficient technique for simulating partially ionised air plasma.
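The interpolation step mentioned above can be illustrated with a minimal sketch: a plasma property is precomputed on a (density, temperature) grid, and queries between grid points are answered by bilinear interpolation. This is only an illustration of the general lookup-table technique, not the paper's implementation; the axes, table values, and function name here are all hypothetical.

```python
import numpy as np

# Hypothetical precomputed lookup table: some plasma property (e.g. a
# transport coefficient) tabulated over density and temperature axes.
# The ranges and the stand-in formula below are purely illustrative.
rho_axis = np.linspace(0.1, 10.0, 50)      # density axis (illustrative units)
T_axis = np.linspace(300.0, 30000.0, 60)   # temperature axis (illustrative units)
table = np.sqrt(rho_axis[:, None]) * np.log(T_axis[None, :])  # stand-in data

def interp_property(rho, T):
    """Bilinearly interpolate the tabulated property at (rho, T)."""
    # Locate the grid cell containing the query point (clamped to the grid).
    i = int(np.clip(np.searchsorted(rho_axis, rho) - 1, 0, len(rho_axis) - 2))
    j = int(np.clip(np.searchsorted(T_axis, T) - 1, 0, len(T_axis) - 2))
    # Fractional position of the query inside the enclosing cell.
    fr = (rho - rho_axis[i]) / (rho_axis[i + 1] - rho_axis[i])
    ft = (T - T_axis[j]) / (T_axis[j + 1] - T_axis[j])
    # Weighted average of the four corner values.
    return ((1 - fr) * (1 - ft) * table[i, j]
            + fr * (1 - ft) * table[i + 1, j]
            + (1 - fr) * ft * table[i, j + 1]
            + fr * ft * table[i + 1, j + 1])
```

Evaluating the table once offline and interpolating at run time is what makes such schemes computationally cheap compared with solving the underlying equilibrium chemistry at every grid cell and time step.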
Subjects: Computational Physics; Applied Physics; Fluid Dynamics; Plasma Physics
no code implementations • ICLR 2021 • Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf
Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.
1 code implementation • ICLR 2021 • Ossama Ahmed, Frederik Träuble, Anirudh Goyal, Alexander Neitz, Yoshua Bengio, Bernhard Schölkopf, Manuel Wüthrich, Stefan Bauer
To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment.
2 code implementations • 14 Jun 2020 • Frederik Träuble, Elliot Creager, Niki Kilbertus, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer
The focus of disentanglement approaches has been on identifying independent factors of variation in data.