1 code implementation • 4 Apr 2024 • Alp Eren Sari, Francesco Locatello, Paolo Favaro
We present two practical improvement techniques for unsupervised segmentation learning.
no code implementations • 13 Mar 2024 • Danru Xu, Dingling Yao, Sébastien Lachapelle, Perouz Taslakian, Julius von Kügelgen, Francesco Locatello, Sara Magliacane
Causal representation learning aims at identifying high-level causal variables from perceptual data.
1 code implementation • 20 Feb 2024 • Adeel Pervez, Francesco Locatello, Efstratios Gavves
This paper presents Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
no code implementations • 20 Feb 2024 • Md Rifat Arefin, Yan Zhang, Aristide Baratin, Francesco Locatello, Irina Rish, Dianbo Liu, Kenji Kawaguchi
Models prone to spurious correlations in training data often produce brittle predictions and introduce unintended biases.
no code implementations • 8 Feb 2024 • Sindy Löwe, Francesco Locatello, Max Welling
In human cognition, the binding problem describes the open question of how the brain flexibly integrates diverse information into cohesive object representations.
1 code implementation • 7 Nov 2023 • Dingling Yao, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz Taslakian, Georg Martius, Julius von Kügelgen, Francesco Locatello
We present a unified framework for studying the identifiability of representations learned from simultaneously observed views, such as different data modalities.
1 code implementation • NeurIPS 2023 • Valentino Maiorca, Luca Moschella, Antonio Norelli, Marco Fumero, Francesco Locatello, Emanuele Rodolà
While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible.
no code implementations • 22 Oct 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Francesco Locatello
The use of simulated data in the field of causal discovery is ubiquitous due to the scarcity of annotated real data.
no code implementations • ICCV 2023 • Ke Fan, Zechen Bai, Tianjun Xiao, Dominik Zietlow, Max Horn, Zixu Zhao, Carl-Johann Simon-Gabriel, Mike Zheng Shou, Francesco Locatello, Bernt Schiele, Thomas Brox, Zheng Zhang, Yanwei Fu, Tong He
In this paper, we show that recent advances in video representation learning and pre-trained vision-language models allow for substantial improvements in self-supervised video object localization.
1 code implementation • ICCV 2023 • Zixu Zhao, Jiaze Wang, Max Horn, Yizhuo Ding, Tong He, Zechen Bai, Dominik Zietlow, Carl-Johann Simon-Gabriel, Bing Shuai, Zhuowen Tu, Thomas Brox, Bernt Schiele, Yanwei Fu, Francesco Locatello, Zheng Zhang, Tianjun Xiao
Unsupervised object-centric learning methods allow the partitioning of scenes into entities without additional localization information and are excellent candidates for reducing the annotation burden of multiple-object tracking (MOT) pipelines.
1 code implementation • 18 Jul 2023 • Philipp M. Faller, Leena Chennuru Vankadara, Atalanti A. Mastakouri, Francesco Locatello, Dominik Janzing
In this work, we propose a novel method for falsifying the output of a causal discovery algorithm in the absence of ground truth.
no code implementations • 18 Jul 2023 • Avinash Kori, Francesco Locatello, Fabio De Sousa Ribeiro, Francesca Toni, Ben Glocker
The extraction of modular object-centric representations for downstream tasks is an emerging area of research.
no code implementations • 30 May 2023 • Zhenyu Zhu, Fanghui Liu, Grigorios G Chrysos, Francesco Locatello, Volkan Cevher
This paper focuses on over-parameterized deep neural networks (DNNs) with ReLU activation functions and proves that, when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification while obtaining (nearly) zero training error under the lazy training regime.
no code implementations • 20 Apr 2023 • Max F. Burg, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco Locatello, Chris Russell
Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification.
no code implementations • NeurIPS 2023 • Marco Fumero, Florian Wenzel, Luca Zancato, Alessandro Achille, Emanuele Rodolà, Stefano Soatto, Bernhard Schölkopf, Francesco Locatello
Recovering the latent factors of variation of high dimensional data has so far focused on simple synthetic settings.
no code implementations • 6 Apr 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello
This paper demonstrates how to discover the whole causal graph from the second derivative of the log-likelihood in observational non-linear additive Gaussian noise models.
no code implementations • 6 Apr 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello
Causal discovery methods are intrinsically constrained by the set of assumptions needed to ensure structure identifiability.
no code implementations • 4 Apr 2023 • Dong Lao, Zhengyang Hu, Francesco Locatello, Yanchao Yang, Stefano Soatto
We introduce a method to segment the visual field into independently moving regions, trained with no ground truth or supervision.
1 code implementation • 12 Jan 2023 • Yuejiang Liu, Alexandre Alahi, Chris Russell, Max Horn, Dominik Zietlow, Bernhard Schölkopf, Francesco Locatello
Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions.
no code implementations • 4 Nov 2022 • Nasim Rahaman, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, Bernhard Schölkopf
Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications.
1 code implementation • 23 Oct 2022 • Jian Yao, Yuxin Hong, Chiyu Wang, Tianjun Xiao, Tong He, Francesco Locatello, David Wipf, Yanwei Fu, Zheng Zhang
The key intuition is that the occluded part of an object can be explained away if that part is visible in other frames, even if deformed, as long as the deformation can be reasonably learned.
no code implementations • 14 Oct 2022 • Nasim Rahaman, Martin Weiss, Francesco Locatello, Chris Pal, Yoshua Bengio, Bernhard Schölkopf, Li Erran Li, Nicolas Ballas
Recent work has seen the development of general-purpose neural architectures that can be trained to perform tasks across diverse data modalities.
no code implementations • 30 Sep 2022 • Luca Moschella, Valentino Maiorca, Marco Fumero, Antonio Norelli, Francesco Locatello, Emanuele Rodolà
Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations.
3 code implementations • 29 Sep 2022 • Maximilian Seitzer, Max Horn, Andrii Zadaianchuk, Dominik Zietlow, Tianjun Xiao, Carl-Johann Simon-Gabriel, Tong He, Zheng Zhang, Bernhard Schölkopf, Thomas Brox, Francesco Locatello
Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world.
no code implementations • 23 Sep 2022 • Samarth Sinha, Peter Gehler, Francesco Locatello, Bernt Schiele
We find that TeST sets a new state of the art for test-time domain adaptation algorithms.
1 code implementation • 19 Jul 2022 • Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied across different research programs, resulting in different recommendations.
1 code implementation • 11 Jul 2022 • Andrii Zadaianchuk, Matthaeus Kleindessner, Yi Zhu, Francesco Locatello, Thomas Brox
In this paper, we show that recent advances in self-supervised feature learning enable unsupervised object discovery and semantic segmentation, with performance matching the state of supervised semantic segmentation from 10 years ago.
1 code implementation • 9 Apr 2022 • Michael Lohaus, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco Locatello, Chris Russell
Based on this observation, we investigate an alternative fairness approach: we add a second classification head to the network to explicitly predict the protected attribute (such as race or gender) alongside the original task.
no code implementations • CVPR 2022 • Dominik Zietlow, Michael Lohaus, Guha Balakrishnan, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Chris Russell
Algorithmic fairness is frequently motivated in terms of a trade-off in which overall performance is decreased so as to improve performance on disadvantaged groups where the algorithm would otherwise be less accurate.
no code implementations • 8 Mar 2022 • Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, Francesco Locatello
This paper demonstrates how to recover causal graphs from the score of the data distribution in non-linear additive (Gaussian) noise models.
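As a hedged sketch of the kind of criterion such score-based methods rely on (notation here is generic and not quoted from the paper): writing $s(x) = \nabla_x \log p(x)$ for the score of the data distribution, in an additive Gaussian noise model a variable can be identified as a leaf of the causal graph when the matching diagonal entry of the score's Jacobian does not vary across samples:

```latex
s(x) = \nabla_x \log p(x), \qquad
x_j \text{ is a leaf} \iff \operatorname{Var}_x\!\left[ \frac{\partial s_j(x)}{\partial x_j} \right] = 0 .
```

Repeatedly identifying and removing leaves then yields a topological order of the variables, after which spurious edges can be pruned.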
1 code implementation • 26 Feb 2022 • Gideon Dresdner, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello, Volkan Cevher, Alp Yurtsever
We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
no code implementations • 31 Jan 2022 • Davide Mambelli, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, Francesco Locatello
Although reinforcement learning has seen remarkable progress in recent years, solving robust dexterous object-manipulation tasks in multi-object settings remains a challenge.
1 code implementation • 26 Nov 2021 • Francesco Locatello
The world is structured in countless ways.
1 code implementation • 13 Oct 2021 • Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf
Learning generative object models from unlabelled videos is a long-standing problem and is required for causal scene modeling.
no code implementations • NeurIPS 2021 • Nasim Rahaman, Muhammad Waleed Gondal, Shruti Joshi, Peter Gehler, Yoshua Bengio, Francesco Locatello, Bernhard Schölkopf
Modern neural network architectures can leverage large amounts of data to generalize well within the training distribution.
no code implementations • ICLR 2022 • Osama Makansi, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Dominik Janzing, Thomas Brox, Bernhard Schölkopf
Applying this procedure to state-of-the-art trajectory prediction methods on standard benchmark datasets shows that they are, in fact, unable to reason about interactions.
1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel
An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.
no code implementations • ICLR 2022 • Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.
no code implementations • NeurIPS 2021 • Frederik Träuble, Julius von Kügelgen, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Peter Gehler
… and (ii) if the new predictions differ from the current ones, should we update?
1 code implementation • 1 Jul 2021 • Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello
The idea behind object-centric representation learning is that natural scenes can better be modeled as compositions of objects and their relations as opposed to distributed representations.
no code implementations • ICML Workshop URL 2021 • Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Vincent Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence.
1 code implementation • 9 Jun 2021 • Hugo Yèche, Gideon Dresdner, Francesco Locatello, Matthias Hüser, Gunnar Rätsch
Intensive care units (ICUs) are increasingly looking towards machine learning for methods to provide online monitoring of critically ill patients.
1 code implementation • NeurIPS 2021 • Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, Francesco Locatello
A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant.
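As context, the standard contrastive setup built on such augmentations treats two augmented views of the same example as a positive pair. Below is a generic InfoNCE sketch of that practice (illustrative only; it is not this paper's theoretical contribution, and the function name and constants are hypothetical):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of positive pairs (z1[i], z2[i])."""
    # Normalize so the dot products below are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # (n, n); positives on the diagonal
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))  # cross-entropy to the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Two nearly identical "views" are easy to match; unrelated embeddings are not.
aligned_loss = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
mismatched_loss = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Minimizing this loss makes representations invariant to the chosen augmentations, which is exactly the hand-crafted semantic invariance the abstract refers to.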
no code implementations • 19 May 2021 • Gideon Dresdner, Saurav Shekhar, Fabian Pedregosa, Francesco Locatello, Gunnar Rätsch
Variational Inference makes a trade-off between the capacity of the variational family and the tractability of finding an approximate posterior distribution.
no code implementations • 22 Feb 2021 • Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, Yoshua Bengio
The two fields of machine learning and graphical causality arose and developed separately.
no code implementations • 27 Oct 2020 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.
no code implementations • ICLR 2021 • Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf
Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.
no code implementations • 28 Jul 2020 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.
8 code implementations • NeurIPS 2020 • Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, Thomas Kipf
Learning object-centric representations of complex scenes is a promising step towards enabling efficient abstract reasoning from low-level perceptual features.
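As a rough illustrative sketch of the iterative-attention idea behind this line of work (heavily simplified: the learned query/key/value projections, GRU update, and residual MLP of the full Slot Attention module are omitted, so this is not the paper's method):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, iters=3, seed=0):
    """Simplified slot attention: slots iteratively compete for input features.

    inputs: (n, d) array of encoded perceptual features."""
    n, d = inputs.shape
    slots = np.random.default_rng(seed).normal(size=(num_slots, d))
    for _ in range(iters):
        # Attention is normalized over slots, so slots compete for each input.
        attn = softmax(inputs @ slots.T / np.sqrt(d), axis=1)  # (n, num_slots)
        weights = attn / attn.sum(axis=0, keepdims=True)       # weighted mean per slot
        slots = weights.T @ inputs                             # (num_slots, d)
    return slots, attn

features = np.random.default_rng(1).normal(size=(10, 8))
slots, attn = slot_attention(features, num_slots=3)
```

The softmax over slots (rather than over inputs) is the key design choice: it forces the slots to partition the input features among themselves, which is what yields object-like decompositions.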
2 code implementations • 14 Jun 2020 • Frederik Träuble, Elliot Creager, Niki Kilbertus, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer
The focus of disentanglement approaches has been on identifying independent factors of variation in data.
no code implementations • ICLR Workshop LLD 2019 • Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem
Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible, and that existing inductive biases and unsupervised methods do not suffice to consistently learn disentangled representations.
1 code implementation • ICML 2020 • Geoffrey Négiar, Gideon Dresdner, Alicia Tsai, Laurent El Ghaoui, Francesco Locatello, Robert M. Freund, Fabian Pedregosa
We propose a novel Stochastic Frank-Wolfe (a.k.a. conditional gradient) algorithm.
3 code implementations • ICML 2020 • Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen
Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.
4 code implementations • NeurIPS 2019 • Muhammad Waleed Gondal, Manuel Wüthrich, Đorđe Miladinović, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
Learning meaningful and compact representations with disentangled semantic aspects is considered to be of key importance in representation learning.
no code implementations • NeurIPS 2019 • Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem
Recently there has been significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios, and faster learning on downstream tasks.
no code implementations • NeurIPS 2019 • Sjoerd van Steenkiste, Francesco Locatello, Jürgen Schmidhuber, Olivier Bachem
A disentangled representation encodes information about the salient factors of variation in the data independently.
no code implementations • 16 May 2019 • Luigi Gresele, Paul K. Rubenstein, Arash Mehrjou, Francesco Locatello, Bernhard Schölkopf
In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available.
no code implementations • 3 May 2019 • Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem
Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible, and that existing inductive biases and unsupervised methods do not suffice to consistently learn disentangled representations.
1 code implementation • NeurIPS 2019 • Francesco Locatello, Alp Yurtsever, Olivier Fercoq, Volkan Cevher
A broad class of convex optimization problems can be formulated as a semidefinite program (SDP), minimization of a convex function over the positive-semidefinite cone subject to some affine constraints.
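In symbols, with $f$ convex, $\mathcal{A}$ a linear map, and $\mathbb{S}^n_+$ the positive-semidefinite cone, this reads:

```latex
\min_{X \in \mathbb{S}^n_+} \; f(X)
\quad \text{subject to} \quad \mathcal{A}(X) = b .
```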
8 code implementations • ICML 2019 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.
6 code implementations • ICLR 2019 • Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch
We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, and a challenging real-world medical time series application on the eICU data set.
1 code implementation • NeurIPS 2018 • Francesco Locatello, Gideon Dresdner, Rajiv Khanna, Isabel Valera, Gunnar Rätsch
Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.
no code implementations • 30 Apr 2018 • Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf
A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.
no code implementations • ICML 2018 • Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi
Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
no code implementations • 5 Aug 2017 • Francesco Locatello, Rajiv Khanna, Joydeep Ghosh, Gunnar Rätsch
Variational inference is a popular technique to approximate a possibly intractable Bayesian posterior with a more tractable one.
no code implementations • NeurIPS 2017 • Francesco Locatello, Michael Tschannen, Gunnar Rätsch, Martin Jaggi
Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees.
no code implementations • 21 Feb 2017 • Francesco Locatello, Rajiv Khanna, Michael Tschannen, Martin Jaggi
Two of the most fundamental prototypes of greedy optimization are the matching pursuit and Frank-Wolfe algorithms.
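As a hedged illustration of the shared template behind these entries (a generic projection-free sketch, not the specific variants analyzed in the papers above): Frank-Wolfe replaces projection with a linear minimization oracle (LMO) over the constraint set, here the probability simplex.

```python
import numpy as np

def frank_wolfe_simplex(grad, dim, steps=1000):
    """Frank-Wolfe on the probability simplex: projection-free updates via an LMO."""
    x = np.full(dim, 1.0 / dim)             # start at the barycenter (feasible)
    for t in range(steps):
        g = grad(x)
        s = np.zeros(dim)
        s[np.argmin(g)] = 1.0               # LMO over the simplex returns a vertex
        gamma = 2.0 / (t + 2.0)             # standard step size giving O(1/t) rates
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Minimize ||x - c||^2 over the simplex; c is feasible, so the optimum is c.
c = np.array([0.2, 0.3, 0.5])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c), dim=3)
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained without ever projecting, which is what makes the method cheap on structured constraint sets.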