Search Results for author: Francesco Locatello

Found 67 papers, 28 papers with code

Two Tricks to Improve Unsupervised Segmentation Learning

1 code implementation 4 Apr 2024 Alp Eren Sari, Francesco Locatello, Paolo Favaro

We present two practical improvement techniques for unsupervised segmentation learning.

Segmentation

Mechanistic Neural Networks for Scientific Machine Learning

1 code implementation 20 Feb 2024 Adeel Pervez, Francesco Locatello, Efstratios Gavves

This paper presents Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.

Unsupervised Concept Discovery Mitigates Spurious Correlations

no code implementations 20 Feb 2024 Md Rifat Arefin, Yan Zhang, Aristide Baratin, Francesco Locatello, Irina Rish, Dianbo Liu, Kenji Kawaguchi

Models prone to spurious correlations in training data often produce brittle predictions and introduce unintended biases.

Representation Learning

Binding Dynamics in Rotating Features

no code implementations 8 Feb 2024 Sindy Löwe, Francesco Locatello, Max Welling

In human cognition, the binding problem describes the open question of how the brain flexibly integrates diverse information into cohesive object representations.

Object

Multi-View Causal Representation Learning with Partial Observability

1 code implementation 7 Nov 2023 Dingling Yao, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz Taslakian, Georg Martius, Julius von Kügelgen, Francesco Locatello

We present a unified framework for studying the identifiability of representations learned from simultaneously observed views, such as different data modalities.

Contrastive Learning · Disentanglement

Latent Space Translation via Semantic Alignment

1 code implementation NeurIPS 2023 Valentino Maiorca, Luca Moschella, Antonio Norelli, Marco Fumero, Francesco Locatello, Emanuele Rodolà

While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible.

Translation
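As a rough, self-contained illustration of the general setting above (aligning two latent spaces through a small set of semantically corresponding samples), the sketch below estimates an orthogonal map between spaces with Procrustes analysis. The anchor-based orthogonal transform is a standard choice for this kind of alignment, not necessarily the exact estimator used in the paper, and all names and sizes are illustrative.

```python
import numpy as np

def orthogonal_procrustes(src, tgt):
    """Return the orthogonal R minimizing ||src @ R - tgt||_F for row-aligned anchors."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(0)
d = 64
true_rot, _ = np.linalg.qr(rng.normal(size=(d, d)))      # hidden relation between the two spaces

anchors_a = rng.normal(size=(200, d))                     # anchor embeddings in space A
anchors_b = anchors_a @ true_rot + 0.01 * rng.normal(size=(200, d))  # same anchors in space B

R = orthogonal_procrustes(anchors_a, anchors_b)

queries_a = rng.normal(size=(10, d))                      # new points embedded in space A
translated = queries_a @ R                                # estimated coordinates in space B
print(np.abs(translated - queries_a @ true_rot).max())    # small alignment residual
```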

Shortcuts for causal discovery of nonlinear models by score matching

no code implementations 22 Oct 2023 Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Francesco Locatello

The use of simulated data in the field of causal discovery is ubiquitous due to the scarcity of annotated real data.

Causal Discovery

Unsupervised Open-Vocabulary Object Localization in Videos

no code implementations ICCV 2023 Ke Fan, Zechen Bai, Tianjun Xiao, Dominik Zietlow, Max Horn, Zixu Zhao, Carl-Johann Simon-Gabriel, Mike Zheng Shou, Francesco Locatello, Bernt Schiele, Thomas Brox, Zheng Zhang, Yanwei Fu, Tong He

In this paper, we show that recent advances in video representation learning and pre-trained vision-language models allow for substantial improvements in self-supervised video object localization.

Object · Object Localization · +1

Object-Centric Multiple Object Tracking

1 code implementation ICCV 2023 Zixu Zhao, Jiaze Wang, Max Horn, Yizhuo Ding, Tong He, Zechen Bai, Dominik Zietlow, Carl-Johann Simon-Gabriel, Bing Shuai, Zhuowen Tu, Thomas Brox, Bernt Schiele, Yanwei Fu, Francesco Locatello, Zheng Zhang, Tianjun Xiao

Unsupervised object-centric learning methods allow the partitioning of scenes into entities without additional localization information and are excellent candidates for reducing the annotation burden of multiple-object tracking (MOT) pipelines.

Multiple Object Tracking · Object · +3

Self-Compatibility: Evaluating Causal Discovery without Ground Truth

1 code implementation 18 Jul 2023 Philipp M. Faller, Leena Chennuru Vankadara, Atalanti A. Mastakouri, Francesco Locatello, Dominik Janzing

In this work, we propose a novel method for falsifying the output of a causal discovery algorithm in the absence of ground truth.

Causal Discovery · Model Selection

Grounded Object Centric Learning

no code implementations 18 Jul 2023 Avinash Kori, Francesco Locatello, Fabio De Sousa Ribeiro, Francesca Toni, Ben Glocker

The extraction of modular object-centric representations for downstream tasks is an emerging area of research.

Object · Object Discovery · +3

Benign Overfitting in Deep Neural Networks under Lazy Training

no code implementations 30 May 2023 Zhenyu Zhu, Fanghui Liu, Grigorios G Chrysos, Francesco Locatello, Volkan Cevher

This paper focuses on over-parameterized deep neural networks (DNNs) with ReLU activation functions and proves that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification while obtaining (nearly) zero training error under the lazy training regime.

Learning Theory

Image retrieval outperforms diffusion models on data augmentation

no code implementations 20 Apr 2023 Max F. Burg, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco Locatello, Chris Russell

Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification.

Data Augmentation · Image Retrieval · +2

Scalable Causal Discovery with Score Matching

no code implementations 6 Apr 2023 Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello

This paper demonstrates how to discover the whole causal graph from the second derivative of the log-likelihood in observational non-linear additive Gaussian noise models.

Causal Discovery

Causal Discovery with Score Matching on Additive Models with Arbitrary Noise

no code implementations 6 Apr 2023 Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello

Causal discovery methods are intrinsically constrained by the set of assumptions needed to ensure structure identifiability.

Additive models · Causal Discovery

A General Purpose Neural Architecture for Geospatial Systems

no code implementations 4 Nov 2022 Nasim Rahaman, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, Bernhard Schölkopf

Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications.

Disaster Response · Earth Observation · +2

Self-supervised Amodal Video Object Segmentation

1 code implementation 23 Oct 2022 Jian Yao, Yuxin Hong, Chiyu Wang, Tianjun Xiao, Tong He, Francesco Locatello, David Wipf, Yanwei Fu, Zheng Zhang

The key intuition is that the occluded part of an object can be explained away if that part is visible in other frames, possibly deformed as long as the deformation can be reasonably learned.

Object · Segmentation · +6

Neural Attentive Circuits

no code implementations 14 Oct 2022 Nasim Rahaman, Martin Weiss, Francesco Locatello, Chris Pal, Yoshua Bengio, Bernhard Schölkopf, Li Erran Li, Nicolas Ballas

Recent work has seen the development of general purpose neural architectures that can be trained to perform tasks across diverse data modalities.

Point Cloud Classification · text-classification · +1

Relative representations enable zero-shot latent space communication

no code implementations 30 Sep 2022 Luca Moschella, Valentino Maiorca, Marco Fumero, Antonio Norelli, Francesco Locatello, Emanuele Rodolà

Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations.
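To make the setting above concrete: as we understand the construction, a relative representation projects every latent code onto its cosine similarities with a fixed set of anchor samples, which makes the representation invariant to angle-preserving transformations of the latent space. The sketch below illustrates that invariance with a random stand-in for an encoder; dimensions and anchor counts are arbitrary, and this is not the authors' code.

```python
import numpy as np

def relative_representation(embeddings, anchors):
    """Represent each embedding by its cosine similarities with a fixed anchor set."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return e @ a.T                                   # shape: (n_samples, n_anchors)

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 128))                     # latent codes from some encoder
anchor_idx = rng.choice(len(z), size=300, replace=False)

rel = relative_representation(z, z[anchor_idx])

# An orthogonally transformed copy of the same space yields the same relative representation,
# which is the invariance that enables communication across latent spaces.
q, _ = np.linalg.qr(rng.normal(size=(128, 128)))
rel_rotated = relative_representation(z @ q, z[anchor_idx] @ q)
print(np.allclose(rel, rel_rotated, atol=1e-8))      # True
```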

Assaying Out-Of-Distribution Generalization in Transfer Learning

1 code implementation 19 Jul 2022 Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello

Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs, resulting in different recommendations.

Adversarial Robustness · Out-of-Distribution Generalization · +1

Unsupervised Semantic Segmentation with Self-supervised Object-centric Representations

1 code implementation 11 Jul 2022 Andrii Zadaianchuk, Matthaeus Kleindessner, Yi Zhu, Francesco Locatello, Thomas Brox

In this paper, we show that recent advances in self-supervised feature learning enable unsupervised object discovery and semantic segmentation with a performance that matches the state of the field on supervised semantic segmentation 10 years ago.

Clustering · Object · +3

Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

1 code implementation 9 Apr 2022 Michael Lohaus, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco Locatello, Chris Russell

Based on this observation, we investigate an alternative fairness approach: we add a second classification head to the network to explicitly predict the protected attribute (such as race or gender) alongside the original task.

Attribute · Fairness
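The excerpt above describes adding a second classification head that predicts the protected attribute alongside the original task. A minimal sketch of such a two-head architecture is shown below; module names, sizes, and the loss combination are illustrative placeholders rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    """Shared backbone with one head for the task label and one for the protected attribute."""

    def __init__(self, in_dim=512, hidden=256, n_classes=2, n_protected=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.task_head = nn.Linear(hidden, n_classes)          # original prediction task
        self.protected_head = nn.Linear(hidden, n_protected)   # explicit protected-attribute head

    def forward(self, x):
        h = self.backbone(x)
        return self.task_head(h), self.protected_head(h)

model = TwoHeadClassifier()
x = torch.randn(8, 512)
task_logits, protected_logits = model(x)
# Training would combine a task loss with a protected-attribute loss, e.g.
# loss = F.cross_entropy(task_logits, y) + F.cross_entropy(protected_logits, s)
```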

Leveling Down in Computer Vision: Pareto Inefficiencies in Fair Deep Classifiers

no code implementations CVPR 2022 Dominik Zietlow, Michael Lohaus, Guha Balakrishnan, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Chris Russell

Algorithmic fairness is frequently motivated in terms of a trade-off in which overall performance is decreased so as to improve performance on disadvantaged groups where the algorithm would otherwise be less accurate.

Fairness

Score matching enables causal discovery of nonlinear additive noise models

no code implementations 8 Mar 2022 Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, Francesco Locatello

This paper demonstrates how to recover causal graphs from the score of the data distribution in non-linear additive (Gaussian) noise models.

Causal Discovery
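The score-matching entries above rest on the observation that, in a nonlinear additive Gaussian noise model, the diagonal entry of the Jacobian of the score s(x) = grad log p(x) is constant across samples exactly for leaf variables, so the variable whose diagonal entry has the smallest variance can be peeled off as a leaf and the procedure repeated to obtain a topological order. The toy sketch below illustrates only that criterion on a two-variable SCM with oracle (analytic) access to the score; the papers estimate the score from data, and nothing here is the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 5000, 0.25

# Ground-truth SCM: x1 -> x2 with additive Gaussian noise, x2 = sin(x1) + eps
x1 = rng.normal(size=n)
x2 = np.sin(x1) + np.sqrt(sigma2) * rng.normal(size=n)

# Analytic diagonal of the Jacobian of the score s(x) = grad log p(x1, x2):
#   d s1 / d x1 varies with the sample, while d s2 / d x2 = -1 / sigma2 is constant.
ds1_dx1 = -1.0 + (-np.cos(x1) ** 2 - (x2 - np.sin(x1)) * np.sin(x1)) / sigma2
ds2_dx2 = np.full(n, -1.0 / sigma2)

# Leaf criterion: a leaf's diagonal Jacobian entry has zero variance across samples.
variances = [ds1_dx1.var(), ds2_dx2.var()]
print(variances)                                                  # [positive, 0.0]
print("estimated leaf: x%d" % (int(np.argmin(variances)) + 1))    # x2
```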

Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization

1 code implementation 26 Feb 2022 Gideon Dresdner, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello, Volkan Cevher, Alp Yurtsever

We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.

Clustering · Matrix Completion

Compositional Multi-Object Reinforcement Learning with Linear Relation Networks

no code implementations 31 Jan 2022 Davide Mambelli, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, Francesco Locatello

Although reinforcement learning has seen remarkable progress over the last years, solving robust dexterous object-manipulation tasks in multi-object settings remains a challenge.

Object · reinforcement-learning · +2

You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction

no code implementations ICLR 2022 Osama Makansi, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Dominik Janzing, Thomas Brox, Bernhard Schölkopf

Applying this procedure to state-of-the-art trajectory prediction methods on standard benchmark datasets shows that they are, in fact, unable to reason about interactions.

Attribute · Trajectory Prediction

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation ICLR 2022 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

The Role of Pretrained Representations for the OOD Generalization of Reinforcement Learning Agents

no code implementations ICLR 2022 Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.

Reinforcement Learning (RL) · Representation Learning

Generalization and Robustness Implications in Object-Centric Learning

1 code implementation 1 Jul 2021 Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello

The idea behind object-centric representation learning is that natural scenes can better be modeled as compositions of objects and their relations as opposed to distributed representations.

Inductive Bias · Object · +3

Neighborhood Contrastive Learning Applied to Online Patient Monitoring

1 code implementation 9 Jun 2021 Hugo Yèche, Gideon Dresdner, Francesco Locatello, Matthias Hüser, Gunnar Rätsch

Intensive care units (ICUs) are increasingly looking towards machine learning for methods to provide online monitoring of critically ill patients.

BIG-bench Machine Learning · Contrastive Learning · +3

Boosting Variational Inference With Locally Adaptive Step-Sizes

no code implementations 19 May 2021 Gideon Dresdner, Saurav Shekhar, Fabian Pedregosa, Francesco Locatello, Gunnar Rätsch

Variational Inference makes a trade-off between the capacity of the variational family and the tractability of finding an approximate posterior distribution.

Variational Inference

A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation

no code implementations 27 Oct 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

On the Transfer of Disentangled Representations in Realistic Settings

no code implementations ICLR 2021 Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf

Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.

Disentanglement

A Commentary on the Unsupervised Learning of Disentangled Representations

no code implementations 28 Jul 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.

Object-Centric Learning with Slot Attention

8 code implementations NeurIPS 2020 Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, Thomas Kipf

Learning object-centric representations of complex scenes is a promising step towards enabling efficient abstract reasoning from low-level perceptual features.

Object · Object Discovery · +1
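For readers unfamiliar with the method named above, the following is a minimal PyTorch sketch of the slot-attention update (queries from slots, softmax over slots so they compete for input features, a weighted mean over inputs, then a GRU update with an MLP residual). Dimensions, initialization details, and the surrounding encoder/decoder are simplified, so treat it as an illustration rather than the reference implementation.

```python
import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Minimal slot-attention module: K slots compete to explain N input features."""

    def __init__(self, num_slots=4, dim=64, iters=3, eps=1e-8):
        super().__init__()
        self.num_slots, self.iters, self.eps, self.scale = num_slots, iters, eps, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)
        self.norm_mlp = nn.LayerNorm(dim)

    def forward(self, inputs):                                   # inputs: (B, N, dim)
        b, _, d = inputs.shape
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)

        # Sample initial slots from a learned Gaussian.
        slots = self.slots_mu + self.slots_logsigma.exp() * torch.randn(
            b, self.num_slots, d, device=inputs.device)

        for _ in range(self.iters):
            slots_prev = slots
            q = self.to_q(self.norm_slots(slots))

            # Softmax over slots: slots compete for each input feature.
            logits = torch.einsum('bnd,bkd->bnk', k, q) * self.scale
            attn = logits.softmax(dim=-1) + self.eps
            attn = attn / attn.sum(dim=1, keepdim=True)          # weighted mean over inputs

            updates = torch.einsum('bnk,bnd->bkd', attn, v)      # (B, K, dim)
            slots = self.gru(updates.reshape(-1, d),
                             slots_prev.reshape(-1, d)).reshape(b, self.num_slots, d)
            slots = slots + self.mlp(self.norm_mlp(slots))
        return slots

features = torch.randn(2, 32 * 32, 64)    # e.g. a flattened CNN feature map
slots = SlotAttention()(features)         # (2, 4, 64): one vector per slot/object
```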

Disentangling Factors of Variations Using Few Labels

no code implementations ICLR Workshop LLD 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not suffice to consistently learn disentangled representations.

Disentanglement · Model Selection

Weakly-Supervised Disentanglement Without Compromises

3 code implementations ICML 2020 Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen

Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.

Disentanglement · Fairness

On the Fairness of Disentangled Representations

no code implementations NeurIPS 2019 Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem

Recently there has been a significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios and faster learning on downstream tasks.

Disentanglement · Fairness

The Incomplete Rosetta Stone Problem: Identifiability Results for Multi-View Nonlinear ICA

no code implementations 16 May 2019 Luigi Gresele, Paul K. Rubenstein, Arash Mehrjou, Francesco Locatello, Bernhard Schölkopf

In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available.

Disentangling Factors of Variation Using Few Labels

no code implementations 3 May 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not suffice to consistently learn disentangled representations.

Disentanglement · Model Selection

Stochastic Frank-Wolfe for Composite Convex Minimization

1 code implementation NeurIPS 2019 Francesco Locatello, Alp Yurtsever, Olivier Fercoq, Volkan Cevher

A broad class of convex optimization problems can be formulated as a semidefinite program (SDP), minimization of a convex function over the positive-semidefinite cone subject to some affine constraints.

Stochastic Optimization

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

8 code implementations ICML 2019 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

6 code implementations ICLR 2019 Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch

We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.

Clustering · Dimensionality Reduction · +3

Boosting Black Box Variational Inference

1 code implementation NeurIPS 2018 Francesco Locatello, Gideon Dresdner, Rajiv Khanna, Isabel Valera, Gunnar Rätsch

Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.

Variational Inference
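For context on the stopping criterion mentioned in the boosting entry above: the classic Frank-Wolfe duality gap for a convex objective $f$ over a feasible set $\mathcal{C}$ is a standard quantity (textbook background, not a quote from the paper),

$$g(x) \;=\; \max_{s \in \mathcal{C}} \langle \nabla f(x),\, x - s \rangle \;\ge\; f(x) - \min_{y \in \mathcal{C}} f(y),$$

so stopping once $g(x) \le \epsilon$ certifies that the current iterate is $\epsilon$-suboptimal.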

Competitive Training of Mixtures of Independent Deep Generative Models

no code implementations 30 Apr 2018 Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.

Clustering

On Matching Pursuit and Coordinate Descent

no code implementations ICML 2018 Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi

Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.

Boosting Variational Inference: an Optimization Perspective

no code implementations 5 Aug 2017 Francesco Locatello, Rajiv Khanna, Joydeep Ghosh, Gunnar Rätsch

Variational inference is a popular technique to approximate a possibly intractable Bayesian posterior with a more tractable one.

Variational Inference

Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

no code implementations NeurIPS 2017 Francesco Locatello, Michael Tschannen, Gunnar Rätsch, Martin Jaggi

Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees.

A Unified Optimization View on Generalized Matching Pursuit and Frank-Wolfe

no code implementations 21 Feb 2017 Francesco Locatello, Rajiv Khanna, Michael Tschannen, Martin Jaggi

Two of the most fundamental prototypes of greedy optimization are the matching pursuit and Frank-Wolfe algorithms.
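Since the entry above contrasts the two greedy prototypes, here is a compact sketch of both update rules on a toy quadratic: Frank-Wolfe queries a linear minimization oracle over the feasible set (here the probability simplex) and moves by a convex combination, while matching pursuit greedily adds a step along the dictionary atom most correlated with the negative gradient. The simplex constraint, the standard-basis dictionary, and the step rules are illustrative choices, not the setup of any specific paper above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
A = rng.normal(size=(d, d))
Q = A.T @ A + np.eye(d)                      # positive-definite Hessian of a toy quadratic
b = rng.normal(size=d)
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

atoms = np.eye(d)                            # dictionary of atoms = standard basis vectors

# Frank-Wolfe over the probability simplex: LMO + convex-combination step 2/(t+2).
x = atoms[0].copy()
for t in range(200):
    g = grad(x)
    s = atoms[np.argmin(atoms @ g)]          # simplex vertex minimizing <grad, s>
    gamma = 2.0 / (t + 2.0)
    x = (1 - gamma) * x + gamma * s
print("Frank-Wolfe objective:", f(x))

# Matching pursuit: pick the atom most aligned with the negative gradient,
# then take an exact line-search step (closed form for a quadratic).
y = np.zeros(d)
for t in range(200):
    g = grad(y)
    z = atoms[np.argmax(np.abs(atoms @ g))]
    step = -(z @ g) / (z @ Q @ z)
    y = y + step * z
print("Matching-pursuit objective:", f(y))
```

With the standard basis as the dictionary, the matching-pursuit loop reduces to greedy coordinate descent with exact line search, which is precisely the connection explored in the "On Matching Pursuit and Coordinate Descent" entry above.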
