no code implementations • 10 Apr 2024 • Meyer Scetbon, Joel Jennings, Agrin Hilmkil, Cheng Zhang, Chao Ma
Based on this, we design a two-stage causal generative model that first infers the causal order from observations in a zero-shot manner, bypassing the search, and then learns the generative fixed-point SCM on the ordered variables.
no code implementations • 6 Feb 2024 • Tarun Gupta, Wenbo Gong, Chao Ma, Nick Pawlowski, Agrin Hilmkil, Meyer Scetbon, Ade Famoti, Ashley Juan Llorens, Jianfeng Gao, Stefan Bauer, Danica Kragic, Bernhard Schölkopf, Cheng Zhang
This paper focuses on the prospects of building foundation world models for the upcoming generation of embodied agents and presents a novel viewpoint on the significance of causality within them.
no code implementations • 1 Aug 2023 • Elvis Dohmatob, Meyer Scetbon
In this paper, we investigate the impact of test-time adversarial attacks on linear regression models and determine the optimal level of robustness that any model can reach while maintaining a given level of standard predictive performance (accuracy).
no code implementations • 26 Apr 2023 • Meyer Scetbon
In this note, we propose polynomial-time algorithms solving the Monge and Kantorovich formulations of the $\infty$-optimal transport problem in the discrete and finite setting.
no code implementations • 31 Jan 2023 • Meyer Scetbon, Elvis Dohmatob
However, we show that this strategy can be arbitrarily sub-optimal in the case of general Mahalanobis attacks.
no code implementations • 24 May 2022 • Meyer Scetbon, Marco Cuturi
The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g., single-cell genomics) or used to improve more complex methods (e.g., balanced attention in transformers or self-supervised learning).
no code implementations • 31 Dec 2021 • Nicholas J. Irons, Meyer Scetbon, Soumik Pal, Zaid Harchaoui
Triangular flows, also known as Knöthe–Rosenblatt measure couplings, comprise an important building block of normalizing flow models for generative modeling and density estimation, including popular autoregressive flow models such as real-valued non-volume preserving transformation models (Real NVP).
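A minimal sketch of the triangular-map idea, for the one case where it is closed-form: when the target is Gaussian, the Knöthe–Rosenblatt map from a standard normal is linear and lower triangular, given by the Cholesky factor of the target covariance (the specific `mu` and `Sigma` below are illustrative, not from the paper).

```python
import numpy as np

# Knöthe–Rosenblatt map for a Gaussian target: component i of the map
# depends only on z_1, ..., z_i, i.e. the map is lower triangular.
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)           # lower-triangular factor, Sigma = L @ L.T
Z = rng.standard_normal((100_000, 2))   # standard normal reference samples
X = Z @ L.T + mu                        # pushforward samples ~ N(mu, Sigma)
```

For non-Gaussian targets the map is built the same way conceptually: first transport the first marginal, then each conditional in turn, which is exactly the structure autoregressive flows such as Real NVP parameterize with neural networks.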
1 code implementation • 28 Oct 2021 • Meyer Scetbon, Laurent Meunier, Yaniv Romano
We propose a new conditional dependence measure and a statistical test for conditional independence.
1 code implementation • NeurIPS 2021 • Meyer Scetbon, Gabriel Peyré, Marco Cuturi
The ability to align points across two related yet incomparable point clouds (e.g., living in different spaces) plays an important role in machine learning.
1 code implementation • 8 Mar 2021 • Meyer Scetbon, Marco Cuturi, Gabriel Peyré
Because matrix-vector products are pervasive in the Sinkhorn algorithm, several works have proposed to approximate kernel matrices appearing in its iterations using low-rank factors.
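The mechanism can be sketched in a few lines (a generic illustration, not the paper's exact algorithm): the Sinkhorn scaling updates only touch the Gibbs kernel K = exp(-C/ε) through products Kv and Kᵀu, so substituting a factorization K ≈ U Vᵀ drops the per-iteration cost from O(nm) to O((n+m)r).

```python
import numpy as np

def sinkhorn_lowrank(a, b, U, V, n_iter=200):
    """Sinkhorn scaling iterations where the Gibbs kernel K = exp(-C/eps)
    is replaced by a low-rank factorization K ~= U @ V.T (U: n x r, V: m x r),
    so each matrix-vector product costs O((n+m) r) instead of O(n m)."""
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (U @ (V.T @ v))   # K v computed through the factors
        v = b / (V @ (U.T @ u))   # K^T u computed through the factors
    return u, v                   # implicit plan: diag(u) @ (U @ V.T) @ diag(v)
```

On a small problem one can factor K exactly (e.g. via an SVD) and check that the resulting plan matches both marginals; with a genuinely low-rank approximation the marginals are matched up to the approximation error of the factors.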
no code implementations • 13 Feb 2021 • Laurent Meunier, Meyer Scetbon, Rafael Pinot, Jamal Atif, Yann Chevaleyre
This paper tackles the problem of adversarial examples from a game theoretic point of view.
no code implementations • 12 Jun 2020 • Meyer Scetbon, Laurent Meunier, Jamal Atif, Marco Cuturi
When there is only one agent, we recover the Optimal Transport problem.
1 code implementation • NeurIPS 2020 • Meyer Scetbon, Marco Cuturi
Although Sinkhorn divergences are now routinely used in data sciences to compare probability distributions, the computational effort required to compute them remains expensive, growing in general quadratically in the size $n$ of the support of these distributions.
no code implementations • ICML 2020 • Meyer Scetbon, Zaid Harchaoui
We present a description of the function space and the smoothness class associated with a convolutional network using the machinery of reproducing kernel Hilbert spaces.
no code implementations • 28 Feb 2020 • Meyer Scetbon, Zaid Harchaoui
We present eigenvalue decay estimates of integral operators associated with compositional dot-product kernels.
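The object being studied can be probed numerically with a toy example (illustrative choices of kernel and sample size, not taken from the paper): for a dot-product kernel k(x, y) = f(⟨x, y⟩) on the sphere, the eigenvalues of the empirical integral operator are approximated by the scaled spectrum of the Gram matrix, and their decay is visible directly.

```python
import numpy as np

# Empirically inspect eigenvalue decay for a compositional dot-product
# kernel k(x, y) = f(<x, y>) with f(t) = exp(t - 1), points on the sphere.
rng = np.random.default_rng(0)
n, d = 300, 5
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # project samples to the unit sphere
G = np.exp(X @ X.T - 1.0)                        # Gram matrix of the kernel
eig = np.linalg.eigvalsh(G)[::-1] / n            # descending empirical spectrum
```

The coefficients of the power series of f control how fast `eig` decays, which is the quantity the estimates in the paper bound.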
1 code implementation • NeurIPS 2019 • Meyer Scetbon, Gael Varoquaux
Here, we show that $L^p$ distances (with $p\geq 1$) between these distribution representatives give metrics on the space of distributions that are well suited to detecting differences between distributions, as they metrize weak convergence.
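As a rough illustration of the idea (the function name, the Gaussian smoothing, and the finite set of evaluation locations are assumptions for this sketch, not the paper's exact estimator): represent each sample by a smoothed mean embedding, evaluate it at a few locations, and take an $L^p$ distance between the two vectors of values.

```python
import numpy as np

def lp_distance_of_embeddings(X, Y, T, p=1, gamma=1.0):
    """L^p distance between Gaussian-kernel mean embeddings of two samples,
    evaluated at a finite set of locations T (a cheap finite-dimensional
    proxy for the full function-space L^p norm)."""
    def embed(S):
        # mean embedding mu_S(t) = (1/|S|) sum_x exp(-gamma ||t - x||^2)
        d2 = ((T[:, None, :] - S[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2).mean(axis=1)
    return (np.abs(embed(X) - embed(Y)) ** p).sum() ** (1.0 / p)
```

Two samples from the same distribution give a small value (driven only by sampling noise), while a shifted sample gives a markedly larger one, which is the detection behavior the metric property guarantees.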
no code implementations • 28 Sep 2019 • Meyer Scetbon, Michael Elad, Peyman Milanfar
The question we address in this paper is whether K-SVD was brought to its peak in its original conception, or whether it can be made competitive again.