1 code implementation • 5 Mar 2024 • Edoardo Caldarelli, Antoine Chatalic, Adrià Colomé, Cesare Molinari, Carlos Ocampo-Martinez, Carme Torras, Lorenzo Rosasco
In this paper, we study how the Koopman operator framework can be combined with kernel methods to effectively control nonlinear dynamical systems.
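Below is a minimal, illustrative sketch of the kernel-EDMD idea underlying such approaches: lift snapshot pairs of a nonlinear system with random Fourier features (an approximation of a Gaussian kernel) and fit a linear Koopman matrix by least squares. The toy dynamics, feature map, and feature dimension are assumptions chosen for illustration; this is not the paper's control algorithm.

```python
# Illustrative kernel-EDMD-style sketch (not the paper's method): lift snapshot
# pairs with random Fourier features and fit a linear Koopman matrix.
import numpy as np

rng = np.random.default_rng(0)

def lift(X, W, b):
    """Random Fourier feature map approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

# Toy nonlinear dynamics x_{t+1} = f(x_t), used to generate snapshot pairs.
f = lambda x: np.array([0.9 * x[0], 0.8 * x[1] + 0.2 * x[0] ** 2])
X = rng.normal(size=(500, 2))
Y = np.array([f(x) for x in X])

d_feat = 200  # number of random features (assumed hyperparameter)
W = rng.normal(size=(2, d_feat))
b = rng.uniform(0, 2 * np.pi, size=d_feat)

Phi_X, Phi_Y = lift(X, W, b), lift(Y, W, b)
# The Koopman matrix K acts linearly on lifted coordinates: Phi_Y ≈ Phi_X @ K.
K, *_ = np.linalg.lstsq(Phi_X, Phi_Y, rcond=None)

# One-step prediction in feature space for a new state.
x0 = np.array([0.5, -0.3])
phi_next = lift(x0[None, :], W, b) @ K
```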
1 code implementation • 22 Nov 2023 • Antoine Chatalic, Nicolas Schreuder, Ernesto de Vito, Lorenzo Rosasco
In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand.
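As a toy illustration of the problem, the integral is approximated by a weighted sum of pointwise evaluations of the integrand; plain Monte Carlo with uniform weights is shown below, whereas kernel-based quadrature rules replace the uniform weights with weights computed from the evaluation points. The integrand and target measure here are arbitrary choices, not from the paper.

```python
# Approximate E_mu[f] = ∫ f dmu by a weighted sum of pointwise evaluations.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2             # integrand, accessed only through evaluations
x = rng.standard_normal(10_000)  # points drawn from the target measure N(0, 1)
w = np.full(x.size, 1.0 / x.size)  # uniform weights (plain Monte Carlo)

estimate = np.sum(w * f(x))
print(estimate)  # ≈ 1.0, the exact value of E[x^2] under N(0, 1)
```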
1 code implementation • NeurIPS 2023 • Giacomo Meanti, Antoine Chatalic, Vladimir R. Kostic, Pietro Novelli, Massimiliano Pontil, Lorenzo Rosasco
Our empirical and theoretical analysis shows that the proposed estimators provide a sound and efficient way to learn large-scale dynamical systems.
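For context, the sketch below shows the standard Nyström approximation of a kernel matrix from a small set of landmark points, the kind of subsampling that makes kernel estimators scale to large datasets. It is only an illustration of the general technique, not the paper's Koopman operator estimator; kernel, sizes, and landmark selection are assumptions.

```python
# Standard Nyström approximation: K ≈ K_nm @ pinv(K_mm) @ K_nm.T with m << n.
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

n, m = 2000, 100
X = rng.normal(size=(n, 3))
landmarks = X[rng.choice(n, size=m, replace=False)]  # uniformly sampled landmarks

K_nm = gauss_kernel(X, landmarks)          # n x m cross-kernel
K_mm = gauss_kernel(landmarks, landmarks)  # m x m landmark kernel
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T  # rank-m approximation of K
```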
no code implementations • 25 Nov 2022 • Florimond Houssiau, Vincent Schellekens, Antoine Chatalic, Shreyas Kumar Annamraju, Yves-Alexandre de Montjoye
In this paper, we introduce the generic moment-to-moment (M²M) method to perform a wide range of data exploration tasks from a single private sketch.
no code implementations • 31 Jan 2022 • Antoine Chatalic, Nicolas Schreuder, Alessandro Rudi, Lorenzo Rosasco
Our main result is an upper bound on the approximation error of this procedure.
1 code implementation • 21 Oct 2021 • Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco
Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments.
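A minimal example of such a sketch is shown below: the dataset is reduced to the empirical mean of a random feature map, i.e., a fixed-size vector of generalized moments whose length is independent of the number of samples. The feature map (complex exponentials with Gaussian frequencies) and sketch size are illustrative choices, not the paper's exact construction.

```python
# Sketch a dataset to a single mean embedding of random features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 5))  # large dataset: n samples, d = 5 features

m = 256                            # sketch size, independent of n
Omega = rng.normal(size=(5, m))    # random frequencies (assumed distribution)
sketch = np.exp(1j * X @ Omega).mean(axis=0)  # vector of generalized moments

# Only `sketch` (m complex numbers) needs to be kept; learning (e.g., clustering)
# is then carried out from the sketch rather than from the full dataset X.
print(sketch.shape)  # (256,)
```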
no code implementations • 4 Aug 2020 • Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e. g., clustering, classification, or regression) is performed.