Search Results for author: Antoine Chatalic

Found 7 papers, 4 papers with code

Linear quadratic control of nonlinear systems with Koopman operator learning and the Nyström method

1 code implementation • 5 Mar 2024 • Edoardo Caldarelli, Antoine Chatalic, Adrià Colomé, Cesare Molinari, Carlos Ocampo-Martinez, Carme Torras, Lorenzo Rosasco

In this paper, we study how the Koopman operator framework can be combined with kernel methods to effectively control nonlinear dynamical systems.

Operator learning
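As a toy illustration of the Koopman idea behind this paper (lifting nonlinear dynamics into coordinates where they evolve linearly), here is a minimal EDMD-like sketch. The dynamics, observables, and fitting procedure below are illustrative assumptions, not the paper's kernel/Nyström method or its LQR design:

```python
import numpy as np

# Toy nonlinear system: x' = 0.9x, y' = 0.5y + 0.2x^2.
# With observables (x, y, x^2) the lifted dynamics are exactly linear,
# so a least-squares fit recovers the (finite) Koopman matrix.
rng = np.random.default_rng(0)
S0 = rng.uniform(-1, 1, (2, 300))                  # 300 sampled states (columns)
x, y = S0
S1 = np.stack([0.9 * x, 0.5 * y + 0.2 * x ** 2])   # one-step successors

def lift(s):
    # Hand-picked observables in which these dynamics close linearly
    return np.stack([s[0], s[1], s[0] ** 2])

Phi0, Phi1 = lift(S0), lift(S1)
A = Phi1 @ np.linalg.pinv(Phi0)                    # least-squares linear operator
# In the lifted coordinates the nonlinear dynamics are linear:
residual = np.abs(A @ Phi0 - Phi1).max()
```

A linear-quadratic controller can then be designed against `A` in the lifted space, which is the broad strategy the title refers to.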

Efficient Numerical Integration in Reproducing Kernel Hilbert Spaces via Leverage Scores Sampling

1 code implementation • 22 Nov 2023 • Antoine Chatalic, Nicolas Schreuder, Ernesto de Vito, Lorenzo Rosasco

In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand.

Numerical Integration
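To make the setup of this abstract concrete, here is a minimal quadrature sketch: estimating an integral from pointwise evaluations at sampled nodes. It uses plain equal-weight Monte Carlo for illustration only; the paper's leverage-score sampling and RKHS weighting are not reproduced:

```python
import numpy as np

# Approximate E_{x ~ N(0,1)}[f(x)] from pointwise evaluations of f.
rng = np.random.default_rng(0)

def f(x):
    return x ** 2          # integrand; the true integral under N(0,1) is 1

nodes = rng.standard_normal(10_000)   # evaluation nodes drawn from the target
estimate = np.mean(f(nodes))          # equal-weight quadrature rule
```

The paper studies how to choose the nodes and weights more cleverly than this uniform baseline.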

M$^2$M: A general method to perform various data analysis tasks from a differentially private sketch

no code implementations • 25 Nov 2022 • Florimond Houssiau, Vincent Schellekens, Antoine Chatalic, Shreyas Kumar Annamraju, Yves-Alexandre de Montjoye

In this paper, we introduce the generic moment-to-moment (M$^2$M) method to perform a wide range of data exploration tasks from a single private sketch.

Nyström Kernel Mean Embeddings

no code implementations • 31 Jan 2022 • Antoine Chatalic, Nicolas Schreuder, Alessandro Rudi, Lorenzo Rosasco

Our main result is an upper bound on the approximation error of this procedure.
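A Nyström kernel mean embedding approximates the mean embedding of a distribution within the span of kernel functions at a few landmark points. The following sketch is purely illustrative (kernel, landmark choice, and regularization are assumptions, not the paper's construction or its error bound):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))                  # data points
Z = X[rng.choice(len(X), 20, replace=False)]       # Nystrom landmarks

def gaussian_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K_zz = gaussian_kernel(Z, Z)                       # m x m landmark Gram matrix
K_zx = gaussian_kernel(Z, X)                       # m x n cross-kernel
# Coefficients of the projected mean embedding in span{k(., z_i)}:
alpha = np.linalg.solve(K_zz + 1e-8 * np.eye(len(Z)), K_zx.mean(axis=1))
# Evaluate the approximate embedding at a test point t:
t = np.zeros((1, 2))
approx = gaussian_kernel(t, Z) @ alpha             # ~ (1/n) sum_i k(t, x_i)
exact = gaussian_kernel(t, X).mean()
```

Storing only the `m`-dimensional coefficient vector `alpha` (rather than all `n` points) is what makes the embedding cheap; the paper's contribution is bounding the resulting approximation error.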

Mean Nyström Embeddings for Adaptive Compressive Learning

1 code implementation • 21 Oct 2021 • Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco

Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset into a single mean embedding (the sketch), i.e., a vector of generalized moments.
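As a toy illustration of what such a sketch looks like, here is a random-Fourier-feature mean embedding (a common sketch choice; the sketch size, feature map, and the adaptive Nyström construction of the paper are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 3))   # dataset: n points in R^3
m = 64                                 # sketch size (m << n)
W = rng.standard_normal((3, m))        # random frequencies

def sketch(data):
    # Each coordinate is a generalized moment, (1/n) sum_i exp(i w_j^T x_i)
    return np.exp(1j * data @ W).mean(axis=0)

s = sketch(X)                          # single m-dimensional sketch vector
# Sketches of disjoint chunks average to the full-data sketch,
# which is what makes sketching streamable and distributable:
s_merged = 0.5 * (sketch(X[:5000]) + sketch(X[5000:]))
```

Learning then proceeds from `s` alone, without revisiting the `n` original points.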

Sketching Datasets for Large-Scale Learning (long version)

no code implementations • 4 Aug 2020 • Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter

This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.

BIG-bench Machine Learning • Clustering +1
