no code implementations • 3 Feb 2024 • Adrien Banse, Jan Kreischer, Xavier Oliva i Jürgens
Federated learning (FL), a type of distributed machine learning, can largely prevent clients' private data from being shared among different parties.
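A minimal sketch of the core FL idea: clients train locally and share only model parameters, never raw data. This is a FedAvg-style aggregation on a toy least-squares task; the specific protocol, model, and learning rates here are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_update(w, X, y, lr=0.1, epochs=5):
    """Local gradient steps on a client's private least-squares data."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Three clients with private datasets of different sizes, all drawn
# from the same underlying linear model (hypothetical setup).
w_true = np.array([2.0, -1.0])
clients = []
for n in (50, 100, 150):
    X = rng.standard_normal((n, 2))
    clients.append((X, X @ w_true + 0.01 * rng.standard_normal(n)))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    # Server aggregates parameters weighted by local dataset size;
    # raw (X, y) never leaves a client.
    w = np.average(updates, axis=0, weights=sizes)

print(np.allclose(w, w_true, atol=0.1))
```

Only the parameter vectors cross the client/server boundary, which is the privacy property the abstract refers to.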
no code implementations • 30 Mar 2023 • Adrien Banse, Licio Romao, Alessandro Abate, Raphaël M. Jungers
In order to learn the optimal structure, we define a Kantorovich-inspired metric between Markov chains, and we use it as a loss function.
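The paper's exact metric is not given in this snippet; as a hedged stand-in, the sketch below defines a simple Kantorovich-style loss between two finite Markov chains on the same ordered state space: a weighted average of 1-Wasserstein distances between corresponding rows of the transition matrices. All names and the uniform weighting are illustrative assumptions.

```python
import numpy as np

def w1_discrete(p, q):
    """1-Wasserstein distance between two distributions on the ordered
    states 0..n-1 with unit ground distance: sum of |CDF differences|."""
    return float(np.abs(np.cumsum(p - q))[:-1].sum())

def chain_distance(P, Q, weights=None):
    """Hypothetical Kantorovich-style loss between Markov chains with
    transition matrices P and Q: weighted average of row-wise W1."""
    n = P.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else weights
    return float(sum(w[i] * w1_discrete(P[i], Q[i]) for i in range(n)))

P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.7, 0.3], [0.2, 0.8]])
print(chain_distance(P, P))  # 0.0 for identical chains
print(chain_distance(P, Q))  # positive for distinct chains
```

Because it is differentiable in the transition probabilities, such a distance can serve as a loss when optimising over abstraction structures.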
no code implementations • 10 Feb 2023 • Adrien Banse, Zheming Wang, Raphaël M. Jungers
We present a data-driven framework based on Lyapunov theory to provide stability guarantees for a family of hybrid systems.
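A minimal sketch of the data-driven Lyapunov idea, under strong simplifying assumptions: sample one-step transitions of an unknown hybrid system (here, a hypothetical two-mode piecewise-linear system) and check that a candidate quadratic Lyapunov function decreases along every sampled transition. The paper turns such finite-sample evidence into formal probabilistic stability guarantees; this sketch only computes the empirical decrease.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown hybrid system (used only to generate samples): two linear
# modes, with the active mode switching on the sign of the first state.
A1 = np.array([[0.6, 0.2], [0.0, 0.5]])
A2 = np.array([[0.5, -0.1], [0.3, 0.6]])
def step(x):
    return (A1 if x[0] >= 0 else A2) @ x

def lyapunov_decrease_ratio(P, samples):
    """Largest ratio V(x+)/V(x) over sampled transitions for the
    candidate V(x) = x' P x; a value below 1 on many samples is
    empirical evidence of stability (not yet a certificate)."""
    V = lambda x: float(x @ P @ x)
    return max(V(step(x)) / V(x) for x in samples)

P = np.eye(2)  # candidate Lyapunov function V(x) = ||x||^2
samples = [rng.standard_normal(2) for _ in range(200)]
gamma = lyapunov_decrease_ratio(P, samples)
print(gamma < 1.0)  # V decreases on all sampled transitions
```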
no code implementations • 4 Dec 2022 • Adrien Banse, Licio Romao, Alessandro Abate, Raphaël M. Jungers
We propose a sample-based, sequential method to abstract a (potentially black-box) dynamical system with a sequence of memory-dependent Markov chains of increasing size.
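One simple way to realise "Markov chains of increasing size with memory" is to take as states the length-`m` words of an observed symbolic trace and estimate next-symbol distributions empirically; increasing `m` grows the chain and refines the abstraction. This is a hedged illustration of the general idea, not the paper's construction.

```python
from collections import Counter, defaultdict

def empirical_chain(trace, memory):
    """Empirical Markov chain whose states are length-`memory` windows
    of the observed symbol sequence, mapped to next-symbol frequencies."""
    counts = defaultdict(Counter)
    for i in range(len(trace) - memory):
        state = tuple(trace[i:i + memory])
        counts[state][trace[i + memory]] += 1
    # Normalise counts into conditional next-symbol distributions.
    return {s: {a: c / sum(nxt.values()) for a, c in nxt.items()}
            for s, nxt in counts.items()}

# Toy symbolic trace of a black-box system: longer memory yields a
# larger chain that captures longer-range dependencies.
trace = "abab" * 50 + "aabb" * 50
for m in (1, 2, 3):
    print(m, len(empirical_chain(trace, m)))  # chain size grows with m
```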
no code implementations • 2 May 2022 • Adrien Banse, Zheming Wang, Raphaël M. Jungers
More precisely, our contribution is the following: we derive a probabilistic upper bound on the constrained joint spectral radius (CJSR) of an unknown constrained switching linear system (CSLS) from a finite number of observations.
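As a rough sketch of the quantity involved: sample admissible switching sequences of a small CSLS (the modes and constraint automaton below are hypothetical), take the largest product norm, and rescale by the sequence length. This gives a naive finite-sample estimate; the paper's contribution is precisely the extra scaling that turns such an estimate into a probabilistic upper bound on the CJSR.

```python
import numpy as np

rng = np.random.default_rng(1)

# Modes of a switching linear system and the admissible transitions
# of a (hypothetical) constraint automaton on the switching signal.
modes = [np.array([[0.8, 0.5], [0.0, 0.4]]),
         np.array([[0.3, 0.0], [0.6, 0.7]])]
allowed = {0: [0, 1], 1: [0]}  # mode 1 cannot follow itself

def sampled_jsr_estimate(length, n_samples):
    """Naive CJSR estimate: max spectral norm of products along
    randomly sampled admissible mode sequences, rescaled by 1/length."""
    best = 0.0
    for _ in range(n_samples):
        sigma, M = 0, np.eye(2)
        for _ in range(length):
            sigma = rng.choice(allowed[sigma])  # respect the constraint
            M = modes[sigma] @ M
        best = max(best, np.linalg.norm(M, 2))
    return best ** (1.0 / length)

print(round(sampled_jsr_estimate(10, 200), 3))
```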
no code implementations • 2 May 2022 • Adrien Banse, Zheming Wang, Raphaël M. Jungers
By generalizing previous results on arbitrary switching linear systems, we show that, by sampling a finite number of observations, we are able to construct an approximate Lyapunov function for the underlying system.
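A hedged illustration of constructing an approximate Lyapunov function from samples alone: observe one-step pairs (x, x+) of an arbitrarily switching system, then search for a quadratic form that contracts on every observed pair. The crude random search below is a stand-in for the paper's principled construction; the system and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two modes of an arbitrary switching linear system; the data are
# one-step observations (x, A_sigma x) under random switching.
A = [np.array([[0.5, 0.4], [0.0, 0.6]]),
     np.array([[0.6, 0.0], [-0.4, 0.5]])]
pairs = []
for _ in range(300):
    x = rng.standard_normal(2)
    pairs.append((x, A[rng.integers(2)] @ x))

def worst_ratio(P):
    """Largest one-step growth of V(x) = x' P x over the samples."""
    return max((y @ P @ y) / (x @ P @ x) for x, y in pairs)

# Crude random search over positive definite candidates P = L L' + eps*I,
# a simple stand-in for a principled data-driven construction.
best_P, best_g = np.eye(2), worst_ratio(np.eye(2))
for _ in range(500):
    L = rng.standard_normal((2, 2))
    P = L @ L.T + 0.1 * np.eye(2)
    g = worst_ratio(P)
    if g < best_g:
        best_P, best_g = P, g

print(best_g < 1.0)  # V(x) = x' best_P x contracts on all samples
```

A contraction on all samples is only an approximate certificate; relating it to the true system is where the probabilistic guarantees enter.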