no code implementations • 4 Oct 2018 • Orlando Romero, Sarthak Chatterjee, Sérgio Pequito
Furthermore, we propose to assess its convergence as asymptotic stability in the sense of Lyapunov.
no code implementations • 3 Mar 2019 • Sarthak Chatterjee, Orlando Romero, Sérgio Pequito
The Expectation-Maximization (EM) algorithm is one of the most popular methods used to solve the problem of parametric distribution-based clustering in unsupervised learning.
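As a concrete illustration of the kind of parametric distribution-based clustering the abstract refers to, here is a minimal EM sketch for a two-component 1-D Gaussian mixture; the component count, initialization, and synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    """Density of N(mu, var) evaluated elementwise at x."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm(x, n_iter=50):
    # Crude but deterministic initialization (an assumption for this sketch).
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.stack([pi[k] * gaussian_pdf(x, mu[k], var[k]) for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate parameters as responsibility-weighted statistics.
        nk = resp.sum(axis=1)
        mu = (resp @ x) / nk
        var = np.array([(resp[k] * (x - mu[k]) ** 2).sum() / nk[k] for k in range(2)])
        pi = nk / len(x)
    return mu, var, pi

# Synthetic data: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
mu, var, pi = em_gmm(x)
```

On this synthetic data the estimated means land near the true cluster centers at -3 and 3.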
no code implementations • 18 Dec 2019 • Orlando Romero, Mouhacine Benosman
In this paper, we propose two discontinuous dynamical systems in continuous time with guaranteed prescribed finite-time local convergence to strict local minima of a given cost function.
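One classic example of a discontinuous dynamics with finite-time convergence is the normalized gradient flow x' = -∇f(x)/||∇f(x)||; the sketch below simulates it with a small-step forward Euler scheme. This is an illustration of the general idea, not the paper's exact construction, and the quadratic test function is an assumption.

```python
import numpy as np

def simulate(x0, grad, dt=1e-3, t_max=5.0, tol=1e-2):
    """Forward-Euler simulation of the normalized gradient flow
    x' = -grad(x)/||grad(x)||, stopping near the minimizer."""
    x = np.asarray(x0, dtype=float)
    t = 0.0
    while t < t_max:
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng < tol:  # reached a neighborhood of the minimizer
            break
        x = x - dt * g / ng  # unit-speed step along the negative gradient
        t += dt
    return x, t

# f(x) = 0.5 ||x||^2 has its strict minimum at the origin; for this flow
# the state moves radially at unit speed, so the settling time is ||x0||.
x_final, t_hit = simulate([3.0, 4.0], lambda x: x)
```

With x0 = (3, 4) the predicted settling time is ||x0|| = 5, and the simulated hitting time matches it up to the discretization and stopping tolerances.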
no code implementations • 23 Jun 2020 • Orlando Romero, Subhro Das, Pin-Yu Chen, Sérgio Pequito
Despite recent advances in the systems and control (S&C)-based analysis of optimization algorithms, relatively little work has been dedicated specifically to machine learning (ML) algorithms and their applications.

no code implementations • ICML 2020 • Orlando Romero, Mouhacine Benosman
In this paper, we investigate a Lyapunov-like differential inequality that allows us to establish finite-time stability of a continuous-time state-space dynamical system represented via a multivariate ordinary differential equation or differential inclusion.
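The best-known inequality of this type is the comparison condition V' ≤ -c V^α with α in (0, 1), which forces V to reach zero no later than T = V(0)^(1-α) / (c (1-α)). The snippet below numerically integrates the extremal case V' = -c V^α and checks the predicted settling time; the constants are illustrative assumptions, and this is the textbook inequality rather than necessarily the exact one studied in the paper.

```python
# Numeric check of the finite-time comparison lemma: if V' <= -c * V**alpha
# with 0 < alpha < 1, then V hits zero by T = V(0)**(1-alpha) / (c*(1-alpha)).
# Constants below are illustrative, not from the paper.
c, alpha, v0 = 2.0, 0.5, 4.0
T_pred = v0 ** (1 - alpha) / (c * (1 - alpha))  # = 2.0 for these constants

dt, t, v = 1e-5, 0.0, v0
while v > 0.0:
    v -= dt * c * v ** alpha  # forward Euler on the extremal dynamics
    t += dt
```

For these constants the exact solution is V(t) = (2 - t)^2 on [0, 2], so V vanishes at exactly t = 2, and the Euler integration recovers that settling time to within discretization error.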
no code implementations • 1 Jan 2021 • Mouhacine Benosman, Orlando Romero, Anoop Cherian
In this paper, we investigate, in the context of deep neural networks, the performance of several discretization algorithms for two first-order finite-time optimization flows.
no code implementations • 6 Oct 2020 • Siqi Zhang, Mouhacine Benosman, Orlando Romero, Anoop Cherian
In this paper, we investigate the performance of two first-order optimization algorithms, obtained from forward Euler discretization of finite-time optimization flows.
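A standard family of finite-time flows to which such a forward Euler discretization applies is the rescaled gradient flow x' = -∇f(x)/||∇f(x)||^((p-2)/(p-1)). The sketch below discretizes it on a simple quadratic; the exponent p, step size, and test function are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def euler_rescaled_gd(grad, x0, p=3.0, h=1e-2, n_steps=2000, tol=1e-8):
    """Forward-Euler discretization of the rescaled gradient flow
    x' = -grad(x)/||grad(x)||**((p-2)/(p-1))."""
    x = np.asarray(x0, dtype=float)
    e = (p - 2.0) / (p - 1.0)  # rescaling exponent; e = 0.5 for p = 3
    for _ in range(n_steps):
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng < tol:
            break
        x = x - h * g / ng ** e  # forward Euler step of the rescaled flow
    return x

# Quadratic test problem f(x) = 0.5 * x @ x, minimized at the origin.
x_star = euler_rescaled_gd(lambda x: x, [2.0, -1.0])
```

Unlike plain gradient descent, whose steps shrink proportionally to the gradient, the rescaling keeps the step from vanishing near the minimizer, which is what yields finite-time convergence in the continuous limit; the discretization drives the iterate into a small neighborhood of the origin whose size is governed by the step size h.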