no code implementations • 15 Jun 2023 • Winfried Lohmiller, Philipp Gassert, Jean-Jacques Slotine
Global exponential convergence of the algorithm is established using Contraction Theory with Inequality Constraints, which this paper extends from the continuous to the discrete case. In contrast to deep learning, the parametrization of each linear function piece in the proposed MinMax network is linear.
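A minimal sketch of the min-max structure referenced above (illustrative only, not the authors' implementation): a scalar MinMax network computes a min over maxima of affine pieces, so the output is piecewise-linear in the input while each piece remains linear in its parameters (w, b). The grouping below is a made-up example.

```python
def minmax_net(x, groups):
    """Evaluate y(x) = min_j max_{(w,b) in group j} (w*x + b).

    groups: list of groups, each a list of (w, b) affine pieces.
    Each affine piece w*x + b is linear in its parameters (w, b).
    """
    return min(max(w * x + b for (w, b) in g) for g in groups)

# Example: |x| clipped above at 1, built from three affine pieces.
groups = [
    [(1.0, 0.0), (-1.0, 0.0)],  # inner max(x, -x) = |x|
    [(0.0, 1.0)],               # constant piece 1 (outer min clips)
]
print(minmax_net(0.5, groups))   # 0.5
print(minmax_net(2.0, groups))   # 1.0
```

Because each region of the resulting piecewise-linear function is linear in (w, b), fitting on a fixed region reduces to a linear problem, unlike the nonlinear parametrization of a deep network.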
no code implementations • 25 Apr 2018 • Winfried Lohmiller, Philipp Gassert, Jean-Jacques Slotine
We discuss technical results on learning function approximations using piecewise-linear basis functions, and analyze their stability and convergence using nonlinear contraction theory.
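To make the setting concrete, here is a hedged sketch (function names and the target function are illustrative, not from the paper) of approximating a function with piecewise-linear "hat" basis functions. Since the model is linear in the coefficients, the batch fit reduces to linear least squares; the paper instead studies online learning of such approximations and analyzes it with nonlinear contraction theory.

```python
import numpy as np

def hat_basis(x, knots):
    """Piecewise-linear hat functions on the given knots.

    Returns B with B[n, j] = value of the j-th hat function at x[n];
    the hats form a partition of unity on [knots[0], knots[-1]].
    """
    B = np.zeros((len(x), len(knots)))
    for j, k in enumerate(knots):
        if j > 0:                       # rising edge from previous knot
            left = knots[j - 1]
            m = (x >= left) & (x <= k)
            B[m, j] = (x[m] - left) / (k - left)
        if j < len(knots) - 1:          # falling edge to next knot
            right = knots[j + 1]
            m = (x >= k) & (x <= right)
            B[m, j] = (right - x[m]) / (right - k)
    return B

knots = np.linspace(0.0, 1.0, 6)
x = np.linspace(0.0, 1.0, 50)
y = np.abs(x - 0.4)                     # target with a kink at a knot
B = hat_basis(x, knots)
c, *_ = np.linalg.lstsq(B, y, rcond=None)   # linear-in-parameters fit
yhat = B @ c
print(np.max(np.abs(yhat - y)))         # exact up to round-off here
```

The target is itself piecewise-linear with its kink on a knot, so the least-squares fit recovers it exactly; for general targets the error shrinks as the knot spacing decreases.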