no code implementations • 21 Jun 2023 • L. Storm, H. Linander, J. Bec, K. Gustavsson, B. Mehlig
We compute how small input perturbations affect the output of deep neural networks, exploring an analogy between deep networks and dynamical systems, where the growth or decay of local perturbations is characterised by finite-time Lyapunov exponents.
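The abstract's idea can be sketched numerically: propagate a small input perturbation through a deep network and measure its exponential growth rate per layer, in analogy with a finite-time Lyapunov exponent. The tanh architecture, Gaussian weight scaling, and layer count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def ftle_of_mlp(x, weights, eps=1e-6):
    """Estimate the maximal finite-time Lyapunov exponent of a deep
    tanh network at input x, treating each layer as one time step."""
    # Random unit perturbation of the input
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)
    x_pert = x + eps * d
    n_layers = len(weights)
    for W in weights:
        x = np.tanh(W @ x)
        x_pert = np.tanh(W @ x_pert)
    growth = np.linalg.norm(x_pert - x) / eps
    return np.log(growth) / n_layers  # exponent per layer

# Hypothetical example: a 10-layer network with i.i.d. Gaussian weights
n, depth = 50, 10
weights = [rng.standard_normal((n, n)) / np.sqrt(n) for _ in range(depth)]
x0 = rng.standard_normal(n)
lam = ftle_of_mlp(x0, weights)
print(f"maximal FTLE per layer: {lam:.3f}")
```

A positive exponent means nearby inputs separate as they pass through the layers; a negative one means local perturbations are contracted away.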
no code implementations • 3 Jun 2022 • L. Storm, K. Gustavsson, B. Mehlig
By choosing the network parameters such that the dynamics is contractive, as characterized by a negative maximal Lyapunov exponent, the network may synchronize with the driving signal.
no code implementations • 15 Nov 2017 • K. Gustavsson, L. Biferale, A. Celani, S. Colabrese
We apply a reinforcement learning algorithm to show how smart particles can learn approximately optimal strategies to navigate in complex flows.
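A toy version of this idea can be sketched with tabular Q-learning: a particle on a line is pushed backward by a random drift, and the agent learns which swimming action reaches a target. The one-dimensional environment, drift probability, and reward shaping below are hypothetical simplifications; the paper considers navigation in complex flows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "flow": states 0..N-1, target at N-1. A random adverse drift
# pushes the particle left; the agent swims left (a=0) or right (a=1).
N = 10
Q = np.zeros((N, 2))
alpha, gamma, eps_greedy = 0.5, 0.95, 0.1

def env_step(s, a):
    drift = -1 if rng.random() < 0.3 else 0  # adverse drift with prob 0.3
    swim = 1 if a == 1 else -1
    s_next = int(np.clip(s + swim + drift, 0, N - 1))
    reward = 1.0 if s_next == N - 1 else -0.01  # small cost per step
    done = s_next == N - 1
    return s_next, reward, done

for episode in range(500):
    s = 0
    for _ in range(100):
        # Epsilon-greedy action selection
        a = rng.integers(2) if rng.random() < eps_greedy else int(np.argmax(Q[s]))
        s_next, r, done = env_step(s, a)
        # Standard Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next
        if done:
            break

# The learned policy should favor swimming toward the target
print(np.argmax(Q[:-1], axis=1))
```

Here the "approximately optimal strategy" is simply to swim right against the drift; in a genuine flow field the state would encode local flow information and the action set would be richer.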