no code implementations • 27 Jun 2022 • Anton Johansson, Claes Strannegård, Niklas Engsner, Petter Mostad
We pursue a line of research that seeks to regularize the spectral norm of the Jacobian of the input-output mapping for deep neural networks.
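A minimal sketch of the idea (not the paper's implementation): estimate the spectral norm of the input-output Jacobian of a small network and use it as a penalty term. The network, weights, and finite-difference Jacobian here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network R^3 -> R^2 (weights are illustrative, not from the paper).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def f(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def jacobian(x, eps=1e-6):
    # Finite-difference Jacobian of the input-output mapping at x.
    J = np.zeros((2, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def spectral_norm_penalty(x):
    # Largest singular value of the Jacobian; adding lam * penalty to the
    # training loss discourages locally steep input-output mappings.
    return np.linalg.svd(jacobian(x), compute_uv=False)[0]

x = rng.normal(size=3)
print(spectral_norm_penalty(x))
```

In practice this quantity would be computed with automatic differentiation and power iteration rather than finite differences and a full SVD; the sketch only shows what is being regularized.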
no code implementations • 17 Aug 2021 • Claes Strannegård, Niklas Engsner, Pietro Ferrari, Hans Glimmerfors, Marcus Hilding Södergren, Tobias Karlsson, Birger Kleve, Victor Skoglund
Animal cognition is modeled by integrating three separate networks: (i) a reflex network for hard-wired reflexes; (ii) a happiness network that maps sensory data such as oxygen, water, energy, and smells to a scalar happiness value; and (iii) a policy network for selecting actions.
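A hypothetical sketch of how the three networks might be integrated, assuming sensor values in [0, 1]; all function names, weights, and the reflex/policy rules are placeholder assumptions, not the paper's architecture.

```python
def reflex_network(sensors):
    # Hard-wired reflex (assumed example): surface when oxygen is critically low.
    if sensors["oxygen"] < 0.1:
        return "surface"
    return None

def happiness_network(sensors):
    # Map homeostatic variables to a scalar happiness value
    # (here a simple weighted sum; weights are illustrative).
    weights = {"oxygen": 0.4, "water": 0.3, "energy": 0.2, "smell": 0.1}
    return sum(weights[k] * sensors[k] for k in weights)

def policy_network(sensors):
    # Placeholder for the learned policy: address the lowest homeostatic need.
    needs = {"eat": sensors["energy"], "drink": sensors["water"]}
    if min(needs.values()) < 0.5:
        return min(needs, key=needs.get)
    return "explore"

def select_action(sensors):
    # Reflexes override the learned policy; the happiness signal would serve
    # as the reward for training the policy network.
    reflex = reflex_network(sensors)
    return reflex if reflex is not None else policy_network(sensors)

print(select_action({"oxygen": 0.05, "water": 0.8, "energy": 0.9, "smell": 0.0}))  # surface
```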
1 code implementation • 3 Jul 2021 • Anton Johansson, Niklas Engsner, Claes Strannegård, Petter Mostad
From a statistical point of view, fitting a neural network may be seen as a kind of regression: we seek a function from the input space to a space of classification probabilities that follows the general shape of the data but avoids overfitting by not memorizing individual data points.
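The regression-versus-memorization contrast can be illustrated with a toy example (not from the paper): a 1-nearest-neighbour "memorizer" achieves zero training error, while a smooth least-squares fit ignores individual points; comparing both on held-out data shows what memorization costs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression problem: y = x + noise (illustrative data).
x_train = rng.uniform(-1, 1, size=20)
y_train = x_train + 0.1 * rng.normal(size=20)
x_test = rng.uniform(-1, 1, size=20)
y_test = x_test + 0.1 * rng.normal(size=20)

def memorizer(x):
    # 1-nearest-neighbour lookup: reproduces each training point exactly.
    return y_train[np.abs(x_train - x[:, None]).argmin(axis=1)]

# Smooth fit: least-squares line through the data.
a, b = np.polyfit(x_train, y_train, deg=1)

def mse(pred, y):
    return np.mean((pred - y) ** 2)

print(mse(memorizer(x_train), y_train))  # exactly 0: every training point is memorized
print(mse(memorizer(x_test), y_test), mse(a * x_test + b, y_test))
```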
no code implementations • 24 Jun 2019 • Claes Strannegård, Herman Carlström, Niklas Engsner, Fredrik Mäkeläinen, Filip Slottner Seholm, Morteza Haghir Chehreghani
We present a deep neural-network model for lifelong learning inspired by several forms of neuroplasticity.