1 code implementation • 13 Jul 2023 • Linara Adilova, Maksym Andriushchenko, Michael Kamp, Asja Fischer, Martin Jaggi
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
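Since the entry is about fusing models by weight averaging, a minimal sketch in PyTorch may help; `model_a`, `model_b`, and `make_model` are assumed to exist, and the interpolation coefficient `alpha` is illustrative, not the paper's prescribed value.

```python
# Minimal sketch: interpolating the parameters of two models with
# identical architecture (PyTorch). Integer buffers such as batch-norm
# counters may need special handling in practice.
import torch

def average_parameters(model_a, model_b, alpha=0.5):
    """Return a state dict interpolating two models' weights."""
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    return {
        name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
        for name in state_a
    }

# fused = make_model()
# fused.load_state_dict(average_parameters(model_a, model_b))
```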
1 code implementation • 5 Jul 2023 • Linara Adilova, Amr Abourayya, Jianning Li, Amin Dada, Henning Petzka, Jan Egger, Jens Kleesiek, Michael Kamp
The widespread adoption of flatness measures in practice, though, is dubious because of the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization.
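A small numerical illustration of the reparameterization curse (not the paper's method): for a two-layer ReLU network, scaling the first layer by alpha and the second by 1/alpha leaves the function unchanged, yet a naive flatness proxy such as the squared gradient norm changes by a factor of alpha squared.

```python
# ReLU is positively homogeneous, so (alpha*W1, W2/alpha) computes the
# same function as (W1, W2) - but gradient-based "flatness" changes.
import numpy as np

rng = np.random.default_rng(0)
W1, W2, x = rng.normal(size=(4, 3)), rng.normal(size=(1, 4)), rng.normal(size=3)

def f(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def grad_W1(W1, W2, x):
    mask = (W1 @ x > 0).astype(float)   # ReLU activation pattern
    return np.outer(W2[0] * mask, x)    # d f / d W1

alpha = 10.0
print(f(W1, W2, x), f(alpha * W1, W2 / alpha, x))   # identical outputs
g, g_re = grad_W1(W1, W2, x), grad_W1(alpha * W1, W2 / alpha, x)
print(np.sum(g**2), np.sum(g_re**2))                # differ by alpha**2
```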
no code implementations • 20 Jun 2023 • Maximilian Poretschkin, Anna Schmitz, Maram Akila, Linara Adilova, Daniel Becker, Armin B. Cremers, Dirk Hecker, Sebastian Houben, Michael Mock, Julia Rosenzweig, Joachim Sicking, Elena Schulz, Angelika Voss, Stefan Wrobel
Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society.
no code implementations • 19 Apr 2021 • Linara Adilova, Elena Schulz, Maram Akila, Sebastian Houben, Jan David Schneider, Fabian Hueger, Tim Wirtz
Data-driven sensor interpretation in autonomous driving can lead to highly implausible predictions, which can most of the time be detected using common-sense knowledge.
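To make the idea concrete, here is a hypothetical common-sense plausibility rule (illustrative only, not the paper's method): a detected pedestrian's apparent size should be consistent with human height given distance, under a pinhole camera model with an assumed focal length.

```python
# Hypothetical plausibility check: reject detections whose implied
# physical height is outside a plausible human range.
def plausible_pedestrian(bbox_height_px, distance_m, focal_px=1000.0):
    height_m = bbox_height_px * distance_m / focal_px  # pinhole model
    return 0.5 <= height_m <= 2.5  # plausible human height in meters

print(plausible_pedestrian(bbox_height_px=400, distance_m=5.0))   # True
print(plausible_pedestrian(bbox_height_px=400, distance_m=50.0))  # False
```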
1 code implementation • 5 Mar 2021 • Linara Adilova, Siming Chen, Michael Kamp
We propose to approach this challenge through decomposition: by clustering the data we break down the problem, obtaining a simpler modeling task in each cluster that can be solved more accurately.
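A minimal sketch of this decompose-by-clustering idea (an assumed setup, not the paper's exact pipeline): cluster the inputs with k-means, then fit one simple model per cluster instead of a single global model.

```python
# Cluster the data, then fit a per-cluster linear model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] > 0, X @ [2.0, -1.0], X @ [-1.0, 3.0])  # two regimes

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
models = {
    c: LinearRegression().fit(X[kmeans.labels_ == c], y[kmeans.labels_ == c])
    for c in range(kmeans.n_clusters)
}

def predict(x):
    c = kmeans.predict(x.reshape(1, -1))[0]   # route to the cluster's model
    return models[c].predict(x.reshape(1, -1))[0]
```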
no code implementations • 25 Sep 2020 • Lukas Heppe, Michael Kamp, Linara Adilova, Danny Heinrich, Nico Piatkowski, Katharina Morik
This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication.
1 code implementation • NeurIPS 2021 • Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley
Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks.
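One common empirical flatness proxy (stated here as an assumption, not the paper's measure) is the average loss increase under small random Gaussian perturbations of the weights; flatter minima change less. A PyTorch sketch, where `model`, `loss_fn`, `x`, and `y` are assumed:

```python
# Estimate sharpness as the mean loss increase under weight noise.
import torch

def sharpness_proxy(model, loss_fn, x, y, sigma=0.01, trials=10):
    base = loss_fn(model(x), y).item()
    params = list(model.parameters())
    increases = []
    for _ in range(trials):
        noises = [torch.randn_like(p) * sigma for p in params]
        with torch.no_grad():
            for p, n in zip(params, noises):
                p.add_(n)                                  # perturb
            increases.append(loss_fn(model(x), y).item() - base)
            for p, n in zip(params, noises):
                p.sub_(n)                                  # restore
    return sum(increases) / trials
```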
no code implementations • 29 Nov 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu
The performance of deep neural networks is often attributed to their automated, task-related feature construction.
no code implementations • 15 Nov 2019 • Linara Adilova, Julia Rosenzweig, Michael Kamp
An approach to distributed machine learning is to train models on local datasets and aggregate these models into a single, stronger model.
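A minimal sketch of the aggregation step (weighted parameter averaging is one standard rule for this setting; local training itself is omitted, and the weighting by dataset size is an illustrative choice):

```python
# Aggregate locally trained models by dataset-size-weighted averaging.
import torch

def weighted_average(state_dicts, num_samples):
    total = sum(num_samples)
    return {
        name: sum(
            (n / total) * sd[name].float()
            for sd, n in zip(state_dicts, num_samples)
        )
        for name in state_dicts[0]
    }
```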
no code implementations • 25 Sep 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu
With this, the generalization error of a model trained on representative data can be bounded by its feature robustness, which depends on our novel flatness measure.
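A hedged illustration of why such a measure can evade the reparameterization problem (not necessarily the paper's exact definition): for a two-layer ReLU network with first-layer weights $W$ and second-layer weights $V$, rescaling $(W, V) \mapsto (\alpha W, V/\alpha)$ leaves the function unchanged, the Hessian $H_W$ of the loss with respect to $W$ scales by $1/\alpha^2$, and so a weight-norm-rescaled trace is invariant:

```latex
% Function invariance under layer rescaling implies Hessian rescaling:
\[
  f_{\alpha W,\, V/\alpha}(x) = f_{W,\,V}(x)
  \quad\Longrightarrow\quad
  H_{\alpha W} = \tfrac{1}{\alpha^{2}}\, H_{W},
\]
% hence the rescaled flatness term is reparameterization-invariant:
\[
  \|\alpha W\|^{2}\,\mathrm{Tr}\!\left(H_{\alpha W}\right)
  = \alpha^{2}\|W\|^{2}\cdot\tfrac{1}{\alpha^{2}}\,\mathrm{Tr}\!\left(H_{W}\right)
  = \|W\|^{2}\,\mathrm{Tr}\!\left(H_{W}\right).
\]
```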
no code implementations • 1 Jul 2019 • Linara Adilova, Livin Natious, Siming Chen, Olivier Thonnard, Michael Kamp
One of the main tasks of cybersecurity is recognizing malicious interactions with an arbitrary system.
no code implementations • 27 Sep 2018 • Linara Adilova, Nathalie Paul, Peter Schlicht
It has been shown that injecting noise into the neural network weights during training leads to better generalization of the resulting model.
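A minimal sketch of one such noise-injection variant (an assumed scheme, not necessarily the paper's): perturb the weights before the forward/backward pass, restore the clean weights, then apply the optimizer step with the gradients computed at the noisy point.

```python
# One training step with Gaussian weight-noise injection (PyTorch).
import torch

def noisy_training_step(model, loss_fn, optimizer, x, y, sigma=0.01):
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = torch.randn_like(p) * sigma
            p.add_(n)                      # perturb weights
            noises.append(n)
    loss = loss_fn(model(x), y)            # gradients at the noisy weights
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)                      # restore clean weights
    optimizer.step()                       # update using the noisy gradients
    return loss.item()
```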
no code implementations • 12 Jul 2018 • Linara Adilova, Sven Giesselbach, Stefan Rüping
In this paper, we report on an alternative approach where we first construct a relation extraction model using distant supervision, and only later make use of a domain expert to refine the results.
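A minimal sketch of the distant-supervision step (illustrative setup with a toy knowledge base): any sentence mentioning an entity pair from the knowledge base is labeled with that pair's relation, yielding noisy training data that a domain expert can later refine.

```python
# Distant supervision: label sentences via knowledge-base entity pairs.
KB = {("Marie Curie", "Warsaw"): "born_in"}

def distant_labels(sentences, kb):
    labeled = []
    for s in sentences:
        for (head, tail), relation in kb.items():
            if head in s and tail in s:
                labeled.append((s, head, tail, relation))
    return labeled

sentences = ["Marie Curie was born in Warsaw in 1867.",
             "Marie Curie visited Warsaw's university."]  # second is label noise
print(distant_labels(sentences, KB))
```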
1 code implementation • 9 Jul 2018 • Michael Kamp, Linara Adilova, Joachim Sicking, Fabian Hüger, Peter Schlicht, Tim Wirtz, Stefan Wrobel
We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources.
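A hedged sketch of a communication-efficient variant of such a protocol (the exact protocol in the paper may differ): workers train locally and only trigger a synchronization round, averaging their models, when their divergence from the last common model exceeds a threshold.

```python
# Threshold-triggered (dynamic-averaging-style) synchronization.
import numpy as np

def maybe_synchronize(local_models, reference, delta):
    """Average workers' models only when they drift beyond delta."""
    divergence = np.mean([np.sum((w - reference) ** 2) for w in local_models])
    if divergence <= delta:
        return local_models, reference, False     # stay local: no traffic
    avg = np.mean(local_models, axis=0)           # communication round
    return [avg.copy() for _ in local_models], avg, True
```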