Search Results for author: Linara Adilova

Found 14 papers, 5 papers with code

Layer-wise Linear Mode Connectivity

1 code implementation • 13 Jul 2023 • Linara Adilova, Maksym Andriushchenko, Michael Kamp, Asja Fischer, Martin Jaggi

Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.

Federated Learning • Linear Mode Connectivity
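
A minimal NumPy sketch (not the paper's code; the loss and dimensions are toy stand-ins) of what parameter averaging and linear mode connectivity mean: interpolate between two independently trained parameter vectors and evaluate the loss along the linear path. Connectivity holds when the loss along the path does not rise much above the endpoints.

```python
import numpy as np

# Toy quadratic loss standing in for a network's training loss;
# w is a flattened parameter vector.
def loss(w):
    return float(np.sum((w - 1.0) ** 2))

rng = np.random.default_rng(0)
w_a = rng.normal(size=10)  # parameters of independently trained model A
w_b = rng.normal(size=10)  # parameters of independently trained model B

# Evaluate the loss along the linear path between the two solutions.
for alpha in np.linspace(0.0, 1.0, 5):
    w_avg = (1 - alpha) * w_a + alpha * w_b  # parameter averaging
    print(f"alpha={alpha:.2f}  loss={loss(w_avg):.3f}")
```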

FAM: Relative Flatness Aware Minimization

1 code implementation • 5 Jul 2023 • Linara Adilova, Amr Abourayya, Jianning Li, Amin Dada, Henning Petzka, Jan Egger, Jens Kleesiek, Michael Kamp

Their widespread adoption in practice, though, is dubious because of the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization.
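
The reparameterization curse is easy to reproduce numerically: scaling one ReLU layer by alpha and the next by 1/alpha leaves the network's function unchanged but changes a naive sharpness measure. The sketch below uses a toy network and a perturbation-based sharpness proxy, not the paper's FAM algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))
x = rng.normal(size=(4, 32))  # a small batch of inputs
y = rng.normal(size=(1, 32))

def mse(W1, W2):
    return float(np.mean((W2 @ relu(W1 @ x) - y) ** 2))

# Naive sharpness proxy: average loss increase under random weight
# perturbations of a fixed radius eps.
def sharpness(W1, W2, eps=1e-2, trials=200):
    base = mse(W1, W2)
    deltas = []
    for _ in range(trials):
        D1 = rng.normal(size=W1.shape)
        D2 = rng.normal(size=W2.shape)
        D1 *= eps / np.linalg.norm(D1)
        D2 *= eps / np.linalg.norm(D2)
        deltas.append(mse(W1 + D1, W2 + D2) - base)
    return float(np.mean(deltas))

alpha = 10.0  # rescale layer 1 up and layer 2 down: the function is unchanged
print("same loss:", np.isclose(mse(W1, W2), mse(alpha * W1, W2 / alpha)))
print("sharpness before:", sharpness(W1, W2))
print("sharpness after: ", sharpness(alpha * W1, W2 / alpha))
```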

Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog

no code implementations • 20 Jun 2023 • Maximilian Poretschkin, Anna Schmitz, Maram Akila, Linara Adilova, Daniel Becker, Armin B. Cremers, Dirk Hecker, Sebastian Houben, Michael Mock, Julia Rosenzweig, Joachim Sicking, Elena Schulz, Angelika Voss, Stefan Wrobel

Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society.

Plants Don't Walk on the Street: Common-Sense Reasoning for Reliable Semantic Segmentation

no code implementations • 19 Apr 2021 • Linara Adilova, Elena Schulz, Maram Akila, Sebastian Houben, Jan David Schneider, Fabian Hueger, Tim Wirtz

Data-driven sensor interpretation in autonomous driving can lead to highly implausible predictions, most of which can be identified as such with common-sense knowledge.

Autonomous Driving • Common Sense Reasoning • +2
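
As an illustration of the kind of common-sense check the abstract alludes to, here is a toy rule over a predicted segmentation mask. The class ids, the road prior, and the rule itself are invented for this sketch; the paper's reasoning machinery is more general.

```python
import numpy as np

# Hypothetical class ids in a predicted segmentation mask.
ROAD, PLANT = 0, 1

seg = np.full((64, 64), ROAD, dtype=int)
seg[40:44, 20:24] = PLANT  # the network predicts a plant on the road

# Illustrative common-sense prior: the lower image half is road surface,
# and plants do not grow on the drivable road surface.
road_prior = np.zeros_like(seg, dtype=bool)
road_prior[32:, :] = True

implausible = (seg == PLANT) & road_prior
ratio = implausible.sum() / max((seg == PLANT).sum(), 1)
print(f"{implausible.sum()} implausible plant pixels "
      f"({100 * ratio:.0f}% of plant predictions)")
```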

Novelty Detection in Sequential Data by Informed Clustering and Modeling

1 code implementation • 5 Mar 2021 • Linara Adilova, Siming Chen, Michael Kamp

We propose to approach this challenge through decomposition: by clustering the data we break down the problem, obtaining a simpler modeling task in each cluster that can be solved more accurately.

Clustering • Novelty Detection
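
The decompose-then-model idea can be sketched in a few lines. The paper works with sequential data and informed clustering; the toy below just uses 2-means on points and a per-cluster distance model to score novelty.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated clusters of "normal" training data.
data = np.vstack([rng.normal(-3, 0.5, size=(100, 2)),
                  rng.normal(+3, 0.5, size=(100, 2))])

# Step 1: cluster the data (2-means via a few Lloyd iterations,
# initialized with one point from each region for simplicity).
centers = np.array([data[0], data[100]])
for _ in range(10):
    assign = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([data[assign == k].mean(0) for k in range(2)])

# Step 2: fit a simple model per cluster (mean/std of distances to center).
stats = []
for k in range(2):
    d = np.linalg.norm(data[assign == k] - centers[k], axis=1)
    stats.append((d.mean(), d.std()))

# Step 3: score a new point by its z-score under the nearest cluster model.
def novelty_score(x):
    k = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
    mu, sd = stats[k]
    return (np.linalg.norm(x - centers[k]) - mu) / sd

print(novelty_score(np.array([-3.0, 0.5])))  # low score: normal
print(novelty_score(np.array([0.0, 0.0])))   # high score: novel
```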

Resource-Constrained On-Device Learning by Dynamic Averaging

no code implementations • 25 Sep 2020 • Lukas Heppe, Michael Kamp, Linara Adilova, Danny Heinrich, Nico Piatkowski, Katharina Morik

This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication.

BIG-bench Machine Learning • Privacy Preserving
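
The communication-efficiency mechanism can be sketched as a dynamic-averaging protocol: devices train locally and only synchronize when their models have drifted far enough from the last synchronized reference. The threshold, the local update, and the centralized trigger below are simplifications; the paper additionally targets integer exponential families on low-power hardware.

```python
import numpy as np

rng = np.random.default_rng(4)
n_devices, dim, delta = 4, 8, 0.5

w = rng.normal(size=(n_devices, dim))  # local model parameters
w_ref = w.mean(axis=0)                 # last synchronized reference model

def local_step(w_i):
    # Stand-in for a local SGD update on the device's private data.
    return w_i + 0.1 * rng.normal(size=w_i.shape)

for t in range(30):
    w = np.array([local_step(w_i) for w_i in w])
    # Dynamic averaging: communicate only when the mean squared drift
    # from the reference model exceeds the threshold delta.
    divergence = np.mean(np.sum((w - w_ref) ** 2, axis=1))
    if divergence > delta:
        w_ref = w.mean(axis=0)  # synchronize: average all local models
        w = np.tile(w_ref, (n_devices, 1))
        print(f"t={t}: synchronized (divergence {divergence:.2f})")
```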

Relative Flatness and Generalization

1 code implementation • NeurIPS 2021 • Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley

Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks.

Generalization Bounds
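
A simplified numeric illustration of why a relative flatness measure helps, assuming the reduced form of squared weight norm times Hessian trace for the last layer only (the paper's definition sums Hessian blocks over pairs of neurons): the plain Hessian trace changes under layer rescaling, while the rescaled product stays put.

```python
import numpy as np

rng = np.random.default_rng(5)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))
x = rng.normal(size=(4, 32))

def last_layer_stats(W1, W2):
    h = relu(W1 @ x)  # last-layer features
    # For MSE and a scalar output, the Hessian of the loss w.r.t. W2 is
    # (2/n) * sum_j h_j h_j^T, so its trace is (2/n) * sum_j ||h_j||^2.
    hess_trace = 2.0 * np.mean(np.sum(h ** 2, axis=0))
    return hess_trace, float(np.sum(W2 ** 2))

alpha = 10.0
t1, n1 = last_layer_stats(W1, W2)
t2, n2 = last_layer_stats(alpha * W1, W2 / alpha)  # same function, rescaled
print("Hessian trace changes:   ", t1, "->", t2)
print("relative flatness stable:", n1 * t1, "->", n2 * t2)
```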

A Reparameterization-Invariant Flatness Measure for Deep Neural Networks

no code implementations • 29 Nov 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu

The performance of deep neural networks is often attributed to their automated, task-related feature construction.

Open-Ended Question Answering

Information-Theoretic Perspective of Federated Learning

no code implementations • 15 Nov 2019 • Linara Adilova, Julia Rosenzweig, Michael Kamp

An approach to distributed machine learning is to train models on local datasets and aggregate these models into a single, stronger model.

Federated Learning • Open-Ended Question Answering
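
For reference, the aggregation step itself is simple; a FedAvg-style weighted average is sketched below. The weighting by dataset size is a standard choice, not a claim about this paper, whose contribution is the information-theoretic analysis.

```python
import numpy as np

# Local models trained on datasets of different sizes (illustrative values).
local_weights = [np.array([1.0, 2.0]),
                 np.array([3.0, 0.0]),
                 np.array([2.0, 2.0])]
n_samples = [100, 400, 500]

# Aggregate into a single model, weighting each local model by its
# share of the total data (FedAvg-style weighted averaging).
total = sum(n_samples)
global_model = sum((n / total) * w for n, w in zip(n_samples, local_weights))
print(global_model)  # -> [2.3 1.2]
```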

Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks

no code implementations • 25 Sep 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu

With this, the generalization error of a model trained on representative data can be bounded by its feature robustness, which depends on our novel flatness measure.

Open-Ended Question Answering
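
A crude empirical proxy for feature robustness (not the paper's formal definition): the average loss increase when the learned features, rather than the weights, are perturbed by random vectors of norm eps.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy setup: a linear model on top of a fixed feature representation.
feats = rng.normal(size=(100, 5))  # feature vectors of 100 samples
y = rng.normal(size=100)
w = np.linalg.lstsq(feats, y, rcond=None)[0]

def loss(f):
    return float(np.mean((f @ w - y) ** 2))

# Feature robustness proxy: average loss increase when each sample's
# feature vector is perturbed by a random vector of norm eps.
def feature_robustness(eps=0.1, trials=100):
    base = loss(feats)
    incs = []
    for _ in range(trials):
        d = rng.normal(size=feats.shape)
        d *= eps / np.linalg.norm(d, axis=1, keepdims=True)
        incs.append(loss(feats + d) - base)
    return float(np.mean(incs))

print(feature_robustness())
```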

Introducing Noise in Decentralized Training of Neural Networks

no code implementations • 27 Sep 2018 • Linara Adilova, Nathalie Paul, Peter Schlicht

It has been shown that injecting noise into the neural network weights during the training process leads to better generalization of the resulting model.
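
One common variant of weight-noise injection, shown on a toy regression (the paper studies noise injection in a decentralized training setting; this sketch only illustrates the mechanism): evaluate the gradient at randomly perturbed weights and apply the update to the clean weights.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy regression problem trained with gradient descent.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
w = np.zeros(3)

lr, sigma = 0.01, 0.05
for step in range(500):
    w_noisy = w + sigma * rng.normal(size=w.shape)  # perturb the weights
    grad = 2 * X.T @ (X @ w_noisy - y) / len(X)     # gradient at the noisy point
    w -= lr * grad                                  # update the clean weights
print(w)  # close to the true coefficients [1.0, -2.0, 0.5]
```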

Making Efficient Use of a Domain Expert's Time in Relation Extraction

no code implementations • 12 Jul 2018 • Linara Adilova, Sven Giesselbach, Stefan Rüping

In this paper, we report on an alternative approach where we first construct a relation extraction model using distant supervision, and only later make use of a domain expert to refine the results.

Active Learning • Relation • +1
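
A toy sketch of the two-stage pipeline (everything here is invented for illustration: the knowledge base, the sentences, and the stand-in model scores): label sentences by distant supervision, then route the model's most uncertain predictions to the domain expert first.

```python
import numpy as np

rng = np.random.default_rng(8)

# Distant supervision (illustrative): a sentence mentioning an entity pair
# found in the knowledge base gets a positive label, others negative.
kb_pairs = {("aspirin", "headache")}
sentences = [
    (("aspirin", "headache"), "aspirin relieves headache"),
    (("aspirin", "fever"), "aspirin was mentioned alongside fever"),
    (("ibuprofen", "fever"), "ibuprofen reduces fever"),
]
labels = [pair in kb_pairs for pair, _ in sentences]

# Stand-in model scores; a real system would train a relation extraction
# model on the distantly supervised labels.
scores = rng.uniform(size=len(sentences))

# Refinement: hand the most uncertain predictions (scores near 0.5) to the
# domain expert first, making efficient use of their time.
order = np.argsort(np.abs(scores - 0.5))
for i in order:
    pair, text = sentences[i]
    print(f"review {text!r} (distant label {labels[i]}, score {scores[i]:.2f})")
```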
