no code implementations • 1 Jul 2024 • Sebastian Dalleiger, Jilles Vreeken, Michael Kamp
Identifying informative components in binary data is an essential task in many research areas, including life sciences, social sciences, and recommendation systems.
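One common formalization of this task is low-rank binary (Boolean) matrix factorization. The sketch below is a generic relax-and-threshold baseline, not the method of this paper; the planted components and all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 30, 3
B_true = (rng.random((k, d)) < 0.3).astype(int)     # planted binary components
A_true = (rng.random((n, k)) < 0.5).astype(int)     # component activations
X = (A_true @ B_true > 0).astype(float)             # binary data (Boolean OR of components)

W, H = rng.random((n, k)), rng.random((k, d))
for _ in range(50):                                  # alternating least squares on a [0,1] relaxation
    H = np.clip(np.linalg.lstsq(W, X, rcond=None)[0], 0, 1)
    W = np.clip(np.linalg.lstsq(H.T, X.T, rcond=None)[0].T, 0, 1)
components = (H > 0.5).astype(int)                   # threshold back to binary components
print("reconstruction error:", np.abs((W @ H > 0.5).astype(float) - X).mean())
```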
no code implementations • 24 Jun 2024 • Sidak Pal Singh, Linara Adilova, Michael Kamp, Asja Fischer, Bernhard Schölkopf, Thomas Hofmann
In this work, we take a step towards understanding it by providing a model of how the loss landscape needs to behave topographically for LMC (or the lack thereof) to manifest.
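Whether LMC holds is typically tested empirically by evaluating the loss along the straight line between two sets of trained weights; a minimal sketch, where `loss_fn` and the weight vectors are placeholders:

```python
import numpy as np

def loss_along_path(w_a, w_b, loss_fn, n_points=21):
    """Evaluate the loss on the line segment between two weight vectors.
    A near-flat curve indicates linear mode connectivity; a pronounced
    bump between the endpoints is the loss barrier."""
    alphas = np.linspace(0.0, 1.0, n_points)
    return alphas, np.array([loss_fn((1 - a) * w_a + a * w_b) for a in alphas])

# Toy usage: two minima of a double-well loss have a barrier between them.
loss = lambda w: float((w @ w - 1.0) ** 2)
alphas, losses = loss_along_path(np.array([1.0, 0.0]), np.array([-1.0, 0.0]), loss)
print(losses.max())  # barrier height, attained at the midpoint w = 0
```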
1 code implementation • 27 May 2024 • Nils Philipp Walter, Linara Adilova, Jilles Vreeken, Michael Kamp
Flatness of the loss surface not only correlates positively with generalization but is also related to adversarial robustness, since perturbations of inputs relate non-linearly to perturbations of weights.
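For a single linear model the correspondence between input and weight perturbations is exact and easy to verify; in deep networks it becomes non-linear, which is the regime the paper studies. A toy check (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
w, x = rng.normal(size=5), rng.normal(size=5)
delta = 0.01 * rng.normal(size=5)            # small input perturbation
dw = (w @ delta / (x @ x)) * x               # weight perturbation with the same effect
print(w @ (x + delta), (w + dw) @ x)         # identical outputs for a linear model
```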
1 code implementation • 24 Feb 2024 • Fan Yang, Pierre Le Bodic, Michael Kamp, Mario Boley
Gradient boosting of prediction rules is an efficient approach to learn potentially interpretable yet accurate probabilistic models.
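As a rough illustration of boosting single-condition rules on squared loss (the paper targets probabilistic models; the data generator and shrinkage below are made up):

```python
import numpy as np

def fit_rule(X, residual):
    """Greedily find a single-condition rule (x_j <= t or x_j > t) whose
    constant prediction on its region best fits the current residuals."""
    best_err, best_rule = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for mask in (X[:, j] <= t, X[:, j] > t):
                if not mask.any():
                    continue
                v = residual[mask].mean()
                err = ((residual - v * mask) ** 2).sum()
                if err < best_err:
                    best_err, best_rule = err, (mask.copy(), v)
    return best_rule

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float) + 0.1 * rng.normal(size=200)
F = np.zeros(200)                      # additive model, starts at zero
for _ in range(10):                    # each boosting round adds one rule
    mask, v = fit_rule(X, y - F)       # fit a rule to the current residuals
    F += 0.5 * v * mask                # shrunken update
print("train MSE:", ((y - F) ** 2).mean())
```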
no code implementations • 9 Oct 2023 • Amr Abourayya, Jens Kleesiek, Kanishka Rao, Erman Ayday, Bharat Rao, Geoff Webb, Michael Kamp
We propose a federated co-training (FedCT) approach that improves privacy by sharing only definitive (hard) labels on a public unlabeled dataset.
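A minimal sketch of the idea, with majority vote as the consensus mechanism; the model class, data generator, and round count are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
def make_data(n):
    X = rng.normal(size=(n, 5))
    return X, (X[:, 0] + X[:, 1] > 0).astype(int)

clients = [make_data(200) for _ in range(5)]          # private local datasets
X_pub = rng.normal(size=(500, 5))                     # shared public unlabeled data

models = [DecisionTreeClassifier(max_depth=3).fit(X, y) for X, y in clients]
for _ in range(5):                                    # co-training rounds
    votes = np.stack([m.predict(X_pub) for m in models])
    pseudo = (votes.mean(axis=0) > 0.5).astype(int)   # consensus = shared hard labels
    models = [DecisionTreeClassifier(max_depth=3).fit(
                  np.vstack([X, X_pub]), np.concatenate([y, pseudo]))
              for X, y in clients]                    # retrain on local + pseudo-labeled data
```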
1 code implementation • 30 Aug 2023 • Jianning Li, Zongwei Zhou, Jiancheng Yang, Antonio Pepe, Christina Gsaxner, Gijs Luijten, Chongyu Qu, Tiezheng Zhang, Xiaoxi Chen, Wenxuan Li, Marek Wodzinski, Paul Friedrich, Kangxian Xie, Yuan Jin, Narmada Ambigapathy, Enrico Nasca, Naida Solak, Gian Marco Melito, Viet Duc Vu, Afaque R. Memon, Christopher Schlachta, Sandrine de Ribaupierre, Rajnikant Patel, Roy Eagleson, Xiaojun Chen, Heinrich Mächler, Jan Stefan Kirschke, Ezequiel de la Rosa, Patrick Ferdinand Christ, Hongwei Bran Li, David G. Ellis, Michele R. Aizenberg, Sergios Gatidis, Thomas Küstner, Nadya Shusharina, Nicholas Heller, Vincent Andrearczyk, Adrien Depeursinge, Mathieu Hatt, Anjany Sekuboyina, Maximilian Löffler, Hans Liebl, Reuben Dorent, Tom Vercauteren, Jonathan Shapey, Aaron Kujawa, Stefan Cornelissen, Patrick Langenhuizen, Achraf Ben-Hamadou, Ahmed Rekik, Sergi Pujades, Edmond Boyer, Federico Bolelli, Costantino Grana, Luca Lumetti, Hamidreza Salehi, Jun Ma, Yao Zhang, Ramtin Gharleghi, Susann Beier, Arcot Sowmya, Eduardo A. Garza-Villarreal, Thania Balducci, Diego Angeles-Valdez, Roberto Souza, Leticia Rittner, Richard Frayne, Yuanfeng Ji, Vincenzo Ferrari, Soumick Chatterjee, Florian Dubost, Stefanie Schreiber, Hendrik Mattern, Oliver Speck, Daniel Haehn, Christoph John, Andreas Nürnberger, João Pedrosa, Carlos Ferreira, Guilherme Aresta, António Cunha, Aurélio Campilho, Yannick Suter, Jose Garcia, Alain Lalande, Vicky Vandenbossche, Aline Van Oevelen, Kate Duquesne, Hamza Mekhzoum, Jef Vandemeulebroucke, Emmanuel Audenaert, Claudia Krebs, Timo Van Leeuwen, Evie Vereecke, Hauke Heidemeyer, Rainer Röhrig, Frank Hölzle, Vahid Badeli, Kathrin Krieger, Matthias Gunzer, Jianxu Chen, Timo van Meegdenburg, Amin Dada, Miriam Balzer, Jana Fragemann, Frederic Jonske, Moritz Rempe, Stanislav Malorodov, Fin H. Bahnsen, Constantin Seibold, Alexander Jaus, Zdravko Marinov, Paul F. Jaeger, Rainer Stiefelhagen, Ana Sofia Santos, Mariana Lindo, André Ferreira, Victor Alves, Michael Kamp, Amr Abourayya, Felix Nensa, Fabian Hörst, Alexander Brehmer, Lukas Heine, Yannik Hanusrichter, Martin Weßling, Marcel Dudda, Lars E. Podleska, Matthias A. Fink, Julius Keyl, Konstantinos Tserpes, Moon-Sung Kim, Shireen Elhabian, Hans Lamecker, Dženan Zukić, Beatriz Paniagua, Christian Wachinger, Martin Urschler, Luc Duong, Jakob Wasserthal, Peter F. Hoyer, Oliver Basu, Thomas Maal, Max J. H. Witjes, Gregor Schiele, Ti-chiun Chang, Seyed-Ahmad Ahmadi, Ping Luo, Bjoern Menze, Mauricio Reyes, Thomas M. Deserno, Christos Davatzikos, Behrus Puladi, Pascal Fua, Alan L. Yuille, Jens Kleesiek, Jan Egger
For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt state-of-the-art (SOTA) vision algorithms to medical problems.
1 code implementation • 13 Jul 2023 • Linara Adilova, Maksym Andriushchenko, Michael Kamp, Asja Fischer, Martin Jaggi
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
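A common probe in this setting averages only a single layer between the two models and checks the effect on the loss; a sketch, where each model is given as a list of per-layer weight arrays and `loss_fn` is a placeholder:

```python
import numpy as np

def layerwise_average_probe(layers_a, layers_b, loss_fn, ell, alpha=0.5):
    """Average only layer `ell` between two models (lists of per-layer
    weight arrays) and evaluate the loss; probing one layer at a time
    localizes where averaging helps or hurts."""
    fused = [(1 - alpha) * a + alpha * b if i == ell else a
             for i, (a, b) in enumerate(zip(layers_a, layers_b))]
    return loss_fn(fused)
```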
1 code implementation • 5 Jul 2023 • Linara Adilova, Amr Abourayya, Jianning Li, Amin Dada, Henning Petzka, Jan Egger, Jens Kleesiek, Michael Kamp
Their widespread adoption in practice, though, is dubious because of the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization.
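The curse is easy to reproduce: positively rescaling adjacent layers of a ReLU network leaves the function (and hence generalization) unchanged while altering weight-space geometry, so any flatness measure based on that geometry moves. A toy check:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(1, 8))
x = rng.normal(size=4)
f = lambda A, B: B @ np.maximum(A @ x, 0.0)            # two-layer ReLU network

alpha = 10.0                                           # layer-wise rescaling
print(f(W1, W2), f(alpha * W1, W2 / alpha))            # identical outputs ...
print(np.linalg.norm(W1), np.linalg.norm(alpha * W1))  # ... different weight-space geometry
```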
1 code implementation • 30 Jun 2023 • Frederic Jonske, Moon Kim, Enrico Nasca, Janis Evers, Johannes Haubold, René Hosch, Felix Nensa, Michael Kamp, Constantin Seibold, Jan Egger, Jens Kleesiek
It is an open secret that ImageNet is treated as the panacea of pretraining.
1 code implementation • 25 Nov 2022 • Jianning Li, André Ferreira, Behrus Puladi, Victor Alves, Michael Kamp, Moon-Sung Kim, Felix Nensa, Jens Kleesiek, Seyed-Ahmad Ahmadi, Jan Egger
The primary goal of this paper lies in the investigation of open-sourcing code and pre-trained deep learning models under the MONAI framework.
no code implementations • 19 Oct 2021 • Meirui Jiang, Xiaoxiao Li, Xiaofei Zhang, Michael Kamp, Qi Dou
In this work, we propose a unified framework to tackle the non-iid issues for internal and external clients together.
1 code implementation • 7 Oct 2021 • Michael Kamp, Jonas Fischer, Jilles Vreeken
Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.
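The standard building block is an averaging round in which only parameters travel between parties; a generic FedAvg-style sketch (not this paper's specific protocol; `local_train` is a placeholder):

```python
import numpy as np

def fedavg_round(global_w, parties, local_train):
    """One federated averaging round: every party refines the model on its
    private data, and only parameter vectors (never data) are shared and
    averaged, weighted by local dataset size."""
    local_ws = [local_train(global_w.copy(), X, y) for X, y in parties]
    sizes = [len(y) for _, y in parties]
    return np.average(np.stack(local_ws), axis=0, weights=sizes)
```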
1 code implementation • 5 Mar 2021 • Linara Adilova, Siming Chen, Michael Kamp
We propose to approach this challenge through decomposition: by clustering the data we break down the problem, obtaining a simpler modeling task in each cluster that can be solved more accurately.
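A minimal sketch of the decomposition idea; the clustering algorithm, model class, and data generator are illustrative stand-ins:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] > 0, X @ np.array([1.0, 2.0]), X @ np.array([-3.0, 1.0]))

km = KMeans(n_clusters=2, n_init=10).fit(X)          # break the problem down
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(2)}                         # one simple model per cluster
pred = np.empty(len(y))
for c, m in models.items():
    idx = km.labels_ == c
    pred[idx] = m.predict(X[idx])
print("MSE:", ((y - pred) ** 2).mean())
```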
4 code implementations • ICLR 2021 • Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, Qi Dou
The emerging paradigm of federated learning (FL) strives to enable collaborative training of deep models on the network edge without centrally aggregating raw data, thereby improving data privacy.
no code implementations • 25 Sep 2020 • Lukas Heppe, Michael Kamp, Linara Adilova, Danny Heinrich, Nico Piatkowski, Katharina Morik
This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication.
1 code implementation • NeurIPS 2021 • Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley
Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks.
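Flatness is often probed empirically by measuring how much the loss rises under small random weight perturbations; a generic sketch of such a proxy (not the relative flatness measure proposed in the paper):

```python
import numpy as np

def sharpness_estimate(w, loss_fn, sigma=1e-2, n_samples=20, seed=0):
    """Estimate flatness as the average loss increase under small random
    weight perturbations; flatter minima show a smaller increase."""
    rng = np.random.default_rng(seed)
    base = loss_fn(w)
    return np.mean([loss_fn(w + sigma * rng.normal(size=w.shape)) - base
                    for _ in range(n_samples)])
```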
no code implementations • 29 Nov 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu
The performance of deep neural networks is often attributed to their automated, task-related feature construction.
no code implementations • 28 Nov 2019 • Michael Kamp, Mario Boley, Michael Mock, Daniel Keren, Assaf Schuster, Izchak Sharfman
The learning performance of such a protocol is intuitively optimal if approximately the same loss is incurred as in a hypothetical serial setting.
no code implementations • 28 Nov 2019 • Michael Kamp, Sebastian Bothe, Mario Boley, Michael Mock
It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion.
no code implementations • 15 Nov 2019 • Linara Adilova, Julia Rosenzweig, Michael Kamp
An approach to distributed machine learning is to train models on local datasets and aggregate these models into a single, stronger model.
no code implementations • 25 Sep 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu
With this, the generalization error of a model trained on representative data can be bounded by its feature robustness which depends on our novel flatness measure.
no code implementations • 1 Jul 2019 • Linara Adilova, Livin Natious, Siming Chen, Olivier Thonnard, Michael Kamp
One of the main tasks of cybersecurity is recognizing malicious interactions with an arbitrary system.
2 code implementations • 30 Nov 2018 • Sven Giesselbach, Katrin Ullrich, Michael Kamp, Daniel Paurat, Thomas Gärtner
We propose a novel transfer learning approach for orphan screening called corresponding projections.
no code implementations • NeurIPS 2017 • Michael Kamp, Mario Boley, Olana Missura, Thomas Gärtner
We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications.
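The scheme aggregates independently trained hypotheses; in the Radon machine this is done via iterated Radon points, which for d+2 parameter vectors in R^d can be computed from a null-space vector. A minimal sketch of one aggregation step:

```python
import numpy as np

def radon_point(models):
    """Aggregate d+2 parameter vectors in R^d into their Radon point:
    find lambda != 0 with sum(lam_i * x_i) = 0 and sum(lam_i) = 0,
    then take the weighted average of the positive part."""
    X = np.asarray(models, dtype=float).T          # shape (d, d+2)
    A = np.vstack([X, np.ones(X.shape[1])])        # encodes both constraints
    lam = np.linalg.svd(A)[2][-1]                  # null-space vector of A
    pos = lam > 0
    return (X[:, pos] @ lam[pos]) / lam[pos].sum()

# Toy usage: four models in R^2 aggregate to their Radon point.
print(radon_point([[0, 0], [2, 0], [0, 2], [0.5, 0.5]]))  # -> [0.5 0.5]
```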
1 code implementation • 9 Jul 2018 • Michael Kamp, Linara Adilova, Joachim Sicking, Fabian Hüger, Peter Schlicht, Tim Wirtz, Stefan Wrobel
We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources.
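Communication efficiency comes from synchronizing only when models have drifted apart; a simplified sketch of such a threshold-based dynamic averaging step (the actual protocol resolves violations more carefully):

```python
import numpy as np

def dynamic_averaging_step(local_ws, reference, delta):
    """One step of threshold-based dynamic averaging: synchronize the nodes
    only if the average squared drift from the last synchronized reference
    model exceeds delta**2; otherwise no communication happens."""
    drift = np.mean([np.sum((w - reference) ** 2) for w in local_ws])
    if drift > delta ** 2:                         # condition violated: average and redistribute
        new_ref = np.mean(local_ws, axis=0)
        return [new_ref.copy() for _ in local_ws], new_ref, True
    return local_ws, reference, False              # quiescent: models keep training locally
```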