Search Results for author: Michael Kamp

Found 23 papers, 12 papers with code

Orthogonal Gradient Boosting for Simpler Additive Rule Ensembles

1 code implementation • 24 Feb 2024 • Fan Yang, Pierre Le Bodic, Michael Kamp, Mario Boley

Gradient boosting of prediction rules is an efficient approach to learning potentially interpretable yet accurate probabilistic models.
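The rule-boosting setting above can be illustrated with a minimal sketch: squared-loss gradient boosting where each weak learner is a single threshold rule fitted to the current residual. This is a generic illustration of boosting additive rule ensembles, not the paper's orthogonal variant; all function names and the toy data are made up.

```python
import numpy as np

def fit_rule(x, residual):
    """Pick the threshold rule (x <= t, or x > t) that best fits the
    residual in the least-squares sense. Returns (threshold, side, value)."""
    best = None
    for t in np.unique(x):
        for side in (True, False):            # True: rule fires when x <= t
            mask = (x <= t) if side else (x > t)
            if not mask.any():
                continue
            value = residual[mask].mean()     # optimal constant output of the rule
            loss = ((residual - value * mask) ** 2).sum()
            if best is None or loss < best[0]:
                best = (loss, t, side, value)
    return best[1:]

def boost_rules(x, y, n_rules=5, lr=1.0):
    """Gradient boosting for squared loss: each round fits one rule to the
    current residual (the negative gradient) and adds it to the ensemble."""
    pred = np.zeros_like(y, dtype=float)
    rules = []
    for _ in range(n_rules):
        t, side, value = fit_rule(x, y - pred)
        mask = (x <= t) if side else (x > t)
        pred += lr * value * mask
        rules.append((t, side, lr * value))
    return rules, pred

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
rules, pred = boost_rules(x, y, n_rules=3)
print(np.abs(pred - y).max())
```

On this toy step function a single rule already fits the data exactly; the interpretability question the paper targets is how to keep the ensemble this small on real data.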

Protecting Sensitive Data through Federated Co-Training

no code implementations • 9 Oct 2023 • Amr Abourayya, Jens Kleesiek, Kanishka Rao, Erman Ayday, Bharat Rao, Geoff Webb, Michael Kamp

Federated learning allows us to collaboratively train a model without pooling the data by iteratively aggregating the parameters of local models.

Federated Learning
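The iterative parameter aggregation described in the abstract can be sketched in a few lines of numpy: clients run local updates starting from the shared model, and a server averages the resulting parameters each round. This is plain FedAvg-style averaging, not the paper's co-training approach; the linear-regression clients and all names are illustrative assumptions.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=20):
    """A few full-batch gradient steps of linear regression on one client's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_average(clients, rounds=10, dim=1):
    """FedAvg-style loop: each round, clients train locally from the shared
    model, then the server averages the resulting parameters."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local_models = [local_step(w.copy(), X, y) for X, y in clients]
        w = np.mean(local_models, axis=0)    # aggregate by parameter averaging
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0])
clients = [(X, X @ true_w) for X in (rng.normal(size=(20, 1)) for _ in range(3))]
w = federated_average(clients, rounds=5)
print(w)
```

Only parameters cross the network, never raw data; the co-training paper's point is that even these parameters can leak information, motivating sharing predictions on unlabeled data instead.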

MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

1 code implementation • 30 Aug 2023 • Jianning Li, Zongwei Zhou, Jiancheng Yang, Antonio Pepe, Christina Gsaxner, Gijs Luijten, Chongyu Qu, Tiezheng Zhang, Xiaoxi Chen, Wenxuan Li, Marek Wodzinski, Paul Friedrich, Kangxian Xie, Yuan Jin, Narmada Ambigapathy, Enrico Nasca, Naida Solak, Gian Marco Melito, Viet Duc Vu, Afaque R. Memon, Christopher Schlachta, Sandrine de Ribaupierre, Rajnikant Patel, Roy Eagleson, Xiaojun Chen, Heinrich Mächler, Jan Stefan Kirschke, Ezequiel de la Rosa, Patrick Ferdinand Christ, Hongwei Bran Li, David G. Ellis, Michele R. Aizenberg, Sergios Gatidis, Thomas Küstner, Nadya Shusharina, Nicholas Heller, Vincent Andrearczyk, Adrien Depeursinge, Mathieu Hatt, Anjany Sekuboyina, Maximilian Löffler, Hans Liebl, Reuben Dorent, Tom Vercauteren, Jonathan Shapey, Aaron Kujawa, Stefan Cornelissen, Patrick Langenhuizen, Achraf Ben-Hamadou, Ahmed Rekik, Sergi Pujades, Edmond Boyer, Federico Bolelli, Costantino Grana, Luca Lumetti, Hamidreza Salehi, Jun Ma, Yao Zhang, Ramtin Gharleghi, Susann Beier, Arcot Sowmya, Eduardo A. Garza-Villarreal, Thania Balducci, Diego Angeles-Valdez, Roberto Souza, Leticia Rittner, Richard Frayne, Yuanfeng Ji, Vincenzo Ferrari, Soumick Chatterjee, Florian Dubost, Stefanie Schreiber, Hendrik Mattern, Oliver Speck, Daniel Haehn, Christoph John, Andreas Nürnberger, João Pedrosa, Carlos Ferreira, Guilherme Aresta, António Cunha, Aurélio Campilho, Yannick Suter, Jose Garcia, Alain Lalande, Vicky Vandenbossche, Aline Van Oevelen, Kate Duquesne, Hamza Mekhzoum, Jef Vandemeulebroucke, Emmanuel Audenaert, Claudia Krebs, Timo Van Leeuwen, Evie Vereecke, Hauke Heidemeyer, Rainer Röhrig, Frank Hölzle, Vahid Badeli, Kathrin Krieger, Matthias Gunzer, Jianxu Chen, Timo van Meegdenburg, Amin Dada, Miriam Balzer, Jana Fragemann, Frederic Jonske, Moritz Rempe, Stanislav Malorodov, Fin H. Bahnsen, Constantin Seibold, Alexander Jaus, Zdravko Marinov, Paul F. Jaeger, Rainer Stiefelhagen, Ana Sofia Santos, Mariana Lindo, André Ferreira, Victor Alves, Michael Kamp, Amr Abourayya, Felix Nensa, Fabian Hörst, Alexander Brehmer, Lukas Heine, Yannik Hanusrichter, Martin Weßling, Marcel Dudda, Lars E. Podleska, Matthias A. Fink, Julius Keyl, Konstantinos Tserpes, Moon-Sung Kim, Shireen Elhabian, Hans Lamecker, Dženan Zukić, Beatriz Paniagua, Christian Wachinger, Martin Urschler, Luc Duong, Jakob Wasserthal, Peter F. Hoyer, Oliver Basu, Thomas Maal, Max J. H. Witjes, Gregor Schiele, Ti-chiun Chang, Seyed-Ahmad Ahmadi, Ping Luo, Bjoern Menze, Mauricio Reyes, Thomas M. Deserno, Christos Davatzikos, Behrus Puladi, Pascal Fua, Alan L. Yuille, Jens Kleesiek, Jan Egger

For the medical domain, we present MedShapeNet, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems.

Anatomy • Mixed Reality

Layer-wise Linear Mode Connectivity

1 code implementation • 13 Jul 2023 • Linara Adilova, Maksym Andriushchenko, Michael Kamp, Asja Fischer, Martin Jaggi

Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.

Federated Learning • Linear Mode Connectivity
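Parameter averaging is typically probed by evaluating the loss along the linear path between two models, as in linear mode connectivity analyses. A minimal convex sketch (least-squares models fitted on disjoint data halves; all names and the toy data are illustrative assumptions) in which no barrier can appear:

```python
import numpy as np

def loss(w, X, y):
    """Mean squared error of a linear model."""
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

# Two 'independent models': least-squares fits on disjoint halves of the data.
w1 = np.linalg.lstsq(X[:25], y[:25], rcond=None)[0]
w2 = np.linalg.lstsq(X[25:], y[25:], rcond=None)[0]

# Evaluate the loss along the linear path (1 - a) * w1 + a * w2.
alphas = np.linspace(0.0, 1.0, 11)
path = [loss((1 - a) * w1 + a * w2, X, y) for a in alphas]

# For this convex loss the path shows no barrier: no interpolated model is
# worse than the worse endpoint. For deep networks this can fail, which is
# what (layer-wise) linear mode connectivity analysis investigates.
print(max(path) <= max(path[0], path[-1]) + 1e-12)
```

The layer-wise question in the title is the refinement of this check: interpolate only one layer at a time while keeping the rest fixed, to locate where along the depth a barrier arises.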

FAM: Relative Flatness Aware Minimization

1 code implementation • 5 Jul 2023 • Linara Adilova, Amr Abourayya, Jianning Li, Amin Dada, Henning Petzka, Jan Egger, Jens Kleesiek, Michael Kamp

Their widespread adoption in practice, however, is questionable given the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization.

Open-Source Skull Reconstruction with MONAI

1 code implementation • 25 Nov 2022 • Jianning Li, André Ferreira, Behrus Puladi, Victor Alves, Michael Kamp, Moon-Sung Kim, Felix Nensa, Jens Kleesiek, Seyed-Ahmad Ahmadi, Jan Egger

The primary goal of this paper is to investigate open-sourcing code and pre-trained deep learning models under the MONAI framework.

C++ code

UniFed: A Unified Framework for Federated Learning on Non-IID Image Features

no code implementations • 19 Oct 2021 • Meirui Jiang, Xiaoxiao Li, Xiaofei Zhang, Michael Kamp, Qi Dou

In this work, we propose a unified framework to tackle non-IID issues for internal and external clients together.

Domain Generalization • Federated Learning +1

Federated Learning from Small Datasets

1 code implementation • 7 Oct 2021 • Michael Kamp, Jonas Fischer, Jilles Vreeken

Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.

Federated Learning

Picking Daisies in Private: Federated Learning from Small Datasets

no code implementations • 29 Sep 2021 • Michael Kamp, Jonas Fischer, Jilles Vreeken

Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.

Federated Learning

Novelty Detection in Sequential Data by Informed Clustering and Modeling

1 code implementation • 5 Mar 2021 • Linara Adilova, Siming Chen, Michael Kamp

We propose to approach this challenge through decomposition: by clustering the data we break down the problem, obtaining a simpler modeling task in each cluster that can be modeled more accurately.

Clustering • Novelty Detection

FedBN: Federated Learning on Non-IID Features via Local Batch Normalization

4 code implementations • ICLR 2021 • Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, Qi Dou

The emerging paradigm of federated learning (FL) strives to enable collaborative training of deep models on the network edge without centrally aggregating raw data, thereby improving data privacy.

Autonomous Driving • Federated Learning
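The core mechanism suggested by the title, keeping batch-normalization parameters local while averaging everything else, can be sketched on plain parameter dictionaries. The state-dict layout and the `is_bn` name test here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fedbn_aggregate(client_states, is_bn=lambda name: "bn" in name):
    """FedBN-style aggregation sketch: average all parameters across clients
    EXCEPT batch-normalization parameters, which stay client-local."""
    names = client_states[0].keys()
    shared = {}
    for name in names:
        if is_bn(name):
            continue                     # BN statistics/affine params are not averaged
        shared[name] = np.mean([s[name] for s in client_states], axis=0)
    # Each client keeps its own BN parameters and receives the averaged rest.
    return [
        {name: (s[name] if is_bn(name) else shared[name]) for name in names}
        for s in client_states
    ]

c1 = {"conv.w": np.ones(2), "bn.scale": np.full(2, 0.5)}
c2 = {"conv.w": np.full(2, 3.0), "bn.scale": np.full(2, 1.5)}
new1, new2 = fedbn_aggregate([c1, c2])
print(new1["conv.w"], new1["bn.scale"], new2["bn.scale"])
```

Keeping the normalization statistics local lets each client stay calibrated to its own feature distribution, which is exactly the non-IID-features setting the paper addresses.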

Resource-Constrained On-Device Learning by Dynamic Averaging

no code implementations • 25 Sep 2020 • Lukas Heppe, Michael Kamp, Linara Adilova, Danny Heinrich, Nico Piatkowski, Katharina Morik

This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication.

BIG-bench Machine Learning • Privacy Preserving
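The dynamic-averaging idea in the title can be sketched as follows: models are averaged only when they have drifted far enough apart, so quiet rounds cost no communication. This is a simplified version with a centrally checked divergence condition (the actual protocol bounds the divergence with locally checkable conditions and targets integer exponential family models); all names and the threshold are illustrative assumptions.

```python
import numpy as np

def dynamic_averaging_round(local_models, reference, threshold):
    """Dynamic-averaging sketch: synchronize (average) the local models only
    when their mean squared divergence from the last reference model exceeds
    a threshold; otherwise no communication happens this round."""
    divergence = float(np.mean([np.sum((w - reference) ** 2) for w in local_models]))
    if divergence <= threshold:
        return local_models, reference, False   # quiet round: no messages sent
    avg = np.mean(local_models, axis=0)         # sync round: average and rebroadcast
    return [avg.copy() for _ in local_models], avg, True

models = [np.array([1.0]), np.array([3.0])]
reference = np.array([2.0])
# Divergence 1.0 exceeds the threshold, so this round synchronizes.
models, reference, synced = dynamic_averaging_round(models, reference, threshold=0.5)
# The models now agree with the reference, so the next round is silent.
models, reference, quiet_synced = dynamic_averaging_round(models, reference, threshold=0.5)
print(synced, quiet_synced)
```

The threshold trades communication for accuracy: the protocol behaves like periodic averaging under heavy drift and like no communication at all when the local models already agree.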

Relative Flatness and Generalization

1 code implementation • NeurIPS 2021 • Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley

Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks.

Generalization Bounds

A Reparameterization-Invariant Flatness Measure for Deep Neural Networks

no code implementations • 29 Nov 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu

The performance of deep neural networks is often attributed to their automated, task-related feature construction.

Open-Ended Question Answering

Communication-Efficient Distributed Online Learning with Kernels

no code implementations • 28 Nov 2019 • Michael Kamp, Sebastian Bothe, Mario Boley, Michael Mock

It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion.

Model Compression

Adaptive Communication Bounds for Distributed Online Learning

no code implementations • 28 Nov 2019 • Michael Kamp, Mario Boley, Michael Mock, Daniel Keren, Assaf Schuster, Izchak Sharfman

The learning performance of such a protocol is intuitively optimal if approximately the same loss is incurred as in a hypothetical serial setting.

Information-Theoretic Perspective of Federated Learning

no code implementations • 15 Nov 2019 • Linara Adilova, Julia Rosenzweig, Michael Kamp

An approach to distributed machine learning is to train models on local datasets and aggregate these models into a single, stronger model.

Federated Learning • Open-Ended Question Answering

Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks

no code implementations • 25 Sep 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu

With this, the generalization error of a model trained on representative data can be bounded by its feature robustness which depends on our novel flatness measure.

Open-Ended Question Answering

Corresponding Projections for Orphan Screening

2 code implementations • 30 Nov 2018 • Sven Giesselbach, Katrin Ullrich, Michael Kamp, Daniel Paurat, Thomas Gärtner

We propose a novel transfer learning approach for orphan screening called corresponding projections.

Drug Discovery • Transfer Learning

Effective Parallelisation for Machine Learning

no code implementations • NeurIPS 2017 • Michael Kamp, Mario Boley, Olana Missura, Thomas Gärtner

We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications.

BIG-bench Machine Learning • Open-Ended Question Answering
