no code implementations • 25 Feb 2025 • Mohamed Aboelenien Ahmed, Kilian Pfeiffer, Heba Khdr, Osama Abboud, Ramin Khalili, Jörg Henkel
To accelerate training, we propose to jointly adjust the system and application parameters (in our case, the GPU frequency and the batch size of the training task) while adhering to the devices' power constraints.
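The joint adjustment described above can be sketched as a constrained search over (frequency, batch size) pairs. This is a minimal illustrative sketch, not the paper's method: the power and throughput models below are invented placeholders, and the exhaustive search stands in for whatever optimizer the authors actually use.

```python
# Hedged sketch: jointly pick GPU frequency and batch size under a power cap.
# The power and throughput models are illustrative assumptions, NOT the
# measured device models from the paper.

def power_model(freq_mhz, batch_size):
    # Assumed: power grows roughly linearly with frequency and batch size.
    return 0.01 * freq_mhz + 0.5 * batch_size

def throughput_model(freq_mhz, batch_size):
    # Assumed: samples/s scale with frequency, diminishing returns in batch.
    return freq_mhz * (batch_size ** 0.8) / 100.0

def best_config(freqs, batch_sizes, power_cap_w):
    """Pick the (frequency, batch size) pair that maximizes training
    throughput while respecting the device's power constraint."""
    best = None
    for f in freqs:
        for b in batch_sizes:
            if power_model(f, b) > power_cap_w:
                continue  # violates the power budget, skip
            t = throughput_model(f, b)
            if best is None or t > best[0]:
                best = (t, f, b)
    return best  # (throughput, freq, batch), or None if the cap is infeasible
```

Note how the best feasible point is not necessarily the highest frequency: a lower frequency can leave power headroom for a larger batch, yielding more samples per second overall.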
no code implementations • 14 Feb 2025 • Mohamed Aboelenien Ahmed, Kilian Pfeiffer, Ramin Khalili, Heba Khdr, Jörg Henkel
Federated fine-tuning offers a promising approach for tuning Large Language Models (LLMs) on edge devices while preserving data privacy.
no code implementations • 12 Nov 2024 • Kilian Pfeiffer, Mohamed Aboelenien Ahmed, Ramin Khalili, Jörg Henkel
In recent years, Large Language Models (LLMs) based on Transformer architectures have dominated many machine learning tasks, especially text processing.
no code implementations • 18 Jul 2023 • Kilian Pfeiffer, Martin Rapp, Ramin Khalili, Jörg Henkel
With an increasing number of smart devices like Internet of Things (IoT) devices deployed in the field, offloading training of neural networks (NNs) to a central server becomes increasingly infeasible.
1 code implementation • NeurIPS 2023 • Kilian Pfeiffer, Ramin Khalili, Jörg Henkel
If the required memory to train a model exceeds this limit, the device will be excluded from the training.
1 code implementation • 10 Mar 2022 • Kilian Pfeiffer, Martin Rapp, Ramin Khalili, Jörg Henkel
To adapt to the devices' heterogeneous resources, CoCoFL freezes and quantizes selected layers, reducing communication, computation, and memory requirements, while the remaining layers are still trained in full precision, enabling high accuracy.
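The freeze-and-quantize idea can be sketched in a few lines. This is an illustrative toy, not CoCoFL's implementation: the int8 scheme, the per-layer memory costs, and the greedy output-side selection are all assumptions made for the example.

```python
# Hedged sketch of partial freezing with quantization, in the spirit of
# CoCoFL. Cost model (an assumption, not from the paper):
#   trainable layer: 3 * 4 bytes per param (fp32 weights + grads + optimizer)
#   frozen layer:    1 byte per param (int8 weights, forward pass only)

def quantize_int8(weights):
    """Symmetric int8 quantization of a list of floats (illustrative)."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def plan_layers(layer_params, mem_budget_bytes):
    """Greedily keep layers trainable from the output side while the memory
    budget allows; freeze (and quantize) the rest. `layer_params` is a list
    of (name, num_params) pairs ordered input -> output."""
    plan, used = {}, 0
    for name, n in reversed(layer_params):
        train_cost = 3 * 4 * n
        if used + train_cost <= mem_budget_bytes:
            plan[name] = "train_fp32"
            used += train_cost
        else:
            plan[name] = "frozen_int8"
            used += n  # frozen weights still occupy int8 storage
    return plan
```

Training from the output side first is just one plausible heuristic for the sketch; the key property is that a device with a small budget still participates with a few full-precision layers instead of being excluded.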
no code implementations • 16 Dec 2021 • Martin Rapp, Ramin Khalili, Kilian Pfeiffer, Jörg Henkel
We study the problem of distributed training of neural networks (NNs) on devices with heterogeneous, limited, and time-varying availability of computational resources.
no code implementations • 7 Jun 2019 • Kilian Pfeiffer, Alexander Hermans, István Sárándi, Mark Weber, Bastian Leibe
We address the problem of learning a single model for person re-identification, attribute classification, body part segmentation, and pose estimation.