Search Results for author: Ruben Mayer

Found 16 papers, 5 papers with code

Choosing a Classical Planner with Graph Neural Networks

no code implementations • 25 Jan 2024 • Jana Vatter, Ruben Mayer, Hans-Arno Jacobsen, Horst Samulowitz, Michael Katz

Thus, the ability to predict a planner's performance on a given problem is of great importance.

A Survey on Efficient Federated Learning Methods for Foundation Model Training

no code implementations • 9 Jan 2024 • Herbert Woisetschläger, Alexander Isenko, Shiqiang Wang, Ruben Mayer, Hans-Arno Jacobsen

We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications, elaborate on the readiness of FL frameworks to work with FMs, and outline future research opportunities on evaluating generative models in FL as well as on the interplay of privacy and PEFT.

Federated Learning • Privacy Preserving
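
For readers unfamiliar with the PEFT mentioned in the entry above, the following is a minimal, hypothetical sketch of the general idea (freeze the pre-trained weights and train only a small adapter). It is illustrative only and not taken from the survey; the AdapterLayer class, the layer sizes, and the use of PyTorch are assumptions made for the example.

    import torch
    import torch.nn as nn

    class AdapterLayer(nn.Module):
        # Small residual adapter; only its parameters are trained.
        def __init__(self, dim: int, bottleneck: int = 8):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x):
            return x + self.up(torch.relu(self.down(x)))

    base = nn.Linear(64, 64)               # stand-in for a frozen foundation-model layer
    for p in base.parameters():
        p.requires_grad = False            # freeze the pre-trained parameters

    adapter = AdapterLayer(64)             # the only trainable parameters
    optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-3)

    x = torch.randn(4, 64)                 # toy input batch
    loss = adapter(base(x)).pow(2).mean()  # toy objective
    loss.backward()
    optimizer.step()

    trainable = sum(p.numel() for p in adapter.parameters())
    total = trainable + sum(p.numel() for p in base.parameters())
    print(f"trainable parameters: {trainable} / {total}")

In a federated setting, only the small set of adapter parameters would need to be exchanged between clients and the server, which is what makes PEFT attractive for FL.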

Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly

no code implementations • 4 Oct 2023 • Herbert Woisetschläger, Alexander Isenko, Shiqiang Wang, Ruben Mayer, Hans-Arno Jacobsen

Large Language Models (LLMs) and foundation models are popular as they offer new opportunities for individuals and businesses to improve natural language processing, interact with data, and retrieve information faster.

Computational Efficiency • Edge-computing • +2

How Can We Train Deep Learning Models Across Clouds and Continents? An Experimental Study

1 code implementation • 5 Jun 2023 • Alexander Erben, Ruben Mayer, Hans-Arno Jacobsen

This paper aims to answer the question: Can deep learning models be cost-efficiently trained on a global market of spot VMs spanning different data centers and cloud providers?

A Survey on Dataset Distillation: Approaches, Applications and Future Directions

1 code implementation • 3 May 2023 • Jiahui Geng, Zongxiong Chen, Yuandou Wang, Herbert Woisetschlaeger, Sonja Schimmler, Ruben Mayer, Zhiming Zhao, Chunming Rong

Dataset distillation is attracting more attention in machine learning as training sets continue to grow and the cost of training state-of-the-art models becomes increasingly high.

Continual Learning • Neural Architecture Search

The DEBS 2022 Grand Challenge: Detecting Trading Trends in Financial Tick Data

1 code implementation • 23 Jun 2022 • Sebastian Frischbier, Jawad Tahir, Christoph Doblander, Arne Hormann, Ruben Mayer, Hans-Arno Jacobsen

The DEBS Grand Challenge (GC) is an annual programming competition open to practitioners from both academia and industry.

Benchmarking

Where Is My Training Bottleneck? Hidden Trade-Offs in Deep Learning Preprocessing Pipelines

1 code implementation • 17 Feb 2022 • Alexander Isenko, Ruben Mayer, Jeffrey Jedele, Hans-Arno Jacobsen

As a consequence, data preprocessing and provisioning are becoming a severe bottleneck in end-to-end deep learning pipelines.
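
One simple, hypothetical way to check whether the input pipeline is the limiting factor, in the spirit of the entry above, is to time preprocessing and the training step separately. The sketch below is illustrative only and not the paper's benchmarking methodology; the preprocess and train_step functions are invented placeholders.

    import time

    def preprocess(sample):
        # stand-in for decoding, augmentation, and batching work on the CPU
        return [x * 2 for x in sample]

    def train_step(batch):
        # stand-in for a model update on an accelerator
        time.sleep(0.001)

    samples = [list(range(256)) for _ in range(100)]

    prep_time = train_time = 0.0
    for sample in samples:
        t0 = time.perf_counter()
        batch = preprocess(sample)
        prep_time += time.perf_counter() - t0

        t0 = time.perf_counter()
        train_step(batch)
        train_time += time.perf_counter() - t0

    print(f"preprocessing: {prep_time:.3f} s, training: {train_time:.3f} s")
    # If preprocessing dominates, the input pipeline is the bottleneck.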

Scalable Deep Learning on Distributed Infrastructures: Challenges, Techniques and Tools

no code implementations • 27 Mar 2019 • Ruben Mayer, Hans-Arno Jacobsen

One of the reasons for this success is the increasing size of DL models and the availability of vast amounts of training data.

Management • Scheduling

A Comprehensive Survey on Parallelization and Elasticity in Stream Processing

no code implementations • 28 Jan 2019 • Henriette Röger, Ruben Mayer

Parallelization and elasticity enable SP systems to process these streams with continuously high quality of service.

Distributed, Parallel, and Cluster Computing

HYPE: Massive Hypergraph Partitioning with Neighborhood Expansion

1 code implementation • 26 Oct 2018 • Christian Mayer, Ruben Mayer, Sukanya Bhowmik, Lukas Epple, Kurt Rothermel

In such a model, vertices represent entities (such as users or data records), whereas hyperedges model a group membership of the vertices (such as the authorship in a specific topic or the membership of a data record in a specific replicated shard).

Distributed, Parallel, and Cluster Computing • Data Structures and Algorithms • Social and Information Networks
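
To make the hypergraph model described in the entry above concrete, here is a minimal, hypothetical sketch in Python (not the HYPE implementation): hyperedges are sets of vertices, and a dual view records which groups each vertex belongs to. The toy data (papers grouping their authors) is invented for illustration.

    from collections import defaultdict

    # Hyperedges group vertices: e.g., papers (hyperedges) group their authors (vertices).
    hyperedges = {
        "paper_A": {"alice", "bob"},
        "paper_B": {"bob", "carol", "dave"},
        "paper_C": {"alice", "dave"},
    }

    # Dual view: for each vertex, the set of hyperedges it is a member of.
    vertex_memberships = defaultdict(set)
    for edge, vertices in hyperedges.items():
        for v in vertices:
            vertex_memberships[v].add(edge)

    print(dict(vertex_memberships))
    # e.g. {'alice': {'paper_A', 'paper_C'}, 'bob': {'paper_A', 'paper_B'}, ...}

A hypergraph partitioner then assigns such vertices to partitions so that, roughly, each hyperedge spans as few partitions as possible.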
