E2ETune: End-to-End Knob Tuning via Fine-tuned Generative Language Model

Database knob tuning is a significant challenge for database administrators, as it involves adjusting a large number of configuration knobs with continuous or discrete values to achieve optimal database performance. Traditional methods, such as manual tuning or learning-based approaches, typically require numerous workload replays and are both time-consuming and resource-intensive. To address this challenge, we introduce E2ETune, an end-to-end knob tuner powered by a fine-tuned generative language model. The key idea is to leverage the exceptional sequence-to-sequence modeling capabilities of generative language models to capture the complex mapping between workloads (inputs) and their corresponding promising configurations (outputs). To this end, we propose a novel data generation framework that efficiently produces a large volume of training data, where each sample consists of a workload and its promising configuration. These data are then used to fine-tune a generative language model, yielding an end-to-end knob tuner that offers out-of-the-box configuration recommendations for new workloads. We conduct extensive experiments to evaluate E2ETune's efficiency and effectiveness on 10 representative and 3 real-world benchmarks. Compared to state-of-the-art methods, E2ETune identifies competitive configurations in significantly less time.
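
As a rough illustration of the end-to-end formulation, the sketch below shows how a (workload, configuration) pair might be serialized into an input/output text pair for fine-tuning a generative language model. Everything here (workload_to_prompt, the feature keys, and the knob names) is a hypothetical assumption for illustration, not E2ETune's actual data format.

```python
# Minimal sketch, assuming a text-to-text formulation of knob tuning.
# workload_to_prompt, the feature keys, and the knob names below are
# hypothetical; E2ETune's real serialization may differ.
import json

def workload_to_prompt(workload_stats: dict) -> str:
    """Flatten coarse workload features into an input prompt."""
    features = " ".join(f"{k}={v}" for k, v in sorted(workload_stats.items()))
    return f"Recommend knob values for this workload: {features}"

def config_to_target(config: dict) -> str:
    """Serialize a promising knob configuration as the target sequence."""
    return json.dumps(config, sort_keys=True)

# One fine-tuning sample: workload features in, promising knobs out.
sample = {
    "input": workload_to_prompt({"read_ratio": 0.8, "n_tables": 12}),
    "output": config_to_target({"shared_buffers": "4GB", "work_mem": "64MB"}),
}
print(sample["input"])
print(sample["output"])
```

In practice, a corpus of such pairs would be fed to a standard sequence-to-sequence or causal-LM fine-tuning loop; at inference time the model generates the serialized configuration directly from the workload prompt, with no workload replays required.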
