no code implementations • 28 Nov 2023 • Martin Briesch, Dominik Sobania, Franz Rothlauf
Therefore, a self-consuming training loop emerges in which new LLM generations are trained on the output from the previous generations.
no code implementations • 18 Oct 2023 • Alexander E. I. Brownlee, James Callan, Karine Even-Mendoza, Alina Geiger, Carol Hanna, Justyna Petke, Federica Sarro, Dominik Sobania
We find that the number of patches passing unit tests is up to 75% higher with LLM-based edits than with standard Insert edits.
no code implementations • 5 Sep 2023 • Martin Huschens, Martin Briesch, Dominik Sobania, Franz Rothlauf
This paper examines how individuals perceive the credibility of content originating from human authors versus content generated by large language models, like the GPT language model family that powers ChatGPT, in different user interface versions.
no code implementations • 14 Apr 2023 • Ryan Boldi, Ashley Bao, Martin Briesch, Thomas Helmuth, Dominik Sobania, Lee Spector, Alexander Lalejini
We verified that down-sampling can benefit the problem-solving success of both fitness-proportionate and tournament selection.
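Down-sampling here means computing fitness on only a random subset of the training cases before applying a standard selection scheme. A minimal sketch of this idea combined with tournament selection (the function name, sample rate, and fitness definition are illustrative assumptions, not the authors' implementation):

```python
import random

def downsampled_tournament_select(population, errors, sample_rate=0.25, k=3):
    """One selection event using tournament selection, where fitness
    is computed on a random subset of the training cases.

    Hypothetical sketch: errors[i][c] is the error of individual i
    on training case c; lower error is better.
    """
    n_cases = len(errors[0])
    # Down-sampling: draw a random subset of the training cases.
    cases = random.sample(range(n_cases), max(1, int(sample_rate * n_cases)))
    # Down-sampled fitness: negative summed error on the sampled cases.
    fitness = [-sum(err[c] for c in cases) for err in errors]
    # Tournament: pick k random contestants, return the fittest.
    contestants = random.sample(range(len(population)), k)
    winner = max(contestants, key=lambda i: fitness[i])
    return population[winner]
```

Because each individual is evaluated on only a fraction of the cases, the same evaluation budget covers a larger population.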
no code implementations • 8 Feb 2023 • Alina Geiger, Dominik Sobania, Franz Rothlauf
However, the influence of subsampling on the solution quality of real-world symbolic regression problems has not yet been studied.
no code implementations • 20 Jan 2023 • Dominik Sobania, Martin Briesch, Philipp Röchner, Franz Rothlauf
Since, in practice, training cases must be expensively hand-labeled by the user, we need an approach that checks program behavior with a smaller number of training cases.

no code implementations • 4 Jan 2023 • Ryan Boldi, Martin Briesch, Dominik Sobania, Alexander Lalejini, Thomas Helmuth, Franz Rothlauf, Charles Ofria, Lee Spector
Random down-sampled lexicase selection evaluates individuals on only a random subset of the training cases, allowing more individuals to be explored with the same number of program executions.
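The mechanism can be sketched as follows: lexicase selection filters the population case by case in a random order, and down-sampling restricts that filtering to a random subset of the cases. This is a hypothetical illustration, with the function name, sample rate, and error matrix layout assumed rather than taken from the paper:

```python
import random

def downsampled_lexicase_select(population, errors, sample_rate=0.1):
    """Select one parent via lexicase selection on a random subset
    of the training cases (hypothetical sketch).

    errors[i][c] is the error of individual i on case c; lower is better.
    """
    n_cases = len(errors[0])
    # Down-sampling: keep only a random subset of the training cases.
    cases = random.sample(range(n_cases), max(1, int(sample_rate * n_cases)))
    random.shuffle(cases)

    candidates = list(range(len(population)))
    for c in cases:
        # Keep only the candidates with the best error on this case.
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    # Ties after all sampled cases are broken at random.
    return population[random.choice(candidates)]
```

Each selection event executes programs on only `len(cases)` cases instead of all of them, which is where the savings in program executions come from.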
no code implementations • 15 Nov 2021 • Dominik Sobania, Martin Briesch, Franz Rothlauf
We find that the performance of the two approaches on the benchmark problems is quite similar; however, compared to GitHub Copilot, the program synthesis approaches based on genetic programming are not yet mature enough to support programmers in practical software development.
no code implementations • 27 Aug 2021 • Dominik Sobania, Dirk Schweim, Franz Rothlauf
On problems where this mapping is complex, e.g., if the problem consists of several sub-problems or requires iteration/recursion for a correct solution, results tend to be worse.
no code implementations • 8 Jun 2021 • Martin Briesch, Dominik Sobania, Franz Rothlauf
Over-parameterized models can perfectly learn various types of data distributions; however, the generalization error is usually lower for real data than for artificial data.