1 code implementation • 13 Nov 2023 • Marc Kaufmann, Maxime Larcher, Johannes Lengler, Oliver Sieberling
Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting $(1,\lambda)$-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize.
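To make the object concrete: Dynamic BinVal assigns the binary weights $2^{n-1}, \dots, 2^0$ to the bit positions according to a permutation that is redrawn each generation, and in the adversarial variant (ADBV) an adversary chooses that permutation. The sketch below is reconstructed from the abstract; the function name and details are illustrative assumptions, not the paper's code.

```python
import random

def dynamic_binval(x, pi):
    """BinVal with bit weights reassigned by the permutation pi:
    position pi[0] is most significant (weight 2^(n-1)), pi[n-1] the
    least. Dynamic BinVal redraws pi every generation; in ADBV an
    adversary picks pi. Illustrative sketch, not the authors' code."""
    n = len(x)
    return sum(2 ** (n - 1 - i) * x[pi[i]] for i in range(n))

# Every such function is dynamic monotone: flipping a 0-bit to 1
# never decreases the value, whichever pi is chosen.
n = 4
x = [1, 0, 1, 1]
print(dynamic_binval(x, list(range(n))))              # identity ordering -> 11
print(dynamic_binval(x, random.sample(range(n), n)))  # fresh random ordering
```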
no code implementations • 4 Sep 2023 • Nicolas Zucchet, Seijin Kobayashi, Yassir Akram, Johannes von Oswald, Maxime Larcher, Angelika Steger, João Sacramento
In particular, we examine RNNs trained to solve simple in-context learning tasks on which Transformers are known to excel and find that gradient descent instills in our RNNs the same attention-based in-context learning algorithm used by Transformers.
1 code implementation • 14 Apr 2022 • Marc Kaufmann, Maxime Larcher, Johannes Lengler, Xun Zou
In this paper we disprove this conjecture and show that OneMax is not the easiest fitness landscape with respect to finding improving steps.
1 code implementation • 1 Apr 2022 • Marc Kaufmann, Maxime Larcher, Johannes Lengler, Xun Zou
Recently, Hevia Fajardo and Sudholt have shown that this setup with $c=1$ is efficient on OneMax for $s<1$, but inefficient if $s \ge 18$.
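The self-adjusting (1,λ)-EA referred to here adapts its offspring population size with a success-based rule: λ shrinks after an improving generation and grows otherwise, with the growth rate governed by the success-rate parameter $s$. A minimal sketch, assuming standard bit mutation with rate $c/n$ and an illustrative cap on λ (parameter names follow the usual convention, not necessarily the paper's exact setup):

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in x."""
    return sum(x)

def self_adjusting_one_lambda_ea(n, s=0.5, F=1.5, c=1.0, max_gens=10_000):
    # Success-based rule: on improvement lambda shrinks by factor F,
    # otherwise it grows by F**(1/s). The cap lambda <= n is an
    # illustrative assumption for this demo.
    x = [random.randint(0, 1) for _ in range(n)]
    lam = 1.0
    for _ in range(max_gens):
        fx = onemax(x)
        if fx == n:
            break
        offspring = []
        for _ in range(max(1, round(lam))):
            # standard bit mutation: flip each bit with probability c/n
            y = [b ^ (random.random() < c / n) for b in x]
            offspring.append(y)
        best = max(offspring, key=onemax)
        if onemax(best) > fx:
            lam = max(1.0, lam / F)             # success: fewer offspring
        else:
            lam = min(lam * F ** (1.0 / s), n)  # failure: more offspring
        x = best  # comma selection: the parent is always replaced
    return x
```

For $s<1$ the failure response is aggressive enough that λ quickly recovers after fitness losses, which is the regime Hevia Fajardo and Sudholt proved efficient on OneMax.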
no code implementations • 3 Mar 2021 • Maxime Larcher, Anders Martinsson, Angelika Steger
Permutation Mastermind is a version of the classical Mastermind game in which the number of positions $n$ equals the number of colors $k$, and repetition of colors is not allowed in either the codeword or the queries.
Combinatorics · Probability
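In this setting both the codeword and every query are permutations of the $k = n$ colors, and the codebreaker learns how many positions match. A minimal sketch of that black-peg feedback, assuming the standard Mastermind scoring (the helper name is illustrative):

```python
import random

def black_pegs(secret, query):
    """Number of positions where the query agrees with the hidden
    codeword. In Permutation Mastermind both arguments are
    permutations of {0, ..., n-1}, so no color repeats."""
    return sum(s == q for s, q in zip(secret, query))

# Hypothetical round with n = k = 5: both codeword and query are
# permutations, as required by the rules described above.
n = 5
secret = random.sample(range(n), n)
query = random.sample(range(n), n)
print(black_pegs(secret, query))
```

Because colors cannot repeat, each answer constrains the codeword far more than in classical Mastermind, which is what makes the permutation variant combinatorially distinct.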