Search Results for author: Maxime Larcher

Found 5 papers, 3 papers with code

Hardest Monotone Functions for Evolutionary Algorithms

1 code implementation • 13 Nov 2023 • Marc Kaufmann, Maxime Larcher, Johannes Lengler, Oliver Sieberling

Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting $(1,\lambda)$-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize.

Evolutionary Algorithms
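
To make the objects concrete, here is a minimal Python sketch of a plain $(1,\lambda)$-EA on Dynamic BinVal, assuming the usual definition in which the weight permutation is redrawn each generation; the adversarial variant (ADBV) studied in the paper would instead choose each permutation to be maximally obstructive. Names and parameters are illustrative, not the authors' code.

```python
import random

def binval(x, perm):
    """BinVal with permuted weights: bit perm[i] carries weight 2^(n-1-i)."""
    n = len(x)
    return sum(x[perm[i]] << (n - 1 - i) for i in range(n))

def one_comma_lambda_ea(n=50, lam=8, max_gens=10_000):
    """Plain (1,lambda)-EA on Dynamic BinVal: a fresh random weight
    permutation is drawn each generation (ADBV would pick it adversarially)."""
    x = [random.randint(0, 1) for _ in range(n)]
    for gen in range(max_gens):
        perm = random.sample(range(n), n)   # dynamic: new weights each generation
        # standard bit mutation with rate 1/n; best of lambda offspring survives
        offspring = [[b ^ (random.random() < 1 / n) for b in x] for _ in range(lam)]
        x = max(offspring, key=lambda y: binval(y, perm))  # comma selection: parent discarded
        if all(x):                           # all-ones is optimal for every permutation
            return gen
    return None
```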

Gated recurrent neural networks discover attention

no code implementations • 4 Sep 2023 • Nicolas Zucchet, Seijin Kobayashi, Yassir Akram, Johannes von Oswald, Maxime Larcher, Angelika Steger, João Sacramento

In particular, we examine RNNs trained to solve simple in-context learning tasks on which Transformers are known to excel and find that gradient descent instills in our RNNs the same attention-based in-context learning algorithm used by Transformers.

In-Context Learning
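
As a rough illustration of the task family and the attention-based algorithm referred to above, the following numpy sketch builds a toy in-context linear-regression prompt and checks that a linear-attention-style readout coincides with one gradient-descent step from zero initialization (a known construction; the learning rate eta and all names here are illustrative assumptions, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_prompt(d=5, k=20):
    """Toy in-context regression prompt: k labelled examples from a random
    linear teacher, plus one unlabelled query input."""
    w = rng.normal(size=d)          # hidden teacher weights
    X = rng.normal(size=(k, d))     # context inputs
    y = X @ w                       # context labels
    x_q = rng.normal(size=d)        # query input
    return X, y, x_q

def attention_predict(X, y, x_q, eta=0.05):
    """Linear-attention-style readout: a sum of context labels weighted by
    <x_i, x_q>. Algebraically equal to one GD step from w = 0 on the
    in-context squared loss."""
    return eta * y @ (X @ x_q)

X, y, x_q = make_prompt()
eta = 0.05
w1 = eta * X.T @ y                  # explicit GD step: w1 = 0 - eta * grad(0.5*||Xw - y||^2) at w=0
print(attention_predict(X, y, x_q), x_q @ w1)   # identical by construction
```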

OneMax is not the Easiest Function for Fitness Improvements

1 code implementation • 14 Apr 2022 • Marc Kaufmann, Maxime Larcher, Johannes Lengler, Xun Zou

In this paper we disprove this conjecture and show that OneMax is not the easiest fitness landscape with respect to finding improving steps.
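
For intuition about "finding improving steps", here is a small Monte Carlo sketch estimating the probability that standard bit mutation strictly improves OneMax fitness from a given point; the parameters are arbitrary and the snippet is illustrative only, not the paper's experiment.

```python
import random

def onemax(x):
    """OneMax: fitness is simply the number of one-bits."""
    return sum(x)

def improvement_prob(n=100, ones=90, rate=None, trials=100_000):
    """Monte Carlo estimate of the probability that standard bit mutation
    (each bit flipped independently with probability rate, default 1/n)
    strictly improves OneMax from a point with the given number of ones."""
    rate = rate or 1 / n
    x = [1] * ones + [0] * (n - ones)
    hits = 0
    for _ in range(trials):
        y = [b ^ (random.random() < rate) for b in x]
        hits += onemax(y) > onemax(x)
    return hits / trials

print(improvement_prob())   # shrinks as the point approaches the optimum
```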

Self-adjusting Population Sizes for the $(1,\lambda)$-EA on Monotone Functions

1 code implementation • 1 Apr 2022 • Marc Kaufmann, Maxime Larcher, Johannes Lengler, Xun Zou

Recently, Hevia Fajardo and Sudholt have shown that this setup with $c=1$ is efficient on OneMax for $s<1$, but inefficient if $s \ge 18$.
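
A minimal sketch of the self-adjusting mechanism the abstract refers to, assuming the standard success-based rule (shrink $\lambda$ by a factor $F$ after an improving generation, grow it by $F^{1/s}$ otherwise) and mutation rate $c/n$; constants and helper names are illustrative, not the authors' implementation.

```python
import random

def self_adjusting_lambda_ea(f, n, s=1.0, F=1.5, c=1.0, max_evals=1_000_000):
    """(1,lambda)-EA with a success-based rule on lambda: after a generation
    that improves fitness, lambda /= F; otherwise lambda *= F**(1/s).
    Standard bit mutation flips each bit with probability c/n."""
    x = [random.randint(0, 1) for _ in range(n)]
    lam, evals = 1.0, 0
    while evals < max_evals:
        offspring = []
        for _ in range(max(1, round(lam))):
            offspring.append([b ^ (random.random() < c / n) for b in x])
            evals += 1
        best = max(offspring, key=f)
        if f(best) > f(x):
            lam = max(1.0, lam / F)      # success: be greedier
        else:
            lam = lam * F ** (1 / s)     # failure: invest more offspring
        x = best                          # comma selection keeps the best child
        if all(x):
            return evals
    return None

print(self_adjusting_lambda_ea(sum, n=100))   # example run on OneMax
```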

Solving Static Permutation Mastermind using $O(n \log n)$ Queries

no code implementations • 3 Mar 2021 • Maxime Larcher, Anders Martinsson, Angelika Steger

Permutation Mastermind is a variant of the classical Mastermind game in which the number of positions $n$ equals the number of colors $k$, and repetition of colors is not allowed in either the codeword or the queries.

Combinatorics • Probability
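
To illustrate the setting, below is a toy Python setup for static Permutation Mastermind: both the codeword and all queries are permutations, every query is fixed before any feedback is seen, and feedback is the number of matching positions (since both sides use each color exactly once, only position matches are informative). The random queries are only a placeholder; the paper's strategy achieves $O(n \log n)$ queries.

```python
import random

def feedback(secret, query):
    """Mastermind feedback restricted to 'black pegs': positions where the
    query agrees with the secret codeword."""
    return sum(s == q for s, q in zip(secret, query))

def random_static_queries(n, m):
    """Static strategy sketch: all m queries (random permutations here) are
    committed to in advance, before any feedback arrives."""
    return [random.sample(range(n), n) for _ in range(m)]

n = 8
secret = random.sample(range(n), n)      # codeword: a permutation of n colors
queries = random_static_queries(n, m=16)
answers = [feedback(secret, q) for q in queries]
# a solver must now identify the secret from (queries, answers) alone
print(answers)
```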
