Reactive learning strategies for iterated games

11 Mar 2019  ·  Alex McAvoy, Martin A. Nowak

In an iterated game between two players, there is much interest in characterizing the set of feasible payoffs for both players when one player uses a fixed strategy and the other is free to vary. Such characterizations have led to the discovery of strategy classes such as extortionists, equalizers, partners, and rivals. Most of these studies use memory-one strategies, which specify the probabilities of taking each action depending on the outcome of the previous round. Here, we consider "reactive learning strategies," which gradually modify their propensity to take certain actions based on the past actions of the opponent. Every linear reactive learning strategy, $\mathbf{p}^{\ast}$, corresponds to a memory-one strategy, $\mathbf{p}$, and vice versa. We prove that to evaluate the region of feasible payoffs against a memory-one strategy, $\mathcal{C}\left(\mathbf{p}\right)$, it suffices to check its performance against at most $11$ other strategies. Thus, $\mathcal{C}\left(\mathbf{p}\right)$ is the convex hull in $\mathbb{R}^{2}$ of at most $11$ points. Furthermore, if $\mathbf{p}$ is a memory-one strategy, with feasible payoff region $\mathcal{C}\left(\mathbf{p}\right)$, and $\mathbf{p}^{\ast}$ is the corresponding reactive learning strategy, with feasible payoff region $\mathcal{C}\left(\mathbf{p}^{\ast}\right)$, then $\mathcal{C}\left(\mathbf{p}^{\ast}\right)$ is a subset of $\mathcal{C}\left(\mathbf{p}\right)$. Reactive learning strategies are therefore powerful tools for restricting the outcomes of iterated games.
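To make the feasible-payoff region concrete, here is a minimal numerical sketch in Python. It assumes the standard prisoner's dilemma payoffs $(R, S, T, P) = (3, 0, 5, 1)$, which are not specified in the abstract, computes long-run payoffs for a pair of memory-one strategies from the stationary distribution of the induced four-state Markov chain, and approximates $\mathcal{C}\left(\mathbf{p}\right)$ by sampling random memory-one opponents and taking the convex hull of the resulting payoff pairs. This sampling approach is for illustration only; it does not implement the paper's exact 11-strategy characterization.

```python
# A numerical sketch (not the paper's construction): estimate the feasible
# payoff region C(p) against a fixed memory-one strategy p by sampling random
# memory-one opponents. Payoffs (R, S, T, P) = (3, 0, 5, 1) are the usual
# prisoner's dilemma convention, assumed here for illustration.
import numpy as np
from scipy.spatial import ConvexHull

R, S, T, P = 3.0, 0.0, 5.0, 1.0  # assumed one-shot payoffs

def stationary_payoffs(p, q):
    """Long-run payoffs (to the p-player, to the q-player) for two memory-one
    strategies. p and q give cooperation probabilities after outcomes
    (CC, CD, DC, DD), each from the focal player's own perspective."""
    # State order (CC, CD, DC, DD) is from the p-player's perspective;
    # from the q-player's perspective the middle two states are swapped.
    q_sw = np.array([q[0], q[2], q[1], q[3]])
    M = np.empty((4, 4))
    for s in range(4):
        x, y = p[s], q_sw[s]  # cooperation probabilities this round
        M[s] = [x * y, x * (1 - y), (1 - x) * y, (1 - x) * (1 - y)]
    # Stationary distribution: left eigenvector of M with eigenvalue 1.
    w, v = np.linalg.eig(M.T)
    stat = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    stat /= stat.sum()
    pay_p = stat @ np.array([R, S, T, P])
    pay_q = stat @ np.array([R, T, S, P])
    return pay_p, pay_q

rng = np.random.default_rng(0)
p = np.array([0.9, 0.1, 0.8, 0.2])  # an arbitrary fixed memory-one strategy
# Each point is (opponent's payoff, payoff to the fixed strategy p).
pts = np.array([stationary_payoffs(q, p) for q in rng.uniform(size=(5000, 4))])
hull = ConvexHull(pts)
print("Approximate vertices of C(p):")
print(pts[hull.vertices])
```

Since the paper shows $\mathcal{C}\left(\mathbf{p}\right)$ is the convex hull of at most $11$ points, the sampled hull above approaches the true region from within as the number of sampled opponents grows.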
