Informational Design of Dynamic Multi-Agent System

7 May 2021 · Tao Zhang, Quanyan Zhu

This work considers a novel information design problem and studies how crafting payoff-relevant environmental signals alone can influence the behaviors of intelligent agents. The agents' strategic interactions are captured by a Markov game, in which each agent first selects one external signal from multiple signal sources as additional payoff-relevant information and then takes an action. A rational information designer (principal) possesses one of the signal sources and aims to influence the equilibrium behaviors of the agents by designing the information structure of the signals she sends to them. We propose a direct information design approach that incentivizes each agent to select the signal sent by the principal, so that the design process avoids having to predict the agents' strategic signal-selection behaviors. We then introduce a design protocol, given a goal of the designer, which we refer to as obedient implementability (OIL), and characterize OIL in a class of obedient sequential Markov perfect equilibria (O-SMPE). A design regime is proposed based on an approach we refer to as fixed-point alignment, which incentivizes the agents to choose the signal sent by the principal, guarantees that the agents' action-policy profile is the policy component of an O-SMPE, and ensures that the principal's goal is achieved. We then formulate the principal's optimal goal-selection problem in terms of information design and characterize it as minimizing the fixed-point misalignments. The proposed approach can be applied to elicit desired behaviors of multi-agent systems in both competitive and cooperative settings, and can be extended to heterogeneous stochastic games in complete- and incomplete-information environments.
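To give a concrete feel for the obedience idea behind direct information design, here is a minimal numerical sketch in a static, single-agent simplification: a principal commits to a signaling scheme that recommends actions, and the recommendation must be a best response under the posterior it induces. This is not the paper's dynamic multi-agent construction (which involves a Markov game, signal-source selection, and O-SMPE); the payoff tables, prior, and problem size below are hypothetical illustration only.

```python
# Hypothetical sketch: obedience-constrained signaling as a linear program
# (static, single-agent simplification; not the paper's algorithm).
# Variables x[s, a] = Pr(state = s, recommended action = a).
import numpy as np
from scipy.optimize import linprog

prior = np.array([0.6, 0.4])            # prior over 2 environmental states (hypothetical)
u_agent = np.array([[1.0, 0.0],         # agent payoff u_agent[s, a]
                    [0.0, 1.0]])
u_principal = np.array([[0.0, 1.0],     # principal payoff u_principal[s, a]
                        [0.0, 1.0]])

n_s, n_a = u_agent.shape
idx = lambda s, a: s * n_a + a          # flatten (state, action) pairs

# Objective: maximize E[u_principal] = sum_{s,a} x[s,a] * u_principal[s,a]
c = -np.array([u_principal[s, a] for s in range(n_s) for a in range(n_a)])

# Obedience: for each recommended a and each deviation a',
#   sum_s x[s,a] * (u_agent[s,a] - u_agent[s,a']) >= 0
A_ub, b_ub = [], []
for a in range(n_a):
    for a_dev in range(n_a):
        if a_dev == a:
            continue
        row = np.zeros(n_s * n_a)
        for s in range(n_s):
            row[idx(s, a)] = -(u_agent[s, a] - u_agent[s, a_dev])
        A_ub.append(row)
        b_ub.append(0.0)

# Consistency with the prior: sum_a x[s,a] = prior[s]
A_eq, b_eq = [], []
for s in range(n_s):
    row = np.zeros(n_s * n_a)
    for a in range(n_a):
        row[idx(s, a)] = 1.0
    A_eq.append(row)
    b_eq.append(prior[s])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n_s * n_a), method="highs")
signal = res.x.reshape(n_s, n_a) / prior[:, None]   # Pr(recommendation | state)
print("obedient signaling scheme:\n", signal.round(3))
```

In the paper's dynamic setting the analogous obedience requirement is imposed along the Markov game's trajectories (each agent prefers to select the principal's signal and follow it at every stage), and the fixed-point-alignment regime searches for information structures whose induced best-response fixed point coincides with the principal's goal.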
