Incentivising cooperation by rewarding the weakest member

4 Oct 2022  ·  Jory Schossau, Bamshad Shirmohammadi, Arend Hintze ·

Autonomous agents that interact with each other on behalf of humans are becoming more common in many social domains, such as customer service, transportation, and health care. In such social situations, greedy strategies can reduce the positive outcome for all agents, for example by leading to stop-and-go traffic on highways or causing a denial of service on a communications channel. Instead, we desire autonomous decision-making that performs efficiently while also considering the equitability of the group, in order to avoid these pitfalls. Unfortunately, in complex situations it is far easier to design machine learning objectives for selfish strategies than for equitable behaviors. Here we present a simple way to reward groups of agents, in both evolution and reinforcement learning domains, by the performance of their weakest member. We show how this yields "fairer," more equitable behavior while also maximizing individual outcomes, and we show the relationship to the biological selection mechanisms of group-level selection and inclusive fitness theory.
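As a minimal sketch of the reward scheme described above (not the authors' implementation), the idea can be expressed as assigning every agent in a group the return of its weakest member, so no agent benefits from outcomes that leave another agent behind. The function name weakest_member_reward and the NumPy-based code below are illustrative assumptions.

import numpy as np

def weakest_member_reward(individual_returns):
    """Assign every agent the return of the group's weakest member.

    individual_returns: per-agent returns for one episode or evaluation.
    Returns an array of the same length in which every entry equals the
    minimum return, so the group is rewarded only as much as its worst
    performer.
    """
    returns = np.asarray(individual_returns, dtype=float)
    return np.full_like(returns, returns.min())

# Example: three agents with unequal episode returns all receive the
# lowest return, 7.5, as their shared (group) reward signal.
per_agent = [12.0, 7.5, 9.3]
print(weakest_member_reward(per_agent))  # [7.5 7.5 7.5]

In an evolutionary setting this shared value would serve as the group's fitness; in reinforcement learning it would replace each agent's individual return when updating its policy.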
