no code implementations • 7 Mar 2024 • Elizaveta Tennant, Stephen Hailes, Mirco Musolesi
In multi-agent (social) environments, complex population-level phenomena may emerge from interactions between individual learning agents.
no code implementations • 4 Dec 2023 • Elizaveta Tennant, Stephen Hailes, Mirco Musolesi
In particular, we present three case studies of recent works that use learning from experience (i.e., Reinforcement Learning) to explicitly provide moral principles to learning agents, either as intrinsic rewards, moral logical constraints, or textual principles for language models.
2 code implementations • 20 Jan 2023 • Elizaveta Tennant, Stephen Hailes, Mirco Musolesi
In particular, we believe that an interesting and insightful starting point is the analysis of emergent behavior of Reinforcement Learning (RL) agents that act according to a predefined set of moral rewards in social dilemmas.
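One way to make this setup concrete is to combine the extrinsic game payoff with an intrinsic moral reward in a social dilemma. The following is a minimal illustrative sketch, assuming a standard Prisoner's Dilemma payoff matrix and two hypothetical moral-reward shapes (a utilitarian total-welfare term and a deontological defection penalty); the specific reward definitions and weights here are assumptions for illustration, not the paper's exact formulation.

```python
# Actions: 0 = Cooperate, 1 = Defect.
# Standard Prisoner's Dilemma payoffs (row player, column player) -- assumed values.
PAYOFFS = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def moral_reward(own_action, other_action, kind="utilitarian"):
    """Illustrative intrinsic moral reward (hypothetical shapes)."""
    own, other = PAYOFFS[(own_action, other_action)]
    if kind == "utilitarian":
        # Utilitarian agent values total welfare of both players.
        return own + other
    if kind == "deontological":
        # Deontological agent is penalized for the act of defecting itself.
        return -5.0 if own_action == 1 else 0.0
    raise ValueError(f"unknown moral reward kind: {kind}")

def total_reward(own_action, other_action, kind, weight=1.0):
    """Extrinsic game payoff plus a weighted intrinsic moral reward."""
    extrinsic, _ = PAYOFFS[(own_action, other_action)]
    return extrinsic + weight * moral_reward(own_action, other_action, kind)
```

An RL agent would then simply maximize `total_reward` instead of the raw game payoff, which is what lets moral preferences shape the emergent population-level behavior.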
Tasks: Multi-agent Reinforcement Learning, Reinforcement Learning (+1)