Search Results for author: Edward Hughes

Found 35 papers, 13 papers with code

Neural Contextual Reinforcement Framework for Logical Structure Language Generation

no code implementations20 Jan 2025 Marcus Irvin, William Cooper, Edward Hughes, Jessica Morgan, Christopher Hamilton

The Neural Contextual Reinforcement Framework introduces an innovative approach to enhancing the logical coherence and structural consistency of text generated by large language models.

Text Generation

Cultural Evolution of Cooperation among LLM Agents

1 code implementation13 Dec 2024 Aron Vallinder, Edward Hughes

Further, Claude 3.5 Sonnet can make use of an additional mechanism for costly punishment to achieve yet higher scores, while Gemini 1.5 Flash and GPT-4o fail to do so.
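The costly-punishment mechanism mentioned above can be illustrated with one round of a donor game; all payoff parameters below are illustrative assumptions, not values from the paper.

```python
def donor_round(donate, punish, endowment=10.0, multiplier=2.0,
                punish_cost=1.0, punish_penalty=6.0):
    """One round of a donor game with optional costly punishment.

    The donor may give part of its endowment, which is multiplied
    before reaching the recipient; the recipient may then pay a small
    cost to inflict a larger penalty on the donor. All parameter
    values are illustrative assumptions.
    """
    donor_payoff = endowment - donate
    recipient_payoff = multiplier * donate
    if punish:
        recipient_payoff -= punish_cost   # punisher pays a cost
        donor_payoff -= punish_penalty    # punished donor loses more
    return donor_payoff, recipient_payoff

# A defector (donates nothing) who gets punished ends up worse off
# than a cooperator who donates half, which is what makes punishment
# a credible deterrent in repeated interactions.
coop = donor_round(donate=5.0, punish=False)
defect = donor_round(donate=0.0, punish=True)
```

Because punishment is costly to the punisher, sustaining it across generations requires a cultural transmission mechanism, which is the regime the paper studies.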

Open-Endedness is Essential for Artificial Superhuman Intelligence

no code implementations6 Jun 2024 Edward Hughes, Michael Dennis, Jack Parker-Holder, Feryal Behbahani, Aditi Mavalankar, Yuge Shi, Tom Schaul, Tim Rocktaschel

In this position paper, we argue that the ingredients are now in place to achieve open-endedness in AI systems with respect to a human observer.

Towards a Pretrained Model for Restless Bandits via Multi-arm Generalization

no code implementations23 Oct 2023 Yunfan Zhao, Nikhil Behari, Edward Hughes, Edwin Zhang, Dheeraj Nagaraj, Karl Tuyls, Aparna Taneja, Milind Tambe

Restless multi-arm bandits (RMABs), a class of resource allocation problems with broad application in areas such as healthcare, online advertising, and anti-poaching, have recently been studied from a multi-agent reinforcement learning perspective.

Multi-agent Reinforcement Learning Multi-Armed Bandits +1
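The restless-bandit setting in the abstract above can be sketched as toy two-state arms with a per-step action budget and a myopic policy; the transition probabilities and the policy here are illustrative assumptions, not the paper's pretrained model.

```python
import numpy as np

def simulate_rmab(n_arms=5, budget=2, horizon=200, seed=0):
    """Myopic policy on a toy two-state restless multi-arm bandit.

    Each arm is in a 'good' (1) or 'bad' (0) state; acting on an arm
    raises its chance of being good next step, but all arms evolve
    every step whether acted on or not (hence 'restless'). Reward is
    the number of good arms. All probabilities are illustrative.
    """
    rng = np.random.default_rng(seed)
    # P[state] = probability the arm is good next step
    p_passive = np.array([0.3, 0.7])   # bad arms tend to stay bad
    p_active = np.array([0.8, 0.95])   # acting helps in either state
    states = np.ones(n_arms, dtype=int)
    total = 0.0
    for _ in range(horizon):
        # myopic gain from acting on each arm in its current state
        gain = p_active[states] - p_passive[states]
        chosen = np.argsort(gain)[::-1][:budget]     # top-k arms
        p_next = p_passive[states].copy()
        p_next[chosen] = p_active[states[chosen]]
        states = (rng.random(n_arms) < p_next).astype(int)
        total += states.sum()
    return total / horizon   # average number of good arms per step

avg_active = simulate_rmab(budget=2)
avg_idle = simulate_rmab(budget=0)
```

The budget constraint is what couples the arms: resources spent on one arm are unavailable to the others, which is why healthcare and anti-poaching allocation problems fit this template.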

Developing, Evaluating and Scaling Learning Agents in Multi-Agent Environments

no code implementations22 Sep 2022 Ian Gemp, Thomas Anthony, Yoram Bachrach, Avishkar Bhoopchand, Kalesha Bullard, Jerome Connor, Vibhavari Dasagi, Bart De Vylder, Edgar Duenez-Guzman, Romuald Elie, Richard Everett, Daniel Hennes, Edward Hughes, Mina Khan, Marc Lanctot, Kate Larson, Guy Lever, SiQi Liu, Luke Marris, Kevin R. McKee, Paul Muller, Julien Perolat, Florian Strub, Andrea Tacchetti, Eugene Tarassov, Zhe Wang, Karl Tuyls

The Game Theory & Multi-Agent team at DeepMind studies several aspects of multi-agent learning ranging from computing approximations to fundamental concepts in game theory to simulating social dilemmas in rich spatial environments and training 3-d humanoids in difficult team coordination tasks.

Deep Reinforcement Learning reinforcement-learning +1

Collaborating with Humans without Human Data

1 code implementation NeurIPS 2021 DJ Strouse, Kevin R. McKee, Matt Botvinick, Edward Hughes, Richard Everett

Here, we study the problem of how to train agents that collaborate well with human partners without using human data.

Multi-agent Reinforcement Learning

Open Problems in Cooperative AI

no code implementations15 Dec 2020 Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R. McKee, Joel Z. Leibo, Kate Larson, Thore Graepel

We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.

Scheduling

Learning to Incentivize Other Learning Agents

2 code implementations NeurIPS 2020 Jiachen Yang, Ang Li, Mehrdad Farajtabar, Peter Sunehag, Edward Hughes, Hongyuan Zha

The challenge of developing powerful and general Reinforcement Learning (RL) agents has received increasing attention in recent years.

General Reinforcement Learning Reinforcement Learning (RL)

Social diversity and social preferences in mixed-motive reinforcement learning

no code implementations6 Feb 2020 Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, Joel Z. Leibo

Recent research on reinforcement learning in pure-conflict and pure-common interest games has emphasized the importance of population heterogeneity.

Diversity reinforcement-learning +2

Intrinsic Social Motivation via Causal Influence in Multi-Agent RL

no code implementations ICLR 2019 Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro A. Ortega, DJ Strouse, Joel Z. Leibo, Nando de Freitas

Therefore, we also employ influence to train agents to use an explicit communication channel, and find that it leads to more effective communication and higher collective reward.

counterfactual Counterfactual Reasoning +3

Learning Reciprocity in Complex Sequential Social Dilemmas

no code implementations19 Mar 2019 Tom Eccles, Edward Hughes, János Kramár, Steven Wheelwright, Joel Z. Leibo

We analyse the resulting policies to show that the reciprocating agents are strongly influenced by their co-players' behavior.

Reinforcement Learning

Malthusian Reinforcement Learning

no code implementations17 Dec 2018 Joel Z. Leibo, Julien Perolat, Edward Hughes, Steven Wheelwright, Adam H. Marblestone, Edgar Duéñez-Guzmán, Peter Sunehag, Iain Dunning, Thore Graepel

Here we explore a new algorithmic framework for multi-agent reinforcement learning, called Malthusian reinforcement learning, which extends self-play to include fitness-linked population size dynamics that drive ongoing innovation.

Multi-agent Reinforcement Learning reinforcement-learning +2
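The fitness-linked population dynamics described above can be sketched as a proportional reallocation of a fixed population budget; this mirrors only the population-size ingredient of Malthusian RL, not the full self-play training loop, and all numbers are illustrative.

```python
def malthusian_update(pop_sizes, fitnesses, total=100.0):
    """Fitness-linked population-size update (illustrative sketch).

    Each subpopulation's share of a fixed total grows in proportion
    to the reward ('fitness') its members collected, so successful
    strategies field more agents in the next generation.
    """
    weights = [max(s, 0.0) * f for s, f in zip(pop_sizes, fitnesses)]
    z = sum(weights)
    if z == 0:
        return list(pop_sizes)   # degenerate case: keep sizes as-is
    return [total * w / z for w in weights]

# A subpopulation earning twice the reward grows to twice the size.
sizes = malthusian_update([50.0, 50.0], fitnesses=[2.0, 1.0])
```

Coupling population size to reward creates the selection pressure that, in the paper's framing, drives ongoing innovation beyond what plain self-play provides.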

Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning

1 code implementation4 Nov 2018 Jakob N. Foerster, Francis Song, Edward Hughes, Neil Burch, Iain Dunning, Shimon Whiteson, Matthew Botvinick, Michael Bowling

We present the Bayesian action decoder (BAD), a new multi-agent learning method that uses an approximate Bayesian update to obtain a public belief that conditions on the actions taken by all agents in the environment.

Decoder Multi-agent Reinforcement Learning +4
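The public-belief idea in the abstract above can be sketched as a Bayes update that conditions a shared belief over an agent's private state on its observed action; the names and structure here are illustrative, not the paper's exact formulation.

```python
def public_belief_update(belief, action, policy):
    """Bayesian public-belief update (illustrative sketch).

    `belief[h]` is the current public probability that an agent's
    private state is h; `policy[h][action]` is the commonly known
    probability that an agent holding h takes `action`. Observing the
    action lets every agent apply Bayes' rule to the shared belief.
    """
    posterior = {h: belief[h] * policy[h].get(action, 0.0)
                 for h in belief}
    z = sum(posterior.values())
    if z == 0:
        return dict(belief)  # action impossible under model; keep prior
    return {h: p / z for h, p in posterior.items()}

# Toy example: hinting is much likelier when holding "red", so the
# public belief shifts sharply toward "red" after observing a hint.
belief = {"red": 0.5, "blue": 0.5}
policy = {"red": {"hint": 0.9, "play": 0.1},
          "blue": {"hint": 0.2, "play": 0.8}}
belief = public_belief_update(belief, "hint", policy)
```

Because the policy itself is common knowledge, actions double as signals, which is the informational channel BAD exploits in cooperative partial-information games.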

Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning

3 code implementations ICLR 2019 Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro A. Ortega, DJ Strouse, Joel Z. Leibo, Nando de Freitas

We propose a unified mechanism for achieving coordination and communication in Multi-Agent Reinforcement Learning (MARL), through rewarding agents for having causal influence over other agents' actions.

counterfactual Counterfactual Reasoning +4
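The causal-influence reward described above can be sketched as the KL divergence between another agent's policy conditioned on the influencer's actual action and its marginal policy with that action counterfactually averaged out; the distributions below are toy values, not the paper's learned models.

```python
import math

def influence_reward(policy_b_given_a, p_a, a_taken):
    """Counterfactual causal-influence reward (illustrative sketch).

    Agent A is rewarded with KL(p(b | a_taken) || p(b)), where p(b)
    marginalizes A's action out under A's action distribution `p_a`.
    The reward is large when A's action visibly changes what B does.
    """
    actions_b = policy_b_given_a[a_taken].keys()
    # B's marginal policy over A's counterfactual actions
    marginal = {b: sum(p_a[a] * policy_b_given_a[a][b] for a in p_a)
                for b in actions_b}
    cond = policy_b_given_a[a_taken]
    return sum(cond[b] * math.log(cond[b] / marginal[b])
               for b in actions_b if cond[b] > 0)

# B strongly mirrors A's action, so A's influence reward is positive.
policy_b_given_a = {0: {"left": 0.9, "right": 0.1},
                    1: {"left": 0.1, "right": 0.9}}
p_a = {0: 0.5, 1: 0.5}
r = influence_reward(policy_b_given_a, p_a, a_taken=0)
```

If B ignored A entirely, the conditional and marginal policies would coincide and the reward would be zero, so the intrinsic motivation only pays out for genuine influence.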

Learning to Understand Goal Specifications by Modelling Reward

1 code implementation ICLR 2019 Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, Edward Grefenstette

Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards.

Deep Reinforcement Learning
