Search Results for author: Robert Loftin

Found 10 papers, 2 papers with code

AI4GCC - Team: Below Sea Level: Critiques and Improvements

no code implementations · 26 Jul 2023 · Bram Renting, Phillip Wozny, Robert Loftin, Claudia Wieners, Erman Acar

We present a critical analysis of the simulation framework RICE-N, an integrated assessment model (IAM) for evaluating the impacts of climate change on the economy.

AI4GCC-Team -- Below Sea Level: Score and Real World Relevance

no code implementations · 26 Jul 2023 · Phillip Wozny, Bram Renting, Robert Loftin, Claudia Wieners, Erman Acar

As our submission for track three of the AI for Global Climate Cooperation (AI4GCC) competition, we propose a negotiation protocol for use in the RICE-N climate-economic simulation.

Towards a Unifying Model of Rationality in Multiagent Systems

no code implementations · 29 May 2023 · Robert Loftin, Mustafa Mert Çelikok, Frans A. Oliehoek

Multiagent systems deployed in the real world need to cooperate with other agents (including humans) nearly as effectively as these agents cooperate with one another.

Uncoupled Learning of Differential Stackelberg Equilibria with Commitments

no code implementations · 7 Feb 2023 · Robert Loftin, Mustafa Mert Çelikok, Herke van Hoof, Samuel Kaski, Frans A. Oliehoek

A natural solution concept for many multiagent settings is the Stackelberg equilibrium, under which a "leader" agent selects a strategy that maximizes its own payoff assuming the "follower" chooses their best response to this strategy.

Multi-agent Reinforcement Learning
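The Stackelberg idea described in the abstract above can be made concrete in a small matrix game. The sketch below is illustrative only (the payoff numbers and function names are hypothetical, not from the paper): the leader commits to an action, the follower best-responds, and the leader picks the commitment that maximizes its own payoff given that anticipated response.

```python
import numpy as np

# Hypothetical 2x2 payoff matrices (rows: leader actions, cols: follower actions).
# These numbers are purely illustrative.
U_leader = np.array([[3.0, 1.0],
                     [4.0, 0.0]])
U_follower = np.array([[2.0, 1.0],
                       [0.0, 3.0]])

def stackelberg_pure(U_L, U_F):
    """Enumerate pure leader commitments; follower best-responds to each."""
    best_value, best_pair = -np.inf, None
    for a_L in range(U_L.shape[0]):
        a_F = int(np.argmax(U_F[a_L]))       # follower's best response
        if U_L[a_L, a_F] > best_value:       # leader anticipates this response
            best_value, best_pair = U_L[a_L, a_F], (a_L, a_F)
    return best_pair, best_value

pair, value = stackelberg_pure(U_leader, U_follower)  # → ((0, 0), 3.0)
```

Note that committing to row 1 would give the leader 4.0 if the follower played column 0, but the follower's best response to row 1 is column 1, leaving the leader with 0.0; anticipating the follower's response is what distinguishes the Stackelberg solution from naive payoff maximization.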

On the Impossibility of Learning to Cooperate with Adaptive Partner Strategies in Repeated Games

no code implementations · 20 Jun 2022 · Robert Loftin, Frans A. Oliehoek

Learning to cooperate with other agents is challenging when those agents also possess the ability to adapt to our own behavior.

Strategically Efficient Exploration in Competitive Multi-agent Reinforcement Learning

1 code implementation · 30 Jul 2021 · Robert Loftin, Aadirupa Saha, Sam Devlin, Katja Hofmann

High sample complexity remains a barrier to the application of reinforcement learning (RL), particularly in multi-agent systems.

Efficient Exploration · Multi-agent Reinforcement Learning · +2

Better Exploration with Optimistic Actor Critic

1 code implementation · NeurIPS 2019 · Kamil Ciosek, Quan Vuong, Robert Loftin, Katja Hofmann

To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic, which approximates a lower and upper confidence bound on the state-action value function.

Continuous Control · Efficient Exploration
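The bound-approximation idea in the abstract above can be sketched in a few lines. This is a simplified, assumed reading rather than the paper's implementation: twin critic estimates are averaged to get a mean Q-value, half their disagreement stands in for the epistemic spread, and scaled offsets give an upper bound (for exploration) and a lower bound (for conservative updates). The `beta` values are placeholders, not tuned hyperparameters.

```python
import numpy as np

def q_bounds(q1, q2, beta_ub=4.66, beta_lb=1.0):
    """Approximate upper/lower confidence bounds on Q from two critics.

    Illustrative sketch: the critics' mean estimates Q, and half their
    disagreement serves as a stand-in for epistemic standard deviation.
    The beta coefficients here are placeholder values.
    """
    mu = (q1 + q2) / 2.0
    sigma = np.abs(q1 - q2) / 2.0
    return mu + beta_ub * sigma, mu - beta_lb * sigma

# Two critics disagreeing on the values of two state-action pairs.
ub, lb = q_bounds(np.array([1.0, 2.0]), np.array([0.5, 2.5]))
```

Where the critics disagree more, the gap between the bounds widens, so an optimistic explorer is drawn toward the actions whose values are most uncertain.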

Better Exploration with Optimistic Actor-Critic

no code implementations · 28 Oct 2019 · Kamil Ciosek, Quan Vuong, Robert Loftin, Katja Hofmann

To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic, which approximates a lower and upper confidence bound on the state-action value function.

Continuous Control · Efficient Exploration

Interactive Learning of Environment Dynamics for Sequential Tasks

no code implementations · 19 Jul 2019 · Robert Loftin, Bei Peng, Matthew E. Taylor, Michael L. Littman, David L. Roberts

In order for robots and other artificial agents to efficiently learn to perform useful tasks defined by an end user, they must understand not only the goals of those tasks, but also the structure and dynamics of that user's environment.

Interactive Learning from Policy-Dependent Human Feedback

no code implementations · ICML 2017 · James MacGlashan, Mark K. Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman

This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback.
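One way to read the setup in the abstract above is as a policy-gradient update in which the human's positive or negative feedback plays the role of an advantage signal. The single-state softmax sketch below is a simplified illustration under that assumed reading (the function names and the one-state setting are hypothetical, not the paper's algorithm):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over action preferences."""
    e = np.exp(x - x.max())
    return e / e.sum()

def feedback_update(theta, action, feedback, lr=0.5):
    """Treat human feedback as an advantage in a policy-gradient step.

    Simplified sketch: theta holds softmax preferences over actions in a
    single state; positive feedback raises the taken action's probability,
    negative feedback lowers it.
    """
    pi = softmax(theta)
    grad_log = -pi
    grad_log[action] += 1.0          # gradient of log pi(action) wrt theta
    return theta + lr * feedback * grad_log

theta = np.zeros(3)                              # uniform initial policy
theta = feedback_update(theta, action=1, feedback=+1.0)  # teacher approves
```

Because the gradient is taken through the current policy, the effect of a given feedback signal depends on what the agent currently does, which is the "policy-dependent" aspect the title refers to.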
