Learning to Infer Unseen Contexts in Causal Contextual Reinforcement Learning

In Contextual Reinforcement Learning (CRL), a change in the context variable can cause a change in the distribution of the states. Contextual agents must therefore learn adaptive policies that change when the context changes. Furthermore, in certain scenarios agents have to deal with unseen contexts and still choose suitable actions. To generalise to unseen contexts, agents need not only to detect and adapt to previously observed contexts, but also to reason about how a context is constructed and what the causal factors of the context variables are. In this paper, we propose a new task and environment for Causal Contextual Reinforcement Learning (CCRL), in which the performance of different agents can be compared on a causal reasoning task. Furthermore, we introduce a Contextual Attention Module that allows the agent to incorporate disentangled features as contextual factors, which improves the agent's performance in unseen contexts. Finally, we demonstrate that non-causal agents fail to generalise to unseen contexts, while agents incorporating the proposed module achieve better performance in unseen contexts.
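To make the idea of a Contextual Attention Module concrete, below is a minimal sketch of one plausible design: the agent's state forms a query that attends over a set of disentangled context factors (e.g., latents from a pretrained disentangling encoder), producing a context embedding for the policy. The abstract does not specify the architecture, so the layer sizes, the dot-product attention scheme, and the `ContextualAttention` name are all illustrative assumptions, not the paper's exact method.

```python
# Illustrative sketch of a contextual attention module; the attention
# scheme and dimensions are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualAttention(nn.Module):
    def __init__(self, state_dim: int, factor_dim: int, embed_dim: int):
        super().__init__()
        self.query = nn.Linear(state_dim, embed_dim)   # query from the agent's state
        self.key = nn.Linear(factor_dim, embed_dim)    # keys from each context factor
        self.value = nn.Linear(factor_dim, embed_dim)  # values from each context factor

    def forward(self, state: torch.Tensor, factors: torch.Tensor) -> torch.Tensor:
        # state:   (batch, state_dim)
        # factors: (batch, n_factors, factor_dim) -- disentangled context factors
        q = self.query(state).unsqueeze(1)                     # (batch, 1, embed_dim)
        k = self.key(factors)                                  # (batch, n_factors, embed_dim)
        v = self.value(factors)                                # (batch, n_factors, embed_dim)
        scores = (q @ k.transpose(1, 2)) / k.shape[-1] ** 0.5  # scaled dot-product scores
        weights = F.softmax(scores, dim=-1)                    # attention over the factors
        return (weights @ v).squeeze(1)                        # (batch, embed_dim) context embedding
```

In such a design, the resulting context embedding would typically be concatenated with the state and fed to the policy network, so the policy can condition on the attended causal factors even for context values never observed during training.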
