A Hybrid Neuro-Symbolic Approach for Text-Based Games Using Inductive Logic Programming

Text-based games (TBGs) have emerged as an important test bed, requiring reinforcement learning (RL) agents to combine natural language understanding with reasoning. A key challenge for agents solving this task is to generalize across multiple games and achieve good results on both seen and unseen objects. Current pure deep learning-based RL systems can perform well on known entities and states; they perform poorly, however, in novel situations, e.g., when handling out-of-vocabulary (OOV) objects. From the perspective of generalization, recent efforts that infuse external commonsense knowledge into an RL agent show better results than pure deep learning systems. However, the policies learned by these systems are neither interpretable nor easily transferable. To tackle these issues, we have designed a hybrid neuro-symbolic framework for TBGs that uses symbolic reasoning alongside the neural RL model. It employs inductive logic programming (ILP) to learn the symbolic rules (policies) as a default theory with exceptions, represented as an answer set program (ASP), which allows non-monotonic reasoning in the partially observable game environment. We use WordNet as an external knowledge source to lift the learned rules to their generalized versions. These rules are learned in an online manner and applied with an ASP solver to predict an action for the agent. We show that agents incorporating the neuro-symbolic hybrid approach with the generalized rules outperform the baseline agents.
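To make the rule representation concrete, here is a minimal, hypothetical sketch of what a learned policy fragment could look like as an ASP default theory with an exception, solved with the clingo Python API. The predicate names (`take`, `observed`, `fixed`, `ab_take`) and the facts are illustrative assumptions, not taken from the paper; negation as failure (`not`) provides the non-monotonic behaviour the abstract refers to, withdrawing the default "take any observed object" once an exception is known.

```python
import clingo  # pip install clingo

# Illustrative policy fragment: a default rule with an exception,
# encoded as an answer set program.
PROGRAM = """
% Default: take any observed object, unless it is abnormal to do so.
take(X) :- observed(X), not ab_take(X).
% Exception: objects fixed in place should not be taken.
ab_take(X) :- fixed(X).

% Hypothetical observations from the current game state.
observed(apple). observed(oven).
fixed(oven).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
# The answer set contains take(apple) but not take(oven):
# the exception blocks the default for the fixed object.
ctl.solve(on_model=lambda m: print(m))
```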
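The WordNet-based lifting step can be illustrated with a small sketch using NLTK's WordNet interface. The hypernym-based `lift_entity` helper below is our own assumption of what lifting a rule's constants to a generalized class might involve, not the paper's exact procedure.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def lift_entity(word: str) -> str:
    """Replace a concrete entity with its WordNet hypernym (generalized class)."""
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        return word  # unknown to WordNet: leave the entity as-is
    hypernyms = synsets[0].hypernyms()
    return hypernyms[0].lemma_names()[0] if hypernyms else word

# A rule learned for "apple" could then also fire for unseen objects
# of the same generalized class, e.g. "pear".
print(lift_entity("apple"))  # -> "edible_fruit"
```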
