Learning Lower Bounds for Graph Exploration With Reinforcement Learning

We explore the use of reinforcement learning for problems in theoretical computer science. Reinforcement learning has been shown to find strong solutions in challenging domains such as chess and Go. Theoretical problems, such as finding the worst possible input for an algorithm, come with even vaster combinatorial search spaces. In this paper, we consider online graph exploration as an example: we want to find graphs on which a greedy explorer attains a high competitive ratio. The search space consists of all graphs on a fixed node set, with every potential edge either present or absent. Since a graph on n nodes has quadratically many potential edges and every subset of them is a candidate solution, there are 2^(n(n-1)/2) candidate graphs, an infeasibly large search space even for small n. We show experimentally how carefully chosen constraints can keep such search spaces manageable. As a result, we learn graphs that resemble those known from the literature and even improve on them, yielding higher competitive ratios.
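
To make the objects in the abstract concrete, here is a minimal sketch (not the authors' implementation) of the greedy nearest-neighbour explorer and the competitive ratio it is evaluated by. It assumes networkx, a connected graph with optional edge weights, an explorer that always walks a shortest known path to the closest unvisited node and finally returns to the start, and an offline optimum computed by brute force, which is only viable for few nodes; all function names are illustrative.

```python
import itertools

import networkx as nx


def greedy_exploration_cost(G, start=0):
    """Cost of the greedy (nearest-neighbour) online explorer on G."""
    visited = {start}
    pos, cost = start, 0.0
    while len(visited) < G.number_of_nodes():
        # The explorer only knows edges incident to already-visited nodes.
        known = G.edge_subgraph(
            (u, v) for u, v in G.edges if u in visited or v in visited
        )
        dist = nx.single_source_dijkstra_path_length(known, pos)
        # Walk a shortest known path to the closest unvisited node.
        target = min((v for v in dist if v not in visited), key=dist.__getitem__)
        cost += dist[target]
        visited.add(target)
        pos = target
    # Return to the start, so greedy and optimum both form closed tours.
    cost += nx.shortest_path_length(G, pos, start, weight="weight")
    return cost


def optimal_tour_cost(G, start=0):
    """Brute-force cheapest closed tour visiting all nodes (few nodes only)."""
    d = dict(nx.all_pairs_dijkstra_path_length(G))
    others = [v for v in G if v != start]
    return min(
        sum(d[a][b] for a, b in zip((start, *perm), (*perm, start)))
        for perm in itertools.permutations(others)
    )


def competitive_ratio(G, start=0):
    return greedy_exploration_cost(G, start) / optimal_tour_cost(G, start)
```

On easy instances the ratio is close to 1 (e.g. `competitive_ratio(nx.cycle_graph(5))` evaluates to 1.0); the hard instances the paper searches for are precisely the graphs that push this ratio up.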
