lilGym is a benchmark for language-conditioned reinforcement learning, based on 2,661 highly compositional, human-written natural language statements grounded in an interactive visual environment. Each statement is paired with multiple start states and reward functions to form thousands of distinct Markov Decision Processes (MDPs) of varying difficulty.
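The statement/start-state/reward pairing can be sketched as a simple cross product. Everything below is illustrative: the statements, state names, and `target` field are hypothetical placeholders, not the actual lilGym API or data.

```python
from itertools import product

# Hypothetical examples of the pairing described above: each natural-language
# statement, combined with a start state and a reward condition (e.g. act so
# the statement becomes true, or becomes false), yields one distinct MDP.
statements = [
    "there is exactly one black circle",            # placeholder statement
    "every tower has a yellow block at the top",    # placeholder statement
]
start_states = ["empty_board", "partially_filled_board"]  # placeholder states
target_truth_values = [True, False]  # reward: make the statement true / false

mdps = [
    {"statement": s, "start_state": st, "target": t}
    for s, st, t in product(statements, start_states, target_truth_values)
]

print(len(mdps))  # 2 statements x 2 start states x 2 targets = 8 MDPs
```

Scaling this combinatorial pairing to all 2,661 statements is what produces the thousands of distinct MDPs the benchmark reports.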
Source: lilGym: Natural Language Visual Reasoning with Reinforcement Learning