
WFA-IRL: Inverse Reinforcement Learning of Autonomous Behaviors Encoded as Weighted Finite Automata

This paper presents a method for learning logical task specifications and cost functions from demonstrations. Constructing such specifications by hand is challenging for the complex objectives and constraints of autonomous systems. Instead, we consider demonstrated task executions, whose logical structure and transition costs must be inferred by an autonomous agent. We employ a spectral learning approach to extract a weighted finite automaton (WFA) that approximates the unknown task logic. We then define a product between the WFA, which provides high-level task guidance, and a labeled Markov decision process for low-level control. An inverse reinforcement learning (IRL) problem is formulated to learn a cost function by backpropagating the loss between agent and expert behaviors through the planning algorithm. Our proposed model, termed WFA-IRL, generalizes execution of the inferred task specification across a suite of MiniGrid environments.
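The spectral learning step mentioned in the abstract can be illustrated with a standard Hankel-matrix construction: a truncated SVD of demonstration statistics yields the WFA's initial vector, transition operators, and final vector. The sketch below is not the authors' implementation; the function names (`spectral_wfa`, `wfa_value`) and the choice of inputs (Hankel sub-blocks `H_lambda`, `H_sigma`, and prefix/suffix value vectors `h_P`, `h_S`) are illustrative assumptions based on the generic spectral WFA-learning recipe.

```python
# Minimal sketch of spectral WFA learning (not the paper's code).
# Assumes Hankel statistics have already been estimated from demonstrations.
import numpy as np

def spectral_wfa(H_lambda, H_sigma, h_P, h_S, rank):
    """Estimate WFA parameters from Hankel sub-blocks.

    H_lambda : (|P|, |S|) Hankel matrix with entries f(p s)
    H_sigma  : dict mapping each symbol a to the (|P|, |S|) block f(p a s)
    h_P      : (|P|,) vector with entries f(p)
    h_S      : (|S|,) vector with entries f(s)
    rank     : target number of WFA states
    """
    # Truncated SVD of the empty-string Hankel block.
    U, D, Vt = np.linalg.svd(H_lambda, full_matrices=False)
    V = Vt[:rank].T                      # (|S|, rank) right factor

    # Pseudo-inverse of the Hankel matrix projected onto the right factor.
    P = np.linalg.pinv(H_lambda @ V)     # (rank, |P|)

    alpha0 = h_S @ V                                    # initial weight vector
    alpha_inf = P @ h_P                                 # final weight vector
    A = {a: P @ Ha @ V for a, Ha in H_sigma.items()}    # transition operators
    return alpha0, A, alpha_inf

def wfa_value(word, alpha0, A, alpha_inf):
    """Score a label sequence: f(x) = alpha0^T A_{x_1} ... A_{x_t} alpha_inf."""
    v = alpha0.copy()
    for a in word:
        v = v @ A[a]
    return float(v @ alpha_inf)
```

In the context described by the abstract, the resulting WFA would score label sequences produced by the labeled MDP, and its state vector would serve as the high-level component of the product model used for planning and IRL.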
