Model-Based Reinforcement Learning for Approximate Optimal Control with Temporal Logic Specifications

18 Jan 2021 · Max Cohen, Calin Belta

In this paper, we study the problem of synthesizing optimal control policies for uncertain continuous-time nonlinear systems from syntactically co-safe linear temporal logic (scLTL) formulas. We formulate this problem as a sequence of reach-avoid optimal control sub-problems. We show that the resulting hybrid optimal control policy guarantees the satisfaction of a given scLTL formula by constructing a barrier certificate. Since solving each optimal control problem exactly may be computationally intractable, we take a learning-based approach to approximately solve this sequence of optimal control problems online, without requiring full knowledge of the system dynamics. Using Lyapunov-based tools, we develop sufficient conditions under which our approximate solution maintains correctness. Finally, we demonstrate the efficacy of the developed method with a numerical example.
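As a rough illustration of the decomposition described in the abstract, the Python sketch below sequences two reach-avoid sub-problems under single-integrator dynamics, with a hybrid policy that switches controllers once each goal region is reached. This is not the authors' method: the paper's learned approximate optimal policies and barrier/Lyapunov certificates are replaced here by a hand-coded potential-field surrogate, and all dynamics, gains, region placements, and the example scLTL task are illustrative assumptions.

```python
import numpy as np

DT = 0.05  # Euler step for the single-integrator dynamics x_dot = u

def reached(x, center, radius):
    """Check membership in a circular goal region."""
    return np.linalg.norm(x - center) <= radius

def reach_avoid_policy(x, goal, obstacle, obs_radius, k_att=1.0, k_rep=0.5):
    """Stand-in for one (approximately optimal) reach-avoid policy:
    an attractive term toward the goal region plus a repulsive term
    near the avoid region, with the control norm capped for stability."""
    u = k_att * (goal - x)
    d = np.linalg.norm(x - obstacle)
    if d < 2.0 * obs_radius:                    # repel only near the avoid set
        u += k_rep * (x - obstacle) / (d**3 + 1e-9)
    return u / max(1.0, np.linalg.norm(u))      # saturate |u| <= 1

# Sub-problem sequence induced by an (illustrative) scLTL task of the form
# "eventually reach A, then eventually reach B, while always avoiding O".
subgoals = [np.array([2.0, 0.0]), np.array([2.0, 2.0])]  # regions A, B
obstacle = np.array([1.0, 0.4])                          # avoid region O
obs_radius, goal_radius = 0.3, 0.1

x = np.zeros(2)
for i, goal in enumerate(subgoals):  # hybrid policy: one mode per sub-problem
    for _ in range(5000):
        if reached(x, goal, goal_radius):
            break
        x = x + DT * reach_avoid_policy(x, goal, obstacle, obs_radius)
    print(f"sub-problem {i + 1}: x = {np.round(x, 3)} (target {goal})")
```

In the paper's setting, each call to `reach_avoid_policy` (a hypothetical name) would instead be an online, model-based RL approximation of the corresponding optimal control sub-problem, and the barrier certificate would guarantee that executing the sub-policies in sequence satisfies the scLTL formula.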
