Set-Invariant Constrained Reinforcement Learning with a Meta-Optimizer

19 Jun 2020 · Chuangchuang Sun, Dong-Ki Kim, Jonathan P. How

This paper investigates reinforcement learning with safety constraints. To drive the constraint violation to decrease monotonically, the constraints are taken as Lyapunov functions, and new linear constraints are imposed on the updating dynamics of the policy parameters such that the original safety set is forward-invariant in expectation...
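Only the abstract is available on this page, so the snippet below is a rough sketch, not the paper's algorithm: it illustrates one common way a linear constraint on the policy-parameter update can enforce a Lyapunov-style decrease of the constraint violation. The function name `constrained_update`, the decay rate `eta`, and the single-constraint setting are illustrative assumptions.

```python
# Sketch (assumed, not from the paper): project the policy-gradient step onto
# the half-space {delta : constraint_grad . delta <= -eta * violation}, so the
# expected constraint violation decreases along the update.
import numpy as np

def constrained_update(policy_grad, constraint_grad, violation, lr=1e-2, eta=0.1):
    step = lr * policy_grad                        # unconstrained ascent direction
    slack = -eta * violation - constraint_grad @ step
    if slack >= 0:                                 # linear constraint already satisfied
        return step
    # Closed-form projection onto the violated half-space.
    correction = slack / (constraint_grad @ constraint_grad + 1e-8)
    return step + correction * constraint_grad

# Toy usage with random gradients.
rng = np.random.default_rng(0)
theta = rng.normal(size=8)
g, gc = rng.normal(size=8), rng.normal(size=8)
theta = theta + constrained_update(g, gc, violation=0.5)
```

Under this assumed condition, any update direction satisfying the half-space constraint keeps the violation shrinking in expectation, which is the forward-invariance idea the abstract describes; the paper's meta-optimizer and exact constraint form are not reproduced here.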


Code


No code implementations yet.
