no code implementations • 30 Aug 2022 • Bettina Könighofer, Roderick Bloem, Rüdiger Ehlers, Christian Pek
In this paper, we are interested in techniques for constructing runtime enforcers for the concrete application domain of enforcing safety in AI.
no code implementations • 27 Jan 2021 • Ingy Elsayed-Aly, Suda Bharadwaj, Christopher Amato, Rüdiger Ehlers, Ufuk Topcu, Lu Feng
Multi-agent reinforcement learning (MARL) has been increasingly used in a wide range of safety-critical applications, which require guaranteed safety (e.g., no unsafe states are ever visited) during the learning process. Unfortunately, current MARL methods do not have safety guarantees.
Multi-agent Reinforcement Learning • reinforcement-learning +1
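The entries above concern runtime enforcement (shielding): a monitor intercepts an agent's proposed action and overrides it whenever executing it would reach an unsafe state. The following is an illustrative sketch only, not code from either paper; the toy dynamics (`transition`), the unsafe-state set, and the fallback action are all hypothetical assumptions.

```python
# Hypothetical toy setup: 5 states (0..4), actions are -1, 0, or +1,
# and state 3 is declared unsafe. None of this comes from the papers.
UNSAFE_STATES = {3}

def transition(state, action):
    # Deterministic toy dynamics: move by `action`, clamped to [0, 4].
    return max(0, min(4, state + action))

def shield(state, proposed_action, fallback_action=0):
    """Return the proposed action if its successor state is safe,
    otherwise substitute a safe fallback action."""
    if transition(state, proposed_action) in UNSAFE_STATES:
        # Staying put (action 0) never enters an unsafe state here.
        assert transition(state, fallback_action) not in UNSAFE_STATES
        return fallback_action
    return proposed_action

# An agent at state 2 proposing +1 would enter unsafe state 3,
# so the shield overrides the action with the fallback:
print(shield(2, +1))  # overridden to 0
print(shield(1, +1))  # safe, passes through as +1
```

The key property of such an enforcer is that safety holds regardless of what the learning agents propose, since every executed action is filtered through the shield.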