no code implementations • 23 Jun 2023 • Yash Paliwal, Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Xiaoming Duan, Ufuk Topcu, Zhe Xu
We study a class of reinforcement learning (RL) tasks where the objective of the agent is to accomplish temporally extended goals.
no code implementations • 16 Jun 2023 • Zeyuan Jin, Nasim Baharisangari, Zhe Xu, Sze Zheng Yong
To tackle this problem, we propose data-driven methods that over-approximate the unknown dynamics and infer the unknown specifications, such that the resulting set-membership models and LTL formulas are guaranteed to contain the ground-truth model and specification, respectively.
no code implementations • 2 Dec 2022 • Jean-Raphaël Gaglione, Rajarshi Roy, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
Learning linear temporal logic (LTL) formulas from examples labeled as positive or negative has found applications in inferring descriptions of system behavior.
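A core subroutine in this line of work is testing whether a candidate formula is consistent with the labeled examples. The sketch below is illustrative only (not the paper's algorithm): it evaluates two common temporal operators, F (eventually) and G (globally), over finite traces and checks that a candidate accepts every positive trace and rejects every negative one.

```python
# Illustrative sketch, not the paper's method: traces are sequences of
# sets of atomic propositions, labeled positive or negative.

def eventually(trace, prop):
    """F prop: prop holds at some position of the finite trace."""
    return any(prop in step for step in trace)

def globally(trace, prop):
    """G prop: prop holds at every position of the finite trace."""
    return all(prop in step for step in trace)

def consistent(formula, positives, negatives):
    """A candidate formula is consistent with the examples if it
    accepts every positive trace and rejects every negative one."""
    return (all(formula(t) for t in positives)
            and not any(formula(t) for t in negatives))

positives = [[{"a"}, {"b"}, {"a", "b"}], [{"b"}, {"a"}]]
negatives = [[{"b"}, {"b"}], [set()]]

candidate = lambda t: eventually(t, "a")  # the formula "F a"
print(consistent(candidate, positives, negatives))  # True
```

A learner would enumerate or search over candidate formulas and keep the smallest one passing this check.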
no code implementations • 4 Oct 2022 • Nasim Baharisangari, Zhe Xu
In this paper, we propose a distributed differentially private receding horizon control (RHC) approach for multi-agent systems (MAS) with metric temporal logic (MTL) specifications.
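A standard building block for differential privacy in multi-agent settings is perturbing each signal an agent shares with noise calibrated to a privacy budget. The snippet below is a minimal sketch of the generic Laplace mechanism, not the paper's specific RHC scheme; `sensitivity` and `epsilon` are the usual mechanism parameters.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def privatize(state, sensitivity, epsilon):
    """Add Laplace noise with scale = sensitivity / epsilon to each
    state component before an agent shares it with its neighbors."""
    scale = sensitivity / epsilon
    return [x + laplace_noise(scale) for x in state]

random.seed(0)
print(privatize([1.0, 2.0, 3.0], sensitivity=1.0, epsilon=0.5))
```

Smaller `epsilon` (stronger privacy) means larger noise, which is exactly the privacy/performance trade-off a receding horizon controller must plan around.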
1 code implementation • 6 Sep 2022 • Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers.
no code implementations • 16 Sep 2021 • Nasim Baharisangari, Kazuma Hirota, Ruixuan Yan, Agung Julius, Zhe Xu
It is important that the obtained knowledge is human-interpretable and amenable to formal analysis.
1 code implementation • 24 May 2021 • Nasim Baharisangari, Jean-Raphaël Gaglione, Daniel Neider, Ufuk Topcu, Zhe Xu
In this paper, we first investigate the uncertainties associated with trajectories of a system and represent such uncertainties in the form of interval trajectories.
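The idea of an interval trajectory can be illustrated with a small, hypothetical sketch (not the paper's implementation): per-time-step lower/upper bounds on the state, together with a containment check deciding whether a concrete trajectory lies inside the uncertainty envelope.

```python
# Hypothetical sketch: an interval trajectory is a list of (lo, hi)
# bounds, one pair per time step, for a scalar state.

def contains(interval_traj, traj):
    """True if every sample of traj lies within the corresponding interval."""
    return all(lo <= x <= hi for (lo, hi), x in zip(interval_traj, traj))

interval_traj = [(0.0, 1.0), (0.5, 1.5), (1.0, 2.0)]  # bounds at t = 0, 1, 2

print(contains(interval_traj, [0.2, 1.0, 1.5]))  # True: inside every interval
print(contains(interval_traj, [0.2, 2.0, 1.5]))  # False: 2.0 > 1.5 at t = 1
```

Inference over interval trajectories can then quantify a formula over all concrete trajectories contained in the envelope, rather than a single observed one.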