no code implementations • 17 Sep 2023 • Navid Hashemi, Xin Qin, Lars Lindemann, Jyotirmoy V. Deshmukh
We consider data-driven reachability analysis of discrete-time stochastic dynamical systems using conformal inference.
no code implementations • 12 Aug 2023 • Xin Qin, Navid Hashemi, Lars Lindemann, Jyotirmoy V. Deshmukh
Ultimately, conformance can capture the distance between design models and their real implementations and thus aid in robust system design.
no code implementations • 24 Jul 2023 • Xinyi Yu, Xiang Yin, Lars Lindemann
Given an ATR bound, we compute a sequence of control inputs so that the specification is satisfied by the system as long as each sub-trajectory is shifted by no more than the ATR bound.
no code implementations • 12 Jul 2023 • Farhad Mehdifar, Lars Lindemann, Charalampos P. Bechlioulis, Dimos V. Dimarogonas
This paper proposes a novel control framework for handling (potentially coupled) multiple time-varying output constraints for uncertain nonlinear systems.
no code implementations • 8 Jun 2023 • Alëna Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas
Many modern autonomous systems, particularly multi-agent systems, are time-critical and need to be robust against timing uncertainties.
1 code implementation • 3 Apr 2023 • Matthew Cleaveland, Insup Lee, George J. Pappas, Lars Lindemann
In fact, to obtain prediction regions over $T$ time steps with confidence $1-\delta$, previous works require that each individual prediction region be valid with confidence $1-\delta/T$.
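The union-bound construction mentioned above can be sketched concretely. The snippet below is a minimal illustration, not the paper's method: it uses hypothetical nonconformity scores and the simple empirical quantile (omitting the finite-sample $\lceil(n+1)(1-\delta/T)\rceil/n$ correction); calibrating each of the $T$ per-step regions at level $1-\delta/T$ makes the joint region over all $T$ steps valid at level $1-\delta$ by a union bound.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_cal, n_test, delta = 5, 1000, 2000, 0.1

# Hypothetical nonconformity scores |y - yhat|, one per time step.
cal_scores = np.abs(rng.normal(size=(n_cal, T)))
test_scores = np.abs(rng.normal(size=(n_test, T)))

# Bonferroni calibration: each per-step region at level 1 - delta/T,
# so the joint region over all T steps holds at level 1 - delta.
level = 1 - delta / T
q = np.quantile(cal_scores, level, axis=0)  # one region radius per step

joint_covered = np.all(test_scores <= q, axis=1)
print(f"empirical joint coverage: {joint_covered.mean():.3f} "
      f"(target >= {1 - delta})")
```

The union bound is conservative: the joint coverage typically lands above the $1-\delta$ target, which is the slack that copula- or calibration-based refinements aim to recover.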
no code implementations • 1 Apr 2023 • Shuo Yang, George J. Pappas, Rahul Mangharam, Lars Lindemann
However, these perception maps are not perfect and result in state estimation errors that can lead to unsafe system behavior.
no code implementations • 2 Feb 2023 • Renukanandan Tumu, Lars Lindemann, Truong Nghiem, Rahul Mangharam
Predicting the motion of dynamic agents is a critical task for guaranteeing the safety of autonomous systems.
no code implementations • 3 Nov 2022 • Lars Lindemann, Xin Qin, Jyotirmoy V. Deshmukh, George J. Pappas
The second algorithm constructs prediction regions for future system states first, and uses these to obtain a prediction region for the satisfaction measure.
no code implementations • 26 Aug 2022 • Matthew Cleaveland, Lars Lindemann, Radoslav Ivanov, George Pappas
Motivated by the fragility of neural network (NN) controllers in safety-critical applications, we present a data-driven framework for verifying the risk of stochastic dynamical systems with NN controllers.
1 code implementation • 7 Jun 2022 • Anton Xue, Lars Lindemann, Rajeev Alur
Neural networks are central to many emerging technologies, but verifying their correctness remains a major challenge.
no code implementations • 28 May 2022 • Lars Lindemann, Lejun Jiang, Nikolai Matni, George J. Pappas
For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly.
no code implementations • 8 Apr 2022 • Sleiman Safaoui, Lars Lindemann, Iman Shames, Tyler H. Summers
Our control approach relies on reformulating these risk predicates as deterministic predicates over mean and covariance states of the system.
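As an illustration of turning a probabilistic predicate into a deterministic one over mean and covariance states, the sketch below uses the standard Gaussian chance-constraint reformulation of a half-space constraint; the numbers and the Gaussian assumption are hypothetical and need not match the risk measure used in the paper.

```python
import numpy as np
from statistics import NormalDist

# Hypothetical Gaussian state: mean mu, covariance Sigma.
mu = np.array([1.0, 2.0])
Sigma = np.array([[0.2, 0.05],
                  [0.05, 0.1]])

# Half-space predicate a^T x <= b, required with probability >= 1 - eps.
a, b, eps = np.array([1.0, 1.0]), 5.0, 0.05

# For Gaussian x, P(a^T x <= b) >= 1 - eps is equivalent to the
# deterministic constraint on the mean/covariance states:
#   a^T mu + Phi^{-1}(1 - eps) * sqrt(a^T Sigma a) <= b
z = NormalDist().inv_cdf(1 - eps)
lhs = a @ mu + z * np.sqrt(a @ Sigma @ a)
print(f"deterministic surrogate: {lhs:.3f} <= {b} -> {lhs <= b}")
```

The probabilistic requirement thus becomes an ordinary inequality in the mean and covariance, which a deterministic controller or optimizer can enforce directly.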
1 code implementation • 2 Apr 2022 • Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J. Pappas, Rajeev Alur
Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.
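A simple baseline for the quantity in question is sketched below; this is the naive product-of-spectral-norms bound for a feedforward ReLU network with hypothetical weights, which upper-bounds the true Lipschitz constant but is typically much coarser than the optimization-based estimates such work develops.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weights of a small fully connected ReLU network
# mapping R^4 -> R^16 -> R^16 -> R.
weights = [rng.normal(size=(16, 4)),
           rng.normal(size=(16, 16)),
           rng.normal(size=(1, 16))]

# ReLU is 1-Lipschitz, so the product of the layers' spectral norms
# (largest singular values) upper-bounds the network's 2-norm
# Lipschitz constant.
layer_norms = [np.linalg.norm(W, 2) for W in weights]
lip_bound = float(np.prod(layer_norms))
print(f"naive Lipschitz upper bound: {lip_bound:.3f}")
```

Tighter certified bounds (e.g., SDP-based formulations) exploit the interaction between layers that this per-layer product ignores.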
no code implementations • 29 Mar 2022 • Alëna Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas
We study the temporal robustness of temporal logic specifications and show how to design temporally robust control laws for time-critical control systems.
1 code implementation • 5 Feb 2022 • Lars Lindemann, Alena Rodionova, George J. Pappas
We then define the temporal robustness risk by investigating the temporal robustness of the realizations of a stochastic signal.
1 code implementation • 18 Nov 2021 • Lars Lindemann, Alexander Robey, Lejun Jiang, Stephen Tu, Nikolai Matni
We then present an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior, e.g., data collected from a human operator.
no code implementations • 30 Aug 2021 • Lars Lindemann, George J. Pappas, Dimos V. Dimarogonas
Addressing these is pivotal to build fully autonomous systems and requires a systematic integration of planning and control.
1 code implementation • 6 Apr 2021 • Alena Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas
We present a robust control framework for time-critical systems in which satisfying real-time constraints robustly is of utmost importance for the safety of the system.
no code implementations • 3 Apr 2021 • Lars Lindemann, Nikolai Matni, George J. Pappas
We then define the risk of a stochastic process not satisfying an STL formula robustly, referred to as the STL robustness risk.
no code implementations • 4 Feb 2021 • Lars Lindemann, Dimos V. Dimarogonas
Motivated by the recent interest in cyber-physical and autonomous robotic systems, we study the control of dynamically coupled multi-agent systems subject to a set of signal temporal logic tasks.
1 code implementation • 16 Jan 2021 • Alexander Robey, Lars Lindemann, Stephen Tu, Nikolai Matni
We identify sufficient conditions on the data such that feasibility of the optimization problem ensures correctness of the learned robust hybrid control barrier functions.
no code implementations • 8 Nov 2020 • Lars Lindemann, Haimin Hu, Alexander Robey, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, Nikolai Matni
Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.
1 code implementation • 7 Apr 2020 • Alexander Robey, Haimin Hu, Lars Lindemann, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, Nikolai Matni
Furthermore, if the CBF parameterization is convex, then under mild assumptions, so is our learning process.