For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly.
Our control approach relies on reformulating these risk predicates as deterministic predicates over mean and covariance states of the system.
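As one concrete illustration of such a reformulation (a standard Gaussian chance-constraint identity, not necessarily the paper's exact construction): for a state x ~ N(mu, Sigma), the risk predicate P(aᵀx ≥ b) ≥ 1 − δ is equivalent to the deterministic predicate aᵀmu − z₁₋δ·√(aᵀ Σ a) ≥ b over the mean and covariance, where z₁₋δ is the standard normal quantile. A minimal sketch, with function names of our own choosing:

```python
import math
from statistics import NormalDist

def deterministic_predicate(mu, Sigma, a, b, delta):
    """Check P(a^T x >= b) >= 1 - delta for x ~ N(mu, Sigma) via the
    equivalent deterministic condition on the mean and covariance:
        a^T mu - z_{1-delta} * sqrt(a^T Sigma a) >= b.
    (Illustrative helper, not taken from any paper's code.)"""
    z = NormalDist().inv_cdf(1 - delta)          # standard normal quantile
    mean_term = sum(ai * mi for ai, mi in zip(a, mu))
    var_term = sum(a[i] * Sigma[i][j] * a[j]
                   for i in range(len(a)) for j in range(len(a)))
    return mean_term - z * math.sqrt(var_term) >= b
```

The point of the identity is that the probabilistic predicate never has to be evaluated by sampling: it becomes an algebraic condition on the mean and covariance states that a controller can propagate deterministically.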
Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.
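Concretely, for a feedforward network with 1-Lipschitz activations (e.g., ReLU), the product of per-layer spectral norms upper-bounds the network's Lipschitz constant: ‖f(x) − f(y)‖ ≤ (∏ᵢ ‖Wᵢ‖₂)·‖x − y‖. A self-contained sketch of this bound using power iteration (helper names are ours):

```python
import math

def spectral_norm(W, iters=100):
    """Estimate ||W||_2 by power iteration on W^T W.
    Assumes W is a nonzero matrix given as a list of rows; the all-ones
    start vector suffices for this illustration."""
    m, n = len(W), len(W[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]   # u = W v
        w = [sum(W[i][j] * u[i] for i in range(m)) for j in range(n)]   # w = W^T u
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            return 0.0
        v = [x / norm for x in w]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(x * x for x in u))

def lipschitz_upper_bound(weights):
    """Product of per-layer spectral norms: a (generally loose) upper
    bound on the Lipschitz constant of a ReLU network."""
    bound = 1.0
    for W in weights:
        bound *= spectral_norm(W)
    return bound
```

This product bound is cheap but conservative; tighter certified estimates are exactly what makes Lipschitz-based robustness and safety guarantees less pessimistic in practice.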
We study the temporal robustness of temporal logic specifications and show how to design temporally robust control laws for time-critical control systems.
We then present an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior, e.g., data collected from a human operator.

We present a robust control framework for time-critical systems in which satisfying real-time constraints robustly is of utmost importance for the safety of the system.
Motivated by the recent interest in cyber-physical and autonomous robotic systems, we study the problem of controlling dynamically coupled multi-agent systems under a set of signal temporal logic tasks.
We identify sufficient conditions on the data such that feasibility of the optimization problem ensures correctness of the learned robust hybrid control barrier functions.
Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.
Furthermore, if the CBF parameterization is convex, then under mild assumptions, so is our learning process.
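To make the convexity claim concrete: if the candidate barrier is linear in its parameters, h_θ(x) = θᵀφ(x), then each margin constraint on safe and unsafe data is linear in θ, so the learning problem is convex. A toy sketch under that assumption, with hinge-loss subgradient descent standing in for the actual learning process (all names hypothetical):

```python
def learn_linear_cbf(safe_pts, unsafe_pts, features, margin=0.1,
                     steps=2000, lr=0.05):
    """Fit h_theta(x) = theta . features(x) with h >= margin on safe
    samples and h <= -margin on unsafe samples.  Because h is linear in
    theta, every constraint is linear and the problem is convex.
    (Toy illustration, not any paper's actual optimization problem.)"""
    d = len(features(safe_pts[0]))
    theta = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d                 # subgradient of total hinge loss
        for x in safe_pts:
            phi = features(x)
            if sum(t * p for t, p in zip(theta, phi)) < margin:
                for j in range(d):
                    grad[j] -= phi[j]    # push h(x) up on safe data
        for x in unsafe_pts:
            phi = features(x)
            if sum(t * p for t, p in zip(theta, phi)) > -margin:
                for j in range(d):
                    grad[j] += phi[j]    # push h(x) down on unsafe data
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta
```

With convexity, feasibility of the learned parameters is a global, not merely local, property, which is what allows data conditions to translate into correctness guarantees for the learned barrier function.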