Search Results for author: Lars Lindemann

Found 14 papers, 7 papers with code

Parametric Chordal Sparsity for SDP-based Neural Network Verification

1 code implementation • 7 Jun 2022 • Anton Xue, Lars Lindemann, Rajeev Alur

Many future technologies rely on neural networks, but verifying the correctness of their behavior remains a major challenge.

Risk of Stochastic Systems for Temporal Logic Specifications

no code implementations • 28 May 2022 • Lars Lindemann, Lejun Jiang, Nikolai Matni, George J. Pappas

For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly.

Autonomous Driving
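
For orientation only, the sketch below estimates a risk measure of STL robustness by Monte Carlo sampling; the specification, the process model, and the use of the empirical value-at-risk are illustrative assumptions rather than the paper's exact construction.

# Illustrative sketch (not the paper's method): estimate the value-at-risk of the
# negative STL robustness of "always_[0,T] x_t > 0" from sampled realizations of a
# hypothetical discrete-time stochastic process.
import numpy as np
rng = np.random.default_rng(0)
T, n_samples = 50, 10_000
def robustness_always_positive(x):
    # Robustness of "always_[0,T] x_t > 0" for one realization is the worst-case margin.
    return float(np.min(x))
# Hypothetical process: a Gaussian random walk started at 2.0.
realizations = 2.0 + np.cumsum(0.1 * rng.standard_normal((n_samples, T + 1)), axis=1)
rho = np.array([robustness_always_positive(x) for x in realizations])
# Empirical value-at-risk at level 0.05 of the loss -rho; smaller values mean the
# specification is satisfied robustly with high probability.
var_005 = np.quantile(-rho, 0.95)
print(f"estimated VaR_0.05 of the negative robustness: {var_005:.3f}")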

Risk-Bounded Temporal Logic Control of Continuous-Time Stochastic Systems

no code implementations • 8 Apr 2022 • Sleiman Safaoui, Lars Lindemann, Iman Shames, Tyler H. Summers

Our control approach relies on reformulating these risk predicates as deterministic predicates over mean and covariance states of the system.
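
To illustrate the general idea of such a reformulation (using a distributionally robust, Cantelli-type bound as a stand-in, which is not necessarily the risk measure used in the paper), a probabilistic constraint on a linear predicate can be implied by a deterministic condition on the state mean \mu_t and covariance \Sigma_t:

\Pr\big(a^\top x_t \le b\big) \ge 1 - \varepsilon
\quad \Longleftarrow \quad
a^\top \mu_t + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\,\sqrt{a^\top \Sigma_t\, a} \;\le\; b .

The right-hand side involves only the mean and covariance states, so it can be imposed directly in a deterministic trajectory optimization.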

Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks

1 code implementation • 2 Apr 2022 • Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J. Pappas, Rajeev Alur

Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data.

Image Classification, Navigate
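
The paper tightens such estimates with a chordally sparse SDP; purely for context, the sketch below computes the classical (much looser) upper bound given by the product of layer spectral norms of a feedforward ReLU network, with hypothetical layer sizes.

# Illustrative baseline (not the paper's SDP method): for a feedforward ReLU network,
# the Lipschitz constant is upper-bounded by the product of the spectral norms of the
# weight matrices.
import numpy as np
rng = np.random.default_rng(0)
layer_shapes = [(16, 8), (16, 16), (4, 16)]  # hypothetical weight shapes W_i
weights = [rng.standard_normal(s) / np.sqrt(s[1]) for s in layer_shapes]
naive_bound = 1.0
for W in weights:
    naive_bound *= np.linalg.norm(W, 2)  # largest singular value of each layer
print(f"naive Lipschitz upper bound: {naive_bound:.3f}")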

Temporal Robustness of Temporal Logic Specifications: Analysis and Control Design

no code implementations • 29 Mar 2022 • Alëna Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas

We study the temporal robustness of temporal logic specifications and show how to design temporally robust control laws for time-critical control systems.
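
As a rough discrete-time illustration of the notion (a simplified, synchronous variant; the paper's definitions and the control design are more general), the sketch below computes the largest right time shift under which a signal still satisfies "eventually within [0, T] the signal exceeds a threshold".

# Simplified illustration of temporal robustness: the largest integer shift tau such
# that every right-shift of the signal by 0..tau still satisfies the specification
# "eventually within [0, T] x_t > c" (a crude stand-in for the paper's definitions).
import numpy as np
def satisfies_eventually(x, T, c):
    return bool(np.any(x[: T + 1] > c))
def temporal_robustness(x, T, c, max_shift=20):
    if not satisfies_eventually(x, T, c):
        return -1  # violated already without any shift
    tau = 0
    for shift in range(1, max_shift + 1):
        shifted = np.concatenate([np.full(shift, x[0]), x[:-shift]])
        if not satisfies_eventually(shifted, T, c):
            break
        tau = shift
    return tau
x = np.sin(np.linspace(0.0, 4.0, 60)) + 0.5  # hypothetical signal
print("temporal robustness estimate:", temporal_robustness(x, T=30, c=1.0))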

Temporal Robustness of Stochastic Signals

1 code implementation • 5 Feb 2022 • Lars Lindemann, Alena Rodionova, George J. Pappas

We then define the temporal robustness risk by investigating the temporal robustness of the realizations of a stochastic signal.

Autonomous Driving
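
Hedged as a paraphrase rather than the paper's verbatim definition: writing \theta^{\varphi}(X) for the temporal robustness of a realization of the stochastic signal X with respect to a specification \varphi, and R for a risk measure such as the value-at-risk, the temporal robustness risk takes the form

R_{\mathrm{temp}}(X,\varphi) \;:=\; R\big(-\theta^{\varphi}(X)\big),

so that a large value indicates that realizations are likely to have small or negative temporal robustness; the paper makes the choice of risk measure and robustness notion precise.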

Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations

1 code implementation • 18 Nov 2021 • Lars Lindemann, Alexander Robey, Lejun Jiang, Stephen Tu, Nikolai Matni

We then present an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior, e.g., data collected from a human operator.

Autonomous Driving

Reactive and Risk-Aware Control for Signal Temporal Logic

no code implementations • 30 Aug 2021 • Lars Lindemann, George J. Pappas, Dimos V. Dimarogonas

Addressing these challenges is pivotal to building fully autonomous systems and requires a systematic integration of planning and control.

Time-Robust Control for STL Specifications

1 code implementation • 6 Apr 2021 • Alena Rodionova, Lars Lindemann, Manfred Morari, George J. Pappas

We present a robust control framework for time-critical systems in which satisfying real-time constraints robustly is of utmost importance for the safety of the system.

STL Robustness Risk over Discrete-Time Stochastic Processes

no code implementations • 3 Apr 2021 • Lars Lindemann, Nikolai Matni, George J. Pappas

We then define the risk of a stochastic process not satisfying an STL formula robustly, referred to as the STL robustness risk.
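
In the same spirit, and again hedged as a paraphrase: with \rho^{\varphi}(X) the STL robustness of a realization of the discrete-time process X, and R a monotone risk measure (e.g., the value-at-risk or conditional value-at-risk), the STL robustness risk can be written as

R_{\mathrm{STL}}(X,\varphi) \;:=\; R\big(-\rho^{\varphi}(X)\big),

i.e., the risk of the robustness being small or negative, which captures how severely the process may fail to satisfy \varphi robustly.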

Barrier Function-based Collaborative Control of Multiple Robots under Signal Temporal Logic Tasks

no code implementations • 4 Feb 2021 • Lars Lindemann, Dimos V. Dimarogonas

Motivated by the recent interest in cyber-physical and autonomous robotic systems, we study dynamically coupled multi-agent systems under a set of signal temporal logic tasks.

Learning Robust Hybrid Control Barrier Functions for Uncertain Systems

1 code implementation • 16 Jan 2021 • Alexander Robey, Lars Lindemann, Stephen Tu, Nikolai Matni

We identify sufficient conditions on the data such that feasibility of the optimization problem ensures correctness of the learned robust hybrid control barrier functions.

Learning Hybrid Control Barrier Functions from Data

no code implementations • 8 Nov 2020 • Lars Lindemann, Haimin Hu, Alexander Robey, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, Nikolai Matni

Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.

Learning Control Barrier Functions from Expert Demonstrations

1 code implementation • 7 Apr 2020 • Alexander Robey, Haimin Hu, Lars Lindemann, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, Nikolai Matni

Furthermore, if the CBF parameterization is convex, then under mild assumptions, so is our learning process.
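
To illustrate why a convex parameterization keeps the learning problem convex (a toy sketch under assumed data, margins, and feature map, not the paper's exact program): if the candidate CBF is affine in its parameters, h_theta(x) = theta^T phi(x), then margin constraints on labeled safe and unsafe samples are linear in theta, and the problem below is a small convex quadratic program.

# Toy convex CBF-learning sketch (illustrative assumptions throughout, not the
# paper's exact formulation): h_theta(x) = theta^T phi(x) is affine in theta, so
# the margin constraints on safe/unsafe samples are linear and the problem is convex.
import numpy as np
import cvxpy as cp
rng = np.random.default_rng(0)
def phi(x):
    # Hypothetical quadratic feature map: [1, x1, x2, x1^2, x2^2].
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2])
X_safe = rng.uniform(-0.5, 0.5, size=(40, 2))          # assumed safe samples
X_unsafe = 2.0 + rng.uniform(-0.3, 0.3, size=(40, 2))  # assumed unsafe samples
Phi_safe = np.array([phi(x) for x in X_safe])
Phi_unsafe = np.array([phi(x) for x in X_unsafe])
gamma = 0.1                                            # illustrative margin
theta = cp.Variable(5)
constraints = [Phi_safe @ theta >= gamma, Phi_unsafe @ theta <= -gamma]
problem = cp.Problem(cp.Minimize(cp.sum_squares(theta)), constraints)
problem.solve()
print("learned CBF parameters:", theta.value)

A full treatment would also enforce the CBF derivative (invariance) condition along the demonstrations; for fixed demonstrated state-input pairs and a linear class-K term, that condition is likewise linear in theta.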
