no code implementations • 11 Dec 2023 • Thomas Waite, Alexander Robey, Hamed Hassani, George J. Pappas, Radoslav Ivanov
This paper addresses the problem of data-driven modeling and verification of perception-based autonomous systems.
no code implementations • 19 Feb 2023 • Michele Caprio, Souradeep Dutta, Kuk Jin Jang, Vivian Lin, Radoslav Ivanov, Oleg Sokolsky, Insup Lee
We show that CBDL is better at quantifying and disentangling different types of uncertainties than single BNNs, ensembles of BNNs, and Bayesian Model Averaging.
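As a point of reference for the ensemble baseline mentioned above, the following is a minimal sketch of how an ensemble of probabilistic classifiers can be used to split total predictive uncertainty into aleatoric and epistemic parts; it illustrates one of the comparison methods, not the paper's CBDL approach, and the toy distributions are made up.

```python
# Hedged sketch: uncertainty decomposition with an ensemble of probabilistic
# classifiers (a baseline the snippet compares against), NOT the paper's
# credal Bayesian deep learning (CBDL) method.
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats, with a small floor for numerical safety."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: (n_members, n_classes) predictive distributions for one input.
    Returns (total, aleatoric, epistemic) uncertainty estimates."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)                    # entropy of the averaged prediction
    aleatoric = entropy(member_probs).mean()   # average per-member entropy
    epistemic = total - aleatoric              # disagreement between members
    return total, aleatoric, epistemic

# Toy example: three ensemble members that disagree on a 3-class input.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
print(decompose_uncertainty(probs))
```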
no code implementations • 26 Aug 2022 • Matthew Cleaveland, Lars Lindemann, Radoslav Ivanov, George Pappas
Motivated by the fragility of neural network (NN) controllers in safety-critical applications, we present a data-driven framework for verifying the risk of stochastic dynamical systems with NN controllers.
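To make the "risk of a stochastic dynamical system with an NN controller" concrete, here is a minimal sketch that estimates an empirical CVaR of a trajectory cost from sampled closed-loop rollouts; the toy double-integrator dynamics, the random tanh controller, and the cost are illustrative placeholders, and this Monte Carlo estimate is not the paper's data-driven verification procedure.

```python
# Hedged sketch: empirical CVaR of a trajectory cost from sampled rollouts of a
# toy stochastic system with a toy NN controller (placeholders, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 2)), rng.normal(size=(1, 8))   # toy "NN controller" weights

def nn_controller(x):
    return np.tanh(W2 @ np.tanh(W1 @ x)).item()

def rollout(horizon=50, noise_std=0.05):
    """Noisy double integrator under the toy controller; cost = max |position|."""
    x = np.array([1.0, 0.0])          # [position, velocity]
    worst = abs(x[0])
    for _ in range(horizon):
        u = nn_controller(x)
        x = np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u]) + rng.normal(0, noise_std, 2)
        worst = max(worst, abs(x[0]))
    return worst

def empirical_cvar(samples, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of the sampled costs."""
    s = np.sort(samples)
    tail = s[int(np.ceil(alpha * len(s))):]
    return tail.mean() if len(tail) else s[-1]

costs = np.array([rollout() for _ in range(1000)])
print("empirical CVaR_0.95 of max deviation:", empirical_cvar(costs))
```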
1 code implementation • 3 Nov 2021 • Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor Carpenter, Oleg Sokolsky, Insup Lee
To predict safety violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring verification assumptions.
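To illustrate the idea of composing per-assumption confidences into one overall confidence, here is a minimal sketch with two generic composition rules (an independence product and a conservative union-bound-style lower bound); these rules and the example monitor values are assumptions for illustration, not necessarily the calibrated compositions used by CoCo.

```python
# Hedged sketch: combining per-assumption confidence monitors into a single
# confidence that all verification assumptions hold. Generic rules only; not
# necessarily CoCo's calibrated composition functions.
def compose_independent(confidences):
    """Probability that all assumptions hold, if the monitors were independent."""
    result = 1.0
    for c in confidences:
        result *= c
    return result

def compose_conservative(confidences):
    """Worst-case bound: max(0, 1 - sum of individual failure probabilities)."""
    return max(0.0, 1.0 - sum(1.0 - c for c in confidences))

monitors = [0.99, 0.95, 0.90]   # e.g., sensor model, dynamics model, obstacle assumption
print(compose_independent(monitors))    # ~0.846
print(compose_conservative(monitors))   # 0.84
```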
no code implementations • 30 Apr 2021 • Taylor J. Carpenter, Radoslav Ivanov, Insup Lee, James Weimer
This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models.
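The following is a minimal sketch of a sampling-style consistency check for a parametric, Lipschitz-continuous model: it asks whether any parameter value could explain the observed input/output data within a tolerance, using the Lipschitz constant to cover the gaps between parameter samples. The toy model and tolerances are assumptions for illustration; this is the general flavor, not ModelGuard's actual algorithm.

```python
# Hedged sketch: sampling-based consistency check for a Lipschitz-continuous
# parametric model (illustrative only; not ModelGuard's algorithm).
import numpy as np

def model(theta, u):
    return theta * np.sin(u)             # toy model, Lipschitz in theta with L = 1

def validate(u_obs, y_obs, theta_grid, tol, lipschitz, grid_step):
    """Return True if some theta near a grid sample could be consistent with the data."""
    slack = lipschitz * grid_step / 2     # worst-case model change between grid points
    for theta in theta_grid:
        residual = np.max(np.abs(model(theta, u_obs) - y_obs))
        if residual <= tol + slack:
            return True
    return False

u = np.linspace(0, 3, 20)
y = 2.0 * np.sin(u) + 0.01                # data generated by theta = 2 plus a small offset
grid = np.arange(0.0, 5.0, 0.05)
print(validate(u, y, grid, tol=0.05, lipschitz=1.0, grid_step=0.05))   # True
```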
no code implementations • 25 Feb 2021 • Sooyong Jang, Radoslav Ivanov, Insup Lee, James Weimer
As machine learning techniques become widely adopted in new domains, especially in safety-critical systems such as autonomous vehicles, it is crucial to provide accurate output uncertainty estimation.
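As background for the calibration problem this entry raises, here is a minimal sketch of temperature scaling, a common post-hoc way to calibrate a classifier's output confidence on a held-out set; it is a generic illustration of output uncertainty calibration under synthetic data, not necessarily the method proposed in the paper.

```python
# Hedged sketch: temperature scaling for post-hoc confidence calibration
# (generic illustration; not necessarily this paper's method).
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(logits, labels, temperature):
    probs = softmax(logits / temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature that minimizes negative log-likelihood on held-out data."""
    return min(grid, key=lambda t: nll(logits, labels, t))

# Toy held-out set: overconfident logits for a 3-class problem.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(size=(200, 3)) * 4.0
logits[np.arange(200), labels] += 2.0     # correct class is boosted, but scores are too sharp
print("fitted temperature:", fit_temperature(logits, labels))
```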
1 code implementation • 5 Nov 2018 • Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, Insup Lee
This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers.
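To convey the flavor of bounding an NN controller's output over a set of states, here is a minimal sketch of crude interval bound propagation through a small tanh network with made-up weights; Verisig itself encodes sigmoid/tanh networks as hybrid systems and uses a reachability tool, so this simpler analysis is only an illustrative stand-in.

```python
# Hedged sketch: interval bound propagation through a small tanh network,
# as a stand-in illustration of bounding an NN controller over a state box.
# Not Verisig's hybrid-system encoding.
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Interval propagation through x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    mid = W @ center + b
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad

def nn_output_bounds(weights, biases, lo, hi):
    """Bounds on a tanh network's output over the input box [lo, hi]."""
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = affine_bounds(W, b, lo, hi)
        lo, hi = np.tanh(lo), np.tanh(hi)    # tanh is monotone, so endpoints suffice
    return affine_bounds(weights[-1], biases[-1], lo, hi)

rng = np.random.default_rng(2)
Ws = [rng.normal(size=(8, 2)), rng.normal(size=(1, 8))]
bs = [rng.normal(size=8), rng.normal(size=1)]
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])   # box of states around the origin
print(nn_output_bounds(Ws, bs, lo, hi))                  # sound (if loose) control bounds
```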