Search Results for author: Radoslav Ivanov

Found 7 papers, 2 papers with code

Data-Driven Modeling and Verification of Perception-Based Autonomous Systems

no code implementations • 11 Dec 2023 • Thomas Waite, Alexander Robey, Hamed Hassani, George J. Pappas, Radoslav Ivanov

This paper addresses the problem of data-driven modeling and verification of perception-based autonomous systems.

Navigate
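
To make the setting concrete, here is a minimal sketch of one data-driven approach: fit a simple Gaussian error model for the perception component from logged data, then estimate a closed-loop safety probability by Monte Carlo simulation. The braking scenario, error model, and all thresholds below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Illustrative sketch only: fit a Gaussian perception-error model from
# (ground truth, perceived) pairs, then Monte Carlo a closed-loop safety check.
rng = np.random.default_rng(0)

def fit_perception_error(truth, perceived):
    """Estimate mean/std of the perception error from logged data."""
    err = perceived - truth
    return err.mean(), err.std()

def estimate_safety(mu, sigma, x0=20.0, v0=1.0, steps=40, n_runs=2000):
    """1-D vehicle approaching an obstacle, braking on *perceived* distance."""
    safe = 0
    for _ in range(n_runs):
        x, v = x0, v0                          # distance to obstacle, speed
        for _ in range(steps):
            x_hat = x + rng.normal(mu, sigma)  # noisy perceived distance
            if x_hat < 8.0:                    # brake when obstacle looks close
                v = max(v - 0.2, 0.0)
            x -= v
        safe += x > 0.0                        # safe run: obstacle never reached
    return safe / n_runs

# Synthetic "perception log" standing in for real labeled data.
truth = rng.uniform(0.0, 20.0, 500)
perceived = truth + rng.normal(0.1, 0.3, 500)
mu, sigma = fit_perception_error(truth, perceived)
print(f"estimated safety probability: {estimate_safety(mu, sigma):.3f}")
```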

Credal Bayesian Deep Learning

no code implementations • 19 Feb 2023 • Michele Caprio, Souradeep Dutta, Kuk Jin Jang, Vivian Lin, Radoslav Ivanov, Oleg Sokolsky, Insup Lee

We show that CBDL is better at quantifying and disentangling different types of uncertainty than single BNNs, ensembles of BNNs, and Bayesian Model Averaging.

Autonomous Driving, Motion Prediction +1
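
As a rough illustration of the credal (interval-probability) viewpoint, the hedged sketch below turns a handful of predictive distributions, e.g. from several BNN posterior samples, into elementwise lower and upper class probabilities; interval width then reflects epistemic disagreement while the entropy of the average reflects aleatoric uncertainty. This is a toy summary, not the CBDL procedure.

```python
import numpy as np

# Hedged sketch: given class-probability vectors from several posterior
# samples/models, form elementwise lower and upper probabilities -- a crude
# credal-set summary, not the paper's CBDL procedure.
preds = np.array([
    [0.70, 0.20, 0.10],
    [0.55, 0.30, 0.15],
    [0.65, 0.25, 0.10],
])  # rows: models, cols: classes

lower, upper = preds.min(axis=0), preds.max(axis=0)
mean = preds.mean(axis=0)

# Epistemic spread shows up as the width of the probability interval,
# aleatoric uncertainty as the entropy of the averaged prediction.
width = upper - lower
entropy = -(mean * np.log(mean)).sum()
print("interval widths:", width, "predictive entropy:", round(float(entropy), 3))
```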

Risk Verification of Stochastic Systems with Neural Network Controllers

no code implementations • 26 Aug 2022 • Matthew Cleaveland, Lars Lindemann, Radoslav Ivanov, George J. Pappas

Motivated by the fragility of neural network (NN) controllers in safety-critical applications, we present a data-driven framework for verifying the risk of stochastic dynamical systems with NN controllers.
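
A hedged sketch of the general idea: sample closed-loop trajectories of a stochastic system driven by a (stand-in) controller, then estimate a tail-risk measure such as CVaR of a safety loss. The dynamics, controller, and loss below are toy assumptions; the paper's framework additionally provides statistical guarantees that this naive estimate lacks.

```python
import numpy as np

# Hedged sketch: Monte Carlo risk estimation for a stochastic closed loop.
# The dynamics, "controller", and loss are toy stand-ins; the paper's actual
# framework provides formal guarantees that this naive estimate does not.
rng = np.random.default_rng(1)

def rollout(steps=30):
    x = np.array([1.0, 0.0])                 # state: position, velocity
    for _ in range(steps):
        u = -np.tanh(2.0 * x[0] + x[1])      # stand-in for an NN controller
        x = x + 0.1 * np.array([x[1], u]) + rng.normal(0.0, 0.01, 2)
    return abs(x[0])                         # loss: final distance from origin

def cvar(losses, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of losses."""
    tail = np.sort(losses)[int(alpha * len(losses)):]
    return tail.mean()

losses = np.array([rollout() for _ in range(2000)])
print(f"CVaR_0.95 of the loss: {cvar(losses):.4f}")
```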

Confidence Composition for Monitors of Verification Assumptions

1 code implementation • 3 Nov 2021 • Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor Carpenter, Oleg Sokolsky, Insup Lee

To predict safety violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring verification assumptions.
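
A minimal sketch of confidence composition, under assumptions not made by the paper: if each verification assumption has a calibrated monitor reporting the probability that it currently holds, a system-level confidence can be composed as a product (assuming independence) or as a conservative weakest-link minimum. The monitor names and values below are hypothetical.

```python
# Hedged sketch: composing per-assumption monitor confidences into one
# system-level confidence. Independence is assumed purely for illustration;
# the CoCo framework uses calibrated monitors and principled composers.
def compose_confidences(confidences, mode="independent"):
    if mode == "independent":
        p = 1.0
        for c in confidences:
            p *= c                     # all assumptions hold simultaneously
        return p
    if mode == "conservative":
        return min(confidences)        # weakest-link lower bound
    raise ValueError(mode)

# Hypothetical monitors for three verification assumptions.
monitors = {"sensor_noise_bound": 0.99, "model_accuracy": 0.95, "gps_available": 0.90}
print(compose_confidences(list(monitors.values())))                   # ~0.846
print(compose_confidences(list(monitors.values()), "conservative"))   # 0.90
```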

ModelGuard: Runtime Validation of Lipschitz-continuous Models

no code implementations • 30 Apr 2021 • Taylor J. Carpenter, Radoslav Ivanov, Insup Lee, James Weimer

This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models.
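
For intuition, here is a hedged sketch of one classical consistency test for Lipschitz-continuous models (not ModelGuard's algorithm): observed input/output pairs can be produced by some L-Lipschitz function only if every pair satisfies |y_i - y_j| <= L * ||x_i - x_j|| up to measurement noise, so a single violating pair invalidates the model class at runtime.

```python
import numpy as np

# Hedged sketch: a pairwise consistency test, not ModelGuard's algorithm.
# Observations (x_i, y_i) can come from SOME L-Lipschitz function only if
# |y_i - y_j| <= L * ||x_i - x_j|| for every pair (up to measurement noise).
def consistent_with_lipschitz(xs, ys, L, noise=0.0):
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(ys[i] - ys[j]) > L * np.linalg.norm(xs[i] - xs[j]) + 2 * noise:
                return False           # witness pair invalidates the model class
    return True

xs = np.array([[0.0], [1.0], [2.0]])
ys = np.array([0.0, 0.9, 5.0])         # jump of 4.1 over distance 1
print(consistent_with_lipschitz(xs, ys, L=2.0))  # False: slope 4.1 > 2
```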

Confidence Calibration with Bounded Error Using Transformations

no code implementations • 25 Feb 2021 • Sooyong Jang, Radoslav Ivanov, Insup Lee, James Weimer

As machine learning techniques become widely adopted in new domains, especially in safety-critical systems such as autonomous vehicles, it is crucial to provide accurate output uncertainty estimation.

Autonomous Vehicles
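
For context, the sketch below implements temperature scaling, a standard post-hoc calibration transformation that is a common baseline in this literature; it is not the bounded-error method the paper proposes, and the synthetic logits and labels are purely illustrative.

```python
import numpy as np

# Hedged sketch: temperature scaling, a standard post-hoc calibration
# transformation. A common baseline, not the paper's bounded-error method.
rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Choose the temperature minimizing validation NLL over a coarse grid."""
    return min(grid, key=lambda T: nll(T, logits, labels))

# Synthetic, overconfident logits: the "model" is right only ~70% of the time.
logits = rng.normal(0.0, 3.0, (500, 10))
correct = rng.random(500) < 0.7
labels = np.where(correct, logits.argmax(axis=1), rng.integers(0, 10, 500))
T = fit_temperature(logits, labels)
print(f"fitted temperature: {T:.2f}  (T > 1 softens overconfident predictions)")
```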

Verisig: verifying safety properties of hybrid systems with neural network controllers

1 code implementation • 5 Nov 2018 • Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, Insup Lee

This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers.

Systems and Control
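
To illustrate the verification setting (not Verisig's method, which exploits the fact that the sigmoid is the solution of an ODE to obtain a hybrid-system encoding amenable to reachability tools), here is a hedged sketch that propagates interval bounds through a tiny tanh-network controller in closed loop; all weights and dynamics are made up, and naive interval arithmetic is far looser than Verisig's analysis.

```python
import numpy as np

# Hedged sketch: naive interval propagation through a tiny tanh controller
# in closed loop. This toy bound illustrates the problem Verisig solves,
# not its hybrid-system technique, and it is much more conservative.
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.5, -1.0]])
b2 = np.array([0.0])

def interval_linear(lo, hi, W, b):
    """Sound interval bound for W @ x + b with x in [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def controller_bounds(lo, hi):
    lo, hi = interval_linear(lo, hi, W1, b1)
    lo, hi = np.tanh(lo), np.tanh(hi)      # tanh is monotone, so this is sound
    return interval_linear(lo, hi, W2, b2)

lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])  # initial state box
for _ in range(10):                                    # 10 closed-loop steps
    u_lo, u_hi = controller_bounds(lo, hi)
    lo = lo + 0.1 * np.array([lo[1], u_lo[0]])         # x' = x + 0.1*[v, u]
    hi = hi + 0.1 * np.array([hi[1], u_hi[0]])
print("state box after 10 steps:", lo, hi)
```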
