1 code implementation • 12 Dec 2023 • Renukanandan Tumu, Matthew Cleaveland, Rahul Mangharam, George J. Pappas, Lars Lindemann
However, little work has gone into finding non-conformity score functions that produce prediction regions that are multi-modal and practical, i.e., that can efficiently be used in engineering applications.
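For context, the non-conformity scores mentioned here come from conformal prediction. The following is a generic sketch of split conformal prediction with the simplest score, the absolute residual; it is not the paper's multi-modal construction, and the model outputs are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: true values and a model's predictions.
y_true = rng.normal(size=1000)
y_pred = y_true + rng.normal(scale=0.1, size=1000)

# Non-conformity score: absolute residual (a standard, uni-modal choice).
scores = np.abs(y_true - y_pred)

# For coverage 1 - delta, take the ceil((n+1)(1-delta))/n empirical quantile.
delta = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - delta)) / n, method="higher")

# Prediction region for a new prediction y_hat: the interval [y_hat - q, y_hat + q].
# A score like this always yields a single interval, which is exactly the
# limitation multi-modal score functions aim to overcome.
```

Because the region is a single symmetric interval around each prediction, it can be very conservative when the true conditional distribution has several modes.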
1 code implementation • 6 Nov 2023 • Pengyuan Lu, Matthew Cleaveland, Oleg Sokolsky, Insup Lee, Ivan Ruchkin
However, existing repair techniques do not preserve previously correct behaviors.
no code implementations • 28 Aug 2023 • Souradeep Dutta, Michele Caprio, Vivian Lin, Matthew Cleaveland, Kuk Jin Jang, Ivan Ruchkin, Oleg Sokolsky, Insup Lee
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
no code implementations • 6 Apr 2023 • Pengyuan Lu, Ivan Ruchkin, Matthew Cleaveland, Oleg Sokolsky, Insup Lee
However, given the high diversity and complexity of LECs, it is challenging to encode domain knowledge (e.g., the CPS dynamics) in a scalable actual causality model that could generate useful repair suggestions.
1 code implementation • 3 Apr 2023 • Matthew Cleaveland, Insup Lee, George J. Pappas, Lars Lindemann
In fact, to obtain prediction regions over $T$ time steps with confidence $1-\delta$, previous works require that each individual prediction region is valid with confidence $1-\delta/T$.
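The $1-\delta/T$ requirement is the standard union-bound (Bonferroni) argument, which this sketch makes explicit with illustrative numbers:

```python
# Union-bound argument: if each of T per-step prediction regions may fail
# with probability at most delta/T, the probability that ANY step fails is
# at most T * (delta/T) = delta, so all T regions hold jointly with
# confidence 1 - delta. Numbers below are illustrative.
delta = 0.05
T = 20

per_step_miss = delta / T               # allowed failure prob. per step
joint_miss_bound = T * per_step_miss    # union bound over all T steps
per_step_confidence = 1 - per_step_miss

# As T grows, each step must be calibrated at an increasingly extreme
# level (here 0.9975), inflating the regions -- the conservatism that
# motivates calibrating the T steps jointly instead.
```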
no code implementations • 26 Aug 2022 • Matthew Cleaveland, Lars Lindemann, Radoslav Ivanov, George Pappas
Motivated by the fragility of neural network (NN) controllers in safety-critical applications, we present a data-driven framework for verifying the risk of stochastic dynamical systems with NN controllers.
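As a rough illustration of what "verifying the risk" of a stochastic closed-loop system can mean in a data-driven setting, the sketch below estimates Value-at-Risk and Conditional Value-at-Risk of a per-trajectory safety cost from samples. The cost distribution is synthetic, and this is a generic risk-estimation recipe, not the paper's verification framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-trajectory safety costs (e.g., worst constraint
# violation over each simulated rollout of the NN-controlled system).
costs = rng.exponential(scale=1.0, size=10_000)

beta = 0.95
var = np.quantile(costs, beta)        # Value-at-Risk: beta-quantile of the cost
cvar = costs[costs >= var].mean()     # Conditional Value-at-Risk: mean of the tail

# CVaR upper-bounds VaR and captures how bad the worst (1-beta) fraction
# of runs is, which is why risk measures like it appear in verification
# of stochastic systems.
```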
1 code implementation • 3 Nov 2021 • Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor Carpenter, Oleg Sokolsky, Insup Lee
To predict safety violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring verification assumptions.