no code implementations • 9 Oct 2023 • Kaustubh Sridhar, Souradeep Dutta, Dinesh Jayaraman, James Weimer, Insup Lee
Imitation learning considerably simplifies policy synthesis compared to alternative approaches by exploiting access to expert demonstrations.
no code implementations • 1 Sep 2023 • Sydney Pugh, Ivan Ruchkin, Insup Lee, James Weimer
However, ensuring the robustness of these models is vital for building trustworthy AI systems.
no code implementations • 27 Apr 2023 • Ramneet Kaur, Yiannis Kantaros, Wenwen Si, James Weimer, Insup Lee
Nevertheless, DNN models have proven to be vulnerable to adversarial digital and physical attacks.
1 code implementation • 2 Dec 2022 • Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee
Next, using these memories we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network in each subset.
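The memory-based bounding described above can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: `memories` are stored representative states, each state is assigned to the cell of its nearest memory, and the network output is checked against that cell's precomputed bounds. All names (`nearest_memory`, `check_bounds`, `lower`, `upper`) are hypothetical.

```python
import numpy as np

def nearest_memory(x, memories):
    """Index of the memory closest to state x (defines the partition cell)."""
    dists = np.linalg.norm(memories - x, axis=1)
    return int(np.argmin(dists))

def check_bounds(x, net_output, memories, lower, upper):
    """True if the network output respects the bounds of x's partition cell."""
    i = nearest_memory(x, memories)
    return bool(np.all(net_output >= lower[i]) and np.all(net_output <= upper[i]))

# toy example: two memories partition a 1-D state space at the midpoint
memories = np.array([[0.0], [10.0]])
lower = np.array([[-1.0], [4.0]])   # per-cell output lower bounds (assumed given)
upper = np.array([[1.0], [6.0]])    # per-cell output upper bounds (assumed given)

print(check_bounds(np.array([9.0]), np.array([5.0]), memories, lower, upper))  # True
print(check_bounds(np.array([9.0]), np.array([0.0]), memories, lower, upper))  # False
```

The nearest-memory rule makes the cells disjoint by construction, which is what lets each cell carry its own independent bound.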
1 code implementation • 13 Jun 2022 • Kaustubh Sridhar, Souradeep Dutta, Ramneet Kaur, James Weimer, Oleg Sokolsky, Insup Lee
Algorithm design of AT and its variants is focused on training models at a specified perturbation strength $\epsilon$, using only the feedback from the performance of that $\epsilon$-robust model to improve the algorithm.
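The fixed-$\epsilon$ training loop that this line refers to can be sketched on a toy logistic model. This is a generic single-step adversarial-training (FGSM-style) illustration, not the paper's algorithm; the model, data, and function names are all assumptions.

```python
import numpy as np

def fgsm(x, y, w, eps):
    """One-step L-infinity attack: move x by eps in the sign of the input
    gradient of the logistic loss for p = sigmoid(x @ w), labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adv_train_step(x, y, w, eps, lr=0.1):
    """One AT step: the weight gradient is taken on eps-perturbed inputs,
    so the model is optimized for robustness at that single eps only."""
    x_adv = fgsm(x, y, w, eps)
    p = 1.0 / (1.0 + np.exp(-x_adv @ w))
    grad_w = x_adv.T @ (p - y) / len(y)
    return w - lr * grad_w

# toy separable data, trained at one fixed perturbation strength eps = 0.1
x = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = np.zeros(2)
for _ in range(50):
    w = adv_train_step(x, y, w, eps=0.1)
acc = np.mean((x @ w > 0) == (y == 1))
```

Note how $\epsilon$ enters only inside the inner attack: the outer loop gets no signal about robustness at any other perturbation strength, which is exactly the limitation the excerpt points at.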
1 code implementation • 25 Feb 2022 • Souradeep Dutta, Kaustubh Sridhar, Osbert Bastani, Edgar Dobriban, James Weimer, Insup Lee, Julia Parish-Morris
We formulate expert intervention as allowing the agent to execute option templates before learning an implementation.
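A minimal sketch of the option-template idea, under assumed details: a template is a high-level option whose low-level implementation is not yet learned, but whose effect the expert supplies, so the agent can already execute it. The class and function names and the toy integer state space are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OptionTemplate:
    """An option whose implementation is not yet learned: the expert
    supplies its effect so the agent can execute and plan with it first."""
    name: str
    can_start: Callable[[int], bool]   # initiation set over toy integer states
    effect: Callable[[int], int]       # expert-provided state transition

def execute(state, templates):
    """Execute the first applicable template; learning later replaces
    each expert-provided effect with a low-level controller."""
    for t in templates:
        if t.can_start(state):
            return t.name, t.effect(state)
    return "none", state

# toy example: an option that moves the state toward a goal at 5
to_goal = OptionTemplate("to_goal", lambda s: s != 5,
                         lambda s: s + (1 if s < 5 else -1))
print(execute(3, [to_goal]))  # ('to_goal', 4)
```

The point of the formulation is that the agent can gather experience with the template's effect before any implementation exists.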
no code implementations • 29 Sep 2021 • Pengyuan Lu, Seungwon Lee, Amanda Watson, David Kent, Insup Lee, Eric Eaton, James Weimer
This tool achieves performance comparable to training on fully labeled data, in terms of per-task accuracy and resistance to catastrophic forgetting.
1 code implementation • 19 Jul 2021 • Yinjun Wu, James Weimer, Susan B. Davidson
First, to reduce annotation cost, we use Infl, which prioritizes the most influential training samples for cleaning and provides cleaned labels, saving the cost of one human annotator.
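Prioritizing samples by influence can be sketched as follows. This uses a simple loss-gradient-norm proxy on a logistic model rather than the actual influence functions behind Infl; the data, model, and function names are illustrative assumptions.

```python
import numpy as np

def influence_scores(x, y, w):
    """Proxy influence: per-sample loss-gradient magnitude for a logistic
    model p = sigmoid(x @ w). (Infl itself uses influence functions;
    this proxy only illustrates the ranking idea.)"""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return np.abs(p - y) * np.linalg.norm(x, axis=1)

def prioritize_for_cleaning(x, y, w, budget):
    """Indices of the `budget` most influential samples to hand to a cleaner."""
    order = np.argsort(-influence_scores(x, y, w))
    return order[:budget]

# toy data: sample 1 is confidently misfit (p ~ 0.99 vs label 0) with large
# norm, so it should be ranked first for cleaning
x = np.array([[0.1, 0.0], [5.0, 0.0], [0.2, 0.0]])
y = np.array([0.0, 0.0, 1.0])
w = np.array([1.0, 0.0])
top = prioritize_for_cleaning(x, y, w, budget=1)
print(top)  # [1]
```

Spending the cleaning budget on the highest-scoring samples is what lets a fixed number of corrected labels move the model the most.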
1 code implementation • 3 Jun 2021 • Kaustubh Sridhar, Oleg Sokolsky, Insup Lee, James Weimer
Improving adversarial robustness of neural networks remains a major challenge.
no code implementations • 30 Apr 2021 • Taylor J. Carpenter, Radoslav Ivanov, Insup Lee, James Weimer
This paper presents ModelGuard, a sampling-based approach to runtime model validation for Lipschitz-continuous models.
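The role the Lipschitz assumption plays can be shown in a one-dimensional sketch. This is not ModelGuard itself, only the underlying soundness idea under assumed details: if the model `f` has Lipschitz constant `L`, then between grid samples its output can deviate by at most `L` times half the sample spacing, so finitely many samples suffice to decide whether an observation is consistent with the model. All names are hypothetical.

```python
import numpy as np

def validate(f, L, y_obs, x_lo, x_hi, n=1000, noise=0.0):
    """Sampling-based validation: y_obs is consistent with model f on
    [x_lo, x_hi] iff some sampled prediction is within the slack that the
    Lipschitz constant L permits over the sample spacing."""
    xs = np.linspace(x_lo, x_hi, n)
    spacing = (x_hi - x_lo) / (n - 1)
    slack = L * spacing / 2 + noise   # max deviation of f between samples
    return bool(np.min(np.abs(f(xs) - y_obs)) <= slack)

f = np.sin                                 # Lipschitz with L = 1
print(validate(f, 1.0, 0.5, 0.0, np.pi))   # True: sin attains 0.5 on [0, pi]
print(validate(f, 1.0, 2.0, 0.0, np.pi))   # False: sin never reaches 2
```

Without the Lipschitz bound, no finite sample set could rule the model out, which is why the approach is restricted to Lipschitz-continuous models.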
no code implementations • 25 Feb 2021 • Sooyong Jang, Radoslav Ivanov, Insup Lee, James Weimer
As machine learning techniques become widely adopted in new domains, especially in safety-critical systems such as autonomous vehicles, it is crucial to provide accurate output uncertainty estimation.
no code implementations • 9 Nov 2020 • Sooyong Jang, Insup Lee, James Weimer
Providing reliable model uncertainty estimates is imperative to enabling robust decision making by autonomous agents and humans alike.
no code implementations • 29 Feb 2020 • Sangdon Park, Osbert Bastani, James Weimer, Insup Lee
Our algorithm uses importance weighting to correct for the shift from the training to the real-world distribution.
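The importance-weighting correction can be sketched on a toy shift. This is a generic self-normalized importance-sampling illustration under assumed Gaussian train and deployment distributions, not the paper's calibration algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# training distribution N(0, 1); deployment (real-world) distribution N(1, 1)
x_train = rng.normal(0.0, 1.0, 10000)

def density(x, mu):
    """Standard-deviation-1 Gaussian density with mean mu."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# importance weights w(x) = p_deploy(x) / p_train(x)
w = density(x_train, 1.0) / density(x_train, 0.0)

# reweighted training samples estimate a deployment-distribution quantity:
# E_deploy[x] ~= 1 even though the samples came from E_train[x] = 0
est = np.sum(w * x_train) / np.sum(w)
print(est)  # close to 1.0
```

The same reweighting applies to any training-set statistic (loss, coverage, calibration error), which is what lets guarantees stated under the training distribution transfer to the shifted real-world distribution.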
no code implementations • 23 Feb 2020 • Yiannis Kantaros, Taylor Carpenter, Kaustubh Sridhar, Yahan Yang, Insup Lee, James Weimer
To highlight this, we demonstrate the efficiency of the proposed detector on ImageNet, a task that is computationally challenging for the majority of relevant defenses, and on physically attacked traffic signs that may be encountered in real-time autonomy applications.
1 code implementation • 5 Nov 2018 • Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, Insup Lee
This paper presents Verisig, a hybrid system approach to verifying safety properties of closed-loop systems using neural networks as controllers.
no code implementations • 10 Aug 2017 • Sangdon Park, James Weimer, Insup Lee
Specifically, a generic metric is proposed that is tailored to measure the resilience of classification algorithms with respect to worst-case tampering of the training data.