no code implementations • 21 Sep 2023 • Yunye Gong, Yi Yao, Xiao Lin, Ajay Divakaran, Melinda Gervasio
Existing conformal prediction algorithms estimate prediction intervals at target confidence levels to characterize the performance of a regression model on new test samples.
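The standard split-conformal recipe underlying such intervals can be sketched as follows. This is a minimal illustration of conformal regression in general, not the algorithm proposed in the paper; the function and variable names are ours:

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Split-conformal prediction interval for a regression model.

    residuals_cal: absolute residuals |y - y_hat| on a held-out calibration set
    y_pred_test:   point predictions on new test samples
    alpha:         target miscoverage level (0.1 -> 90% nominal coverage)
    """
    n = len(residuals_cal)
    # Finite-sample-corrected quantile of the calibration residuals
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(residuals_cal, min(q_level, 1.0), method="higher")
    # Symmetric interval around each test prediction
    return y_pred_test - q, y_pred_test + q
```

Under exchangeability of calibration and test samples, intervals built this way cover the true response with probability at least 1 − alpha, which is exactly the per-confidence-level guarantee the paper's setting starts from.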
1 code implementation • 18 Jul 2023 • Pedro Sequeira, Melinda Gervasio
However, existing systems lack the mechanisms needed to give humans a holistic view of their competence, which impedes their adoption, particularly in critical applications where an agent's decisions can have significant consequences.
1 code implementation • 11 Nov 2022 • Pedro Sequeira, Jesse Hostetler, Melinda Gervasio
In this paper, we extend a recently proposed framework for explainable RL that is based on analyses of "interestingness."
1 code implementation • 17 Aug 2022 • Pedro Sequeira, Daniel Elenius, Jesse Hostetler, Melinda Gervasio
We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
no code implementations • 15 Jul 2022 • Eric Yeh, Pedro Sequeira, Jesse Hostetler, Melinda Gervasio
We present a novel generative method for producing unseen and plausible counterfactual examples for reinforcement learning (RL) agents based upon outcome variables that characterize agent behavior.
no code implementations • ICCV 2021 • Yunye Gong, Xiao Lin, Yi Yao, Thomas G. Dietterich, Ajay Divakaran, Melinda Gervasio
Existing calibration algorithms address the problem of covariate shift via unsupervised domain adaptation.
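For context, the standard *supervised* calibration baseline that such work builds on is temperature scaling, which fits a single scalar on labeled calibration data. The sketch below shows that baseline only; it is not the paper's unsupervised domain-adaptation method, and all names are ours:

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def temperature_scale(logits, labels, temps=np.linspace(0.5, 5.0, 46)):
    """Grid-search the temperature minimizing NLL on a labeled calibration set.

    Dividing logits by t > 1 softens overconfident predictions without
    changing the predicted class, improving calibration.
    """
    best_t, best_nll = 1.0, np.inf
    for t in temps:
        p = softmax(logits / t)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

Crucially, this baseline needs labels from the target distribution; the covariate-shift setting the paper addresses is precisely the case where such labels are unavailable, motivating unsupervised alternatives.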
2 code implementations • 19 Dec 2019 • Pedro Sequeira, Melinda Gervasio
We propose an explainable reinforcement learning (XRL) framework that analyzes an agent's history of interaction with the environment to extract interestingness elements that help explain its behavior.
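One interestingness element of this general flavor, the agent's (un)certainty in its action choices, can be sketched as the normalized entropy of a softmax policy over the agent's Q-values at a state. This is an illustrative analysis only, assuming value-based RL; the element name and function are ours, not the framework's API:

```python
import numpy as np

def execution_uncertainty(q_values):
    """Normalized entropy of the softmax policy at a state, in [0, 1].

    Values near 1 flag states where the agent is torn between actions;
    values near 0 flag states where one action clearly dominates. Such
    states are natural candidates for highlighting in explanations.
    """
    z = np.asarray(q_values, dtype=float)
    z = z - z.max()
    p = np.exp(z) / np.exp(z).sum()
    h = -(p * np.log(p + 1e-12)).sum()
    return float(h / np.log(len(p)))
```

Applied over the agent's interaction history, a summary of the most and least certain states gives a compact, behavior-grounded view of where the agent is confident and where it hesitates.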