no code implementations • 2 Feb 2024 • Thao Le, Tim Miller, Liz Sonenberg, Ronal Singh
Prior research on AI-assisted human decision-making has explored several different explainable AI (XAI) approaches.
no code implementations • 10 Mar 2023 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show that counterfactual explanations of confidence scores help study participants better understand and better trust a machine learning model's predictions.
no code implementations • 6 Jun 2022 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show in human-subject studies that counterfactual explanations of confidence scores help users better understand and better trust an AI model's predictions.
no code implementations • 29 Apr 2021 • Ronal Singh, Tim Miller, Darryn Reid
Results show that participants' constraints improved the expected return of the plans by 10% ($p < 0.05$) relative to baseline plans, demonstrating that human insight can be used in collaborative planning for resilience.
no code implementations • 15 Apr 2021 • Ronal Singh, Upol Ehsan, Marc Cheong, Mark O. Riedl, Tim Miller
Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally.
no code implementations • 3 Feb 2021 • Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, Liz Sonenberg, Eduardo Velloso, Frank Vetere
This paper investigates the prospects of using directive explanations to assist people in achieving recourse against machine learning decisions.