1 code implementation • 3 May 2023 • Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, Mark Riedl
We find that MI-CC systems with more extensive coverage of the design space are rated higher or on par across a variety of creative and goal-completion metrics, demonstrating that wider coverage of the design space can improve users' experience and achievement when using the system. Preferences varied greatly between expertise groups, suggesting the need for adaptive, personalized MI-CC systems. Participants also identified new design-space dimensions, including scrutability (the ability to poke and prod at models) and explainability.
no code implementations • 1 Feb 2023 • Upol Ehsan, Koustuv Saha, Munmun De Choudhury, Mark O. Riedl
Utilizing two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap.
no code implementations • 12 Nov 2022 • Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, Hal Daumé III
We found that the Seamful XAI design process helped users foresee AI harms, identify underlying reasons (seams), locate them in the AI's lifecycle, and learn how to leverage seamful information to improve XAI and user agency.
no code implementations • 11 Nov 2022 • Upol Ehsan, Mark O. Riedl
There is a growing frustration amongst researchers and developers in Explainable AI (XAI) over the lack of consensus about what is meant by 'explainability'.
no code implementations • 3 Jun 2022 • Upol Ehsan, Ranjit Singh, Jacob Metcalf, Mark O. Riedl
When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE).
no code implementations • 26 Sep 2021 • Upol Ehsan, Mark O. Riedl
To make Explainable AI (XAI) systems trustworthy, understanding harmful effects is just as important as producing well-designed explanations.
no code implementations • 28 Jul 2021 • Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl
Explainability of AI systems is critical for users to take informed actions.
no code implementations • 15 Apr 2021 • Ronal Singh, Upol Ehsan, Marc Cheong, Mark O. Riedl, Tim Miller
Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally.
no code implementations • 12 Jan 2021 • Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, Justin D. Weisz
We suggested constitutive design elements of social transparency (ST) and developed a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels.
no code implementations • 4 Feb 2020 • Upol Ehsan, Mark O. Riedl
In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design.
no code implementations • 11 Jan 2019 • Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark Riedl
The second study further explores user preferences among the generated rationales with regard to confidence in the autonomous agent, communicating failure, and unexpected behavior.
no code implementations • 26 Jul 2017 • Brent Harrison, Upol Ehsan, Mark O. Riedl
We then use this learned model to guide agent exploration via a modified version of policy shaping, making the agent more effective at learning in unseen environments.
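To make the idea concrete, here is a minimal sketch of policy shaping in Python: the agent's own action distribution is multiplied with an advice distribution, which here stands in for the learned human model. The Boltzmann policy, the multiplicative combination rule, and the confidence knob `c` are illustrative assumptions (in the spirit of Griffith et al.'s policy shaping), not the paper's actual implementation.

```python
import numpy as np

def boltzmann(values, temperature=1.0):
    """Turn a vector of action values into a probability distribution."""
    exp_v = np.exp((values - values.max()) / temperature)
    return exp_v / exp_v.sum()

def shaped_action_probs(q_values, advice_probs, c=0.8):
    """Combine the agent's policy with advice from a learned human model.

    Policy shaping multiplies the two distributions; `c` is an assumed
    confidence-in-advice knob, not a parameter from the paper.
    """
    agent_probs = boltzmann(np.asarray(q_values, dtype=float))
    advice = np.asarray(advice_probs, dtype=float) ** c
    combined = agent_probs * advice
    return combined / combined.sum()

# Example: four actions; the human model strongly favors action 2.
probs = shaped_action_probs(q_values=[0.1, 0.5, 0.4, 0.0],
                            advice_probs=[0.1, 0.1, 0.7, 0.1])
action = np.random.choice(len(probs), p=probs)
```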
no code implementations • 25 Feb 2017 • Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl
Results of these evaluations show that neural machine translation is able to accurately generate rationalizations that describe agent behavior, and that these rationalizations are more satisfying to humans than alternative methods of explanation.
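As an illustrative sketch of this "rationalization as translation" framing, the snippet below uses a GRU encoder-decoder (PyTorch) to map a tokenized state-action sequence to rationale tokens. The architecture, vocabulary sizes, and tokenization are assumptions for illustration; the paper's actual model may differ.

```python
import torch
import torch.nn as nn

class RationaleSeq2Seq(nn.Module):
    """Minimal encoder-decoder: 'translate' a tokenized state-action
    sequence into a natural-language rationale. Vocabulary sizes and
    dimensions are assumed, not taken from the paper."""
    def __init__(self, state_vocab=500, word_vocab=2000, hidden=128):
        super().__init__()
        self.enc_embed = nn.Embedding(state_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.dec_embed = nn.Embedding(word_vocab, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, word_vocab)

    def forward(self, state_tokens, rationale_tokens):
        # Encode the state-action sequence, then decode with teacher forcing.
        _, h = self.encoder(self.enc_embed(state_tokens))
        dec_out, _ = self.decoder(self.dec_embed(rationale_tokens), h)
        return self.out(dec_out)  # logits over the rationale vocabulary

# Toy usage: batch of 2 encoded state-action sequences and target rationales.
model = RationaleSeq2Seq()
states = torch.randint(0, 500, (2, 10))       # encoded game states + actions
rationales = torch.randint(0, 2000, (2, 12))  # target rationale word ids
logits = model(states, rationales)
loss = nn.functional.cross_entropy(logits.transpose(1, 2), rationales)
```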