Search Results for author: Upol Ehsan

Found 13 papers, 1 paper with code

Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems

1 code implementation • 3 May 2023 • Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, Mark Riedl

We find that MI-CC systems with more extensive coverage of the design space are rated higher or on par across a variety of creative and goal-completion metrics, demonstrating that wider coverage of the design space can improve user experience and achievement; that preferences vary greatly between expertise groups, motivating adaptive, personalized MI-CC systems; and that participants identified new design-space dimensions, including scrutability (the ability to poke and prod at models) and explainability.

Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI

no code implementations • 1 Feb 2023 • Upol Ehsan, Koustuv Saha, Munmun De Choudhury, Mark O. Riedl

Utilizing two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap.

Explainable Artificial Intelligence (XAI)

Seamful XAI: Operationalizing Seamful Design in Explainable AI

no code implementations • 12 Nov 2022 • Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, Hal Daume III

We found that the Seamful XAI design process helped users foresee AI harms, identify underlying reasons (seams), locate them in the AI's lifecycle, and learn how to leverage seamful information to improve XAI and user agency.

Explainable Artificial Intelligence (XAI)

Social Construction of XAI: Do We Need One Definition to Rule Them All?

no code implementations • 11 Nov 2022 • Upol Ehsan, Mark O. Riedl

There is growing frustration among researchers and developers in Explainable AI (XAI) over the lack of consensus on what is meant by 'explainability'.

Explainable Artificial Intelligence (XAI)

The Algorithmic Imprint

no code implementations • 3 Jun 2022 • Upol Ehsan, Ranjit Singh, Jacob Metcalf, Mark O. Riedl

When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE).

Ethics · Fairness

Explainability Pitfalls: Beyond Dark Patterns in Explainable AI

no code implementations • 26 Sep 2021 • Upol Ehsan, Mark O. Riedl

To make Explainable AI (XAI) systems trustworthy, understanding harmful effects is just as important as producing well-designed explanations.

Explainable Artificial Intelligence (XAI)

LEx: A Framework for Operationalising Layers of Machine Learning Explanations

no code implementations • 15 Apr 2021 • Ronal Singh, Upol Ehsan, Marc Cheong, Mark O. Riedl, Tim Miller

Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally.

BIG-bench Machine Learning · Position

Expanding Explainability: Towards Social Transparency in AI systems

no code implementations • 12 Jan 2021 • Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, Justin D. Weisz

We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels.

Decision Making · Explainable Artificial Intelligence (XAI)

Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach

no code implementations • 4 Feb 2020 • Upol Ehsan, Mark O. Riedl

In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design.

Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions

no code implementations • 11 Jan 2019 • Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark Riedl

The second study further explores user preferences among the generated rationales with regard to confidence in the autonomous agent, communicating failure, and unexpected behavior.

Explanation Generation

Guiding Reinforcement Learning Exploration Using Natural Language

no code implementations • 26 Jul 2017 • Brent Harrison, Upol Ehsan, Mark O. Riedl

We then use this learned model to guide agent exploration using a modified version of policy shaping to make it more effective at learning in unseen environments (a minimal sketch of the shaping step appears below).

Machine Translation · Q-Learning · +3
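The abstract above describes combining a reinforcement learning agent's own policy with advice derived from natural-language instructions via policy shaping. The Python sketch below illustrates only that combination step, under the assumption that the language-grounded advice model already outputs an action distribution; the function names, temperature, and example numbers are hypothetical and not taken from the paper.

```python
import numpy as np

def boltzmann(q_values, temperature=1.0):
    """Turn Q-values into a softmax action distribution."""
    prefs = np.exp((q_values - q_values.max()) / temperature)
    return prefs / prefs.sum()

def shaped_policy(q_values, advice_probs, eps=1e-8):
    """Policy shaping: multiply the agent's Q-derived distribution by the
    advice distribution (assumed here to come from a language-grounded
    model) and renormalise."""
    combined = boltzmann(q_values) * advice_probs + eps
    return combined / combined.sum()

# Hypothetical example with 4 actions; the advice model favours action 2.
q = np.array([1.0, 0.5, 0.8, 0.2])
advice = np.array([0.1, 0.1, 0.7, 0.1])
probs = shaped_policy(q, advice)
action = np.random.choice(len(q), p=probs)
```

Multiplying the two distributions lets strong advice steer exploration while still respecting the agent's learned value estimates.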

Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations

no code implementations • 25 Feb 2017 • Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl

Results of these evaluations show that neural machine translation can accurately generate rationalizations that describe agent behavior, and that these rationalizations are more satisfying to humans than alternative methods of explanation (a minimal sketch of the translation framing appears below).

Explanation Generation · Machine Translation · +1
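The rationalization approach treats explanation generation as translation from internal state representations to natural language. The following PyTorch encoder-decoder is a minimal, illustrative sketch of that framing; the class name, vocabulary sizes, and tensor shapes are hypothetical placeholders, not the paper's architecture or data.

```python
import torch
import torch.nn as nn

class RationaleSeq2Seq(nn.Module):
    """Minimal encoder-decoder: encode a sequence of state tokens and
    decode a natural-language rationale, treating rationale generation
    as a translation problem."""
    def __init__(self, state_vocab, text_vocab, hidden=128):
        super().__init__()
        self.enc_embed = nn.Embedding(state_vocab, hidden)
        self.dec_embed = nn.Embedding(text_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, text_vocab)

    def forward(self, state_tokens, rationale_tokens):
        _, h = self.encoder(self.enc_embed(state_tokens))    # context vector
        dec_out, _ = self.decoder(self.dec_embed(rationale_tokens), h)
        return self.out(dec_out)                              # token logits

# Hypothetical shapes: a batch of 2 state sequences -> rationale token logits,
# trained with cross-entropy against reference rationales.
model = RationaleSeq2Seq(state_vocab=50, text_vocab=1000)
states = torch.randint(0, 50, (2, 10))
rationales = torch.randint(0, 1000, (2, 12))
logits = model(states, rationales)   # shape (2, 12, 1000)
```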
