Search Results for author: Q. Vera Liao

Found 27 papers, 2 papers with code

Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI

no code implementations22 Jun 2022 Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar

We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts.

Designing for Responsible Trust in AI Systems: A Communication Perspective

no code implementations29 Apr 2022 Q. Vera Liao, S. Shyam Sundar

Current literature and public discourse on "trust in AI" are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust.

Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation

no code implementations25 Apr 2022 Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q. Vera Liao, Yunfeng Zhang, Chenhao Tan

Despite impressive performance on many benchmark datasets, AI models can still make mistakes, especially on out-of-distribution examples.
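The conditional-delegation idea in this abstract can be sketched in a few lines: route an input to the AI only when a delegation rule trusts it, and defer to a human otherwise. The confidence threshold and the toy moderation model below are illustrative assumptions, not the paper's actual policy or study setup.

```python
def conditional_delegation(examples, model, human, trust_rule):
    """Route each example to the AI model or a human per the trust rule."""
    decisions = []
    for x in examples:
        label, confidence = model(x)
        if trust_rule(x, confidence):
            decisions.append(("ai", label))       # delegate to the model
        else:
            decisions.append(("human", human(x)))  # defer to a human
    return decisions

# Toy model: flags posts containing a banned word with high confidence.
def toy_model(post):
    return ("remove", 0.95) if "spam" in post else ("keep", 0.55)

def toy_human(post):
    return "remove" if "scam" in post else "keep"

routed = conditional_delegation(
    ["buy spam now", "hello friend", "great scam deal"],
    toy_model,
    toy_human,
    trust_rule=lambda x, conf: conf >= 0.9,  # delegate only when confident
)
print(routed)
# [('ai', 'remove'), ('human', 'keep'), ('human', 'remove')]
```

The interesting design question the paper studies is what the `trust_rule` should be; a fixed confidence cutoff is only the simplest choice.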

Investigating Explainability of Generative AI for Code through Scenario-based Design

no code implementations10 Feb 2022 Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz

Using scenario-based design and question-driven XAI design approaches, we explore users' explainability needs for GenAI in three software engineering use cases: natural language to code, code translation, and code auto-completion.

Code Translation

Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies

no code implementations21 Dec 2021 Vivian Lai, Chacha Chen, Q. Vera Liao, Alison Smith-Renner, Chenhao Tan

Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI to make decisions.

Decision Making

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences

no code implementations20 Oct 2021 Q. Vera Liao, Kush R. Varshney

In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI works that take human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI.

The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations

no code implementations28 Jul 2021 Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl

In this paper, we conduct a mixed-methods study of how two different groups of whos--people with and without a background in AI--perceive different types of AI explanations.

Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and Communicating the Uncertainty of AI

1 code implementation2 Jun 2021 Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jiri Navratil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang

In this paper, we describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
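UQ360 itself ships estimators and metrics for this; as a library-free illustration of the underlying idea, here is one common way to quantify predictive uncertainty: the spread of an ensemble's predictions. This is a generic sketch, not the UQ360 API, and the numbers are made up.

```python
import statistics

def ensemble_uncertainty(predictions):
    """Mean prediction plus standard deviation as an uncertainty estimate."""
    mean = statistics.fmean(predictions)
    std = statistics.stdev(predictions)
    return mean, std

# Five hypothetical models agree closely on the first input...
mean1, std1 = ensemble_uncertainty([0.70, 0.72, 0.71, 0.69, 0.73])
# ...and disagree on the second, signalling higher uncertainty.
mean2, std2 = ensemble_uncertainty([0.2, 0.9, 0.5, 0.8, 0.1])
print(std1 < std2)  # True
```

A toolkit like UQ360 adds what this sketch omits: calibration metrics and guidance on communicating the resulting uncertainty to users.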


Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML

no code implementations9 Apr 2021 Shweta Narkar, Yunfeng Zhang, Q. Vera Liao, Dakuo Wang, Justin D Weisz

Automated Machine Learning (AutoML) is a rapidly growing set of technologies that automate the model development pipeline by searching model space and generating candidate models.

AutoML, Feature Importance

Question-Driven Design Process for Explainable AI User Experiences

no code implementations8 Apr 2021 Q. Vera Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby Sow

A pervasive design issue of AI systems is their explainability--how to provide appropriate information to help users understand the AI.

Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models

no code implementations29 Jan 2021 Soya Park, April Wang, Ban Kawas, Q. Vera Liao, David Piorkowski, Marina Danilevsky

Data scientists face a steep learning curve in understanding a new domain for which they want to build machine learning (ML) models.

Expanding Explainability: Towards Social Transparency in AI systems

no code implementations12 Jan 2021 Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, Justin D. Weisz

We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effect and implications at the technical, decision-making, and organizational levels.

Decision Making

How Much Automation Does a Data Scientist Want?

no code implementations7 Jan 2021 Dakuo Wang, Q. Vera Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael Muller, Lisa Amini

There is an active research thread in AI, AutoAI, that aims to develop systems for automating the DS/ML lifecycle end-to-end.


Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation

no code implementations6 Sep 2020 Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

The similarity score between feature rankings provided by the annotator and the local model explanation is used to assign a weight to each corresponding committee model.

Active Learning, Feature Importance
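The Active Learning++ weighting step above can be sketched directly: score how well each committee model's local explanation agrees with the annotator's feature ranking, and use that score as the model's weight. The simple pairwise rank-agreement measure below is an assumption for illustration; the paper's exact similarity metric may differ.

```python
from itertools import combinations

def rank_similarity(rank_a, rank_b):
    """Fraction of feature pairs ordered the same way in both rankings
    (a simple Kendall-style agreement score in [0, 1])."""
    pos_a = {f: i for i, f in enumerate(rank_a)}
    pos_b = {f: i for i, f in enumerate(rank_b)}
    pairs = list(combinations(rank_a, 2))
    agree = sum(
        (pos_a[f] - pos_a[g]) * (pos_b[f] - pos_b[g]) > 0 for f, g in pairs
    )
    return agree / len(pairs)

# Annotator's rationale as a feature ranking, most important first.
annotator = ["age", "income", "tenure"]
# Local-explanation rankings from three hypothetical committee models.
committee = {
    "m1": ["age", "income", "tenure"],   # matches the rationale exactly
    "m2": ["income", "age", "tenure"],   # one pair swapped
    "m3": ["tenure", "income", "age"],   # fully reversed
}
weights = {m: rank_similarity(annotator, r) for m, r in committee.items()}
print(weights)  # m1 weighted highest, m3 lowest
```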

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

no code implementations24 Jan 2020 Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller

We conducted an empirical study comparing the model learning outcomes, feedback content, and experience of XAL with those of traditional AL and coactive learning (providing the model's prediction without the explanation).

Active Learning

Questioning the AI: Informing Design Practices for Explainable AI User Experiences

no code implementations8 Jan 2020 Q. Vera Liao, Daniel Gruen, Sarah Miller

A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

no code implementations7 Jan 2020 Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy

In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on their domain knowledge complementary to the model's to ensure task success.

Decision Making

Enabling Value Sensitive AI Systems through Participatory Design Fictions

no code implementations13 Dec 2019 Q. Vera Liao, Michael Muller

Two general routes have been followed to develop artificial agents that are sensitive to human values: a top-down approach to encode values into the agents, and a bottom-up approach to learn from human actions, whether from real-world interactions or stories.

Doc2Dial: a Framework for Dialogue Composition Grounded in Business Documents

no code implementations NeurIPS 2019 Workshop on Document Intelligence Song Feng, Kshitij Fadnis, Q. Vera Liao, Luis A. Lastras

We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing.

Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions

no code implementations25 May 2019 Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Changyan Chi, Wenxi Chen, Huahai Yang

The rise of increasingly more powerful chatbots offers a new way to collect information through conversational surveys, where a chatbot asks open-ended questions, interprets a user's free-text responses, and probes answers whenever needed.

Chatbot, Informativeness

Bootstrapping Conversational Agents With Weak Supervision

no code implementations14 Dec 2018 Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor

We report on a user study that shows positive user feedback for this new approach to build conversational agents, and demonstrates the effectiveness of using data programming for auto-labeling.
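Data programming for auto-labeling, as used in the paper above, can be illustrated with a minimal sketch: several noisy labeling functions vote on each utterance and a majority vote picks the label. The labeling functions and intent labels below are hypothetical, and real data-programming systems (e.g., the paper's) aggregate weak labels with more sophisticated modeling than a plain vote.

```python
from collections import Counter

# Each labeling function maps an utterance to an intent label or None.
def lf_keyword_billing(u):
    return "billing" if "invoice" in u or "charge" in u else None

def lf_keyword_password(u):
    return "password_reset" if "password" in u else None

def lf_keyword_refund(u):
    return "billing" if "refund" in u else None

LFS = [lf_keyword_billing, lf_keyword_password, lf_keyword_refund]

def auto_label(utterance):
    """Aggregate labeling-function votes; abstain when no function fires."""
    votes = [v for v in (lf(utterance) for lf in LFS) if v is not None]
    if not votes:
        return None
    top, _count = Counter(votes).most_common(1)[0]
    return top

print(auto_label("please refund this charge"))  # billing
print(auto_label("I forgot my password"))       # password_reset
print(auto_label("hello there"))                # None
```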

A Measure for Dialog Complexity and its Application in Streamlining Service Operations

no code implementations4 Aug 2017 Q. Vera Liao, Biplav Srivastava, Pavan Kapanipathi

Dialog is a natural modality for interaction between customers and businesses in the service industry.
