Search Results for author: Q. Vera Liao

Found 41 papers, 3 papers with code

Human-Centered Evaluation of Explanations

no code implementations • NAACL (ACL) 2022 • Jordan Boyd-Graber, Samuel Carton, Shi Feng, Q. Vera Liao, Tania Lombrozo, Alison Smith-Renner, Chenhao Tan

The NLP community is increasingly interested in providing explanations for NLP models to help people make sense of model behavior and potentially improve human interaction with models.

Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking

no code implementations • 8 Feb 2024 • Nikhil Sharma, Q. Vera Liao, Ziang Xiao

Large language model (LLM)-powered conversational search systems have already been used by hundreds of millions of people, and are believed to bring many benefits over conventional search.

Conversational Search

AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

no code implementations • 2 Jun 2023 • Q. Vera Liao, Jennifer Wortman Vaughan

It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the intersection of AI and human-computer interaction (HCI) highlight that we must do so with a human-centered perspective: Transparency is fundamentally about supporting appropriate human understanding, and this understanding is sought by different stakeholders with different goals in different contexts.

Rethinking Model Evaluation as Narrowing the Socio-Technical Gap

no code implementations • 1 Jun 2023 • Q. Vera Liao, Ziang Xiao

The recent development of generative and large language models (LLMs) poses new challenges for model evaluation that the research community and industry are grappling with.

Explainable Artificial Intelligence (XAI) · NLG Evaluation

Evaluating Evaluation Metrics: A Framework for Analyzing NLG Evaluation Metrics using Measurement Theory

1 code implementation • 24 May 2023 • Ziang Xiao, Susu Zhang, Vivian Lai, Q. Vera Liao

We address a fundamental challenge in Natural Language Generation (NLG) model evaluation -- the design and evaluation of evaluation metrics.

NLG Evaluation · Text Generation

Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding

no code implementations • 17 Apr 2023 • Ziang Xiao, Xingdi Yuan, Q. Vera Liao, Rania Abdelghani, Pierre-Yves Oudeyer

In this study, we explored the use of large language models (LLMs) in supporting deductive coding, a major category of qualitative analysis where researchers use pre-determined codebooks to label the data into a fixed set of codes.
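The codebook-plus-LLM workflow described above can be sketched as building a labeling prompt from a fixed set of codes. The codebook entries and prompt wording below are illustrative stand-ins, not the paper's actual materials:

```python
# Hypothetical codebook for coding open-ended survey responses.
CODEBOOK = {
    "curiosity": "The participant expresses a desire to learn or explore.",
    "confusion": "The participant indicates they do not understand something.",
    "off-topic": "The response is unrelated to the question asked.",
}

def build_coding_prompt(codebook, response):
    """Assemble a deductive-coding prompt asking the model to assign
    exactly one code from the pre-determined codebook."""
    lines = ["You are coding qualitative data with a fixed codebook.", "Codes:"]
    for code, definition in codebook.items():
        lines.append(f"- {code}: {definition}")
    lines.append(f'Response: "{response}"')
    lines.append("Answer with exactly one code from the list.")
    return "\n".join(lines)

prompt = build_coding_prompt(CODEBOOK, "Wait, what does this question mean?")
print(prompt)
```

The prompt would then be sent to an LLM, and the returned code compared against human coders for agreement.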

Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling

no code implementations • 17 Apr 2023 • Haotian Li, Yun Wang, Q. Vera Liao, Huamin Qu

Data storytelling plays an important role in data workers' daily jobs since it boosts team collaboration and public communication.

fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks

no code implementations • 22 Feb 2023 • Steven Moore, Q. Vera Liao, Hariharan Subramonyam

To design with AI models, user experience (UX) designers must assess the fit between the model and user needs.

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience

no code implementations • 21 Feb 2023 • Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan

To address this problem, we bridge the literature on AI design and AI transparency to explore whether and how frameworks for transparent model reporting can support design ideation with pre-trained models.

Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions

no code implementations • 14 Feb 2023 • Helena Vasconcelos, Gagan Bansal, Adam Fourney, Q. Vera Liao, Jennifer Wortman Vaughan

Through a mixed-methods study with 30 programmers, we compare three conditions: providing the AI system's code completion alone, highlighting tokens with the lowest likelihood of being generated by the underlying generative model, and highlighting tokens with the highest predicted likelihood of being edited by a programmer.

Code Completion
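The second condition above — highlighting the tokens least likely under the generative model — can be sketched from per-token log-probabilities. The tokens and log-prob values here are made up for illustration, assuming the code model exposes them:

```python
def lowest_likelihood_tokens(tokens, logprobs, fraction=0.2):
    """Return indices of the completion tokens least likely under the model,
    i.e. the candidates to highlight as uncertain for the programmer."""
    k = max(1, int(len(tokens) * fraction))
    ranked = sorted(range(len(tokens)), key=lambda i: logprobs[i])
    return sorted(ranked[:k])

# Hypothetical completion with per-token log-probabilities.
tokens = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "-", "b"]
logprobs = [-0.1, -2.5, -0.05, -0.3, -0.02, -0.4, -0.01, -0.01, -0.2, -0.1, -3.1, -0.6]
print(lowest_likelihood_tokens(tokens, logprobs))  # → [1, 10]
```

Here the function name "add" and the "-" operator get the lowest likelihoods, so they would be highlighted; the paper's third condition would instead rank tokens by predicted edit likelihood.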

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

no code implementations • 18 Jan 2023 • Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal

AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong.

Decision Making

Seamful XAI: Operationalizing Seamful Design in Explainable AI

no code implementations • 12 Nov 2022 • Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, Hal Daume III

While black-boxing AI systems can make the user experience seamless, hiding the seams risks disempowering users to mitigate fallouts from AI mistakes.

Explainable Artificial Intelligence (XAI)

Designing for Responsible Trust in AI Systems: A Communication Perspective

no code implementations • 29 Apr 2022 • Q. Vera Liao, S. Shyam Sundar

Current literature and public discourse on "trust in AI" are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust.

Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation

no code implementations • 25 Apr 2022 • Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q. Vera Liao, Yunfeng Zhang, Chenhao Tan

Despite impressive performance in many benchmark datasets, AI models can still make mistakes, especially among out-of-distribution examples.

Open-Ended Question Answering
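Conditional delegation of this kind can be sketched as a human-authored rule that decides which instances the model may handle alone, routing the rest (including likely out-of-distribution cases) to human moderators. The rule, fields, and threshold below are illustrative, not from the paper:

```python
def delegate_to_ai(post, model_confidence, threshold=0.9):
    """Human-authored delegation rule for content moderation: let the AI
    act alone only on in-scope, high-confidence instances."""
    in_scope = post.get("language") == "en" and not post.get("contains_image", False)
    return in_scope and model_confidence >= threshold

post = {"language": "en", "contains_image": False}
print(delegate_to_ai(post, 0.95))  # → True: matches the delegation rule
print(delegate_to_ai(post, 0.70))  # → False: low confidence goes to a human
```

The design point is that humans specify the conditions (trusted regions of input space), rather than trusting the model's confidence alone.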

Investigating Explainability of Generative AI for Code through Scenario-based Design

no code implementations • 10 Feb 2022 • Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz

Using scenario-based design and question-driven XAI design approaches, we explore users' explainability needs for GenAI in three software engineering use cases: natural language to code, code translation, and code auto-completion.

Code Translation · Explainable Artificial Intelligence (XAI)

Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies

no code implementations • 21 Dec 2021 • Vivian Lai, Chacha Chen, Q. Vera Liao, Alison Smith-Renner, Chenhao Tan

Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI to make decisions.

Decision Making

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences

no code implementations • 20 Oct 2021 • Q. Vera Liao, Kush R. Varshney

In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI works that take human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI.

Explainable Artificial Intelligence (XAI) · Navigate

The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations

no code implementations • 28 Jul 2021 • Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl

In this paper, we conduct a mixed-methods study of how two different groups of whos--people with and without a background in AI--perceive different types of AI explanations.

Explainable Artificial Intelligence (XAI)

Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML

no code implementations • 9 Apr 2021 • Shweta Narkar, Yunfeng Zhang, Q. Vera Liao, Dakuo Wang, Justin D. Weisz

Automated Machine Learning (AutoML) is a rapidly growing set of technologies that automate the model development pipeline by searching model space and generating candidate models.

AutoML · Explainable Artificial Intelligence (XAI)

Question-Driven Design Process for Explainable AI User Experiences

no code implementations • 8 Apr 2021 • Q. Vera Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby Sow

A pervasive design issue of AI systems is their explainability--how to provide appropriate information to help users understand the AI.

Explainable Artificial Intelligence (XAI)

Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models

no code implementations • 29 Jan 2021 • Soya Park, April Wang, Ban Kawas, Q. Vera Liao, David Piorkowski, Marina Danilevsky

Data scientists face a steep learning curve in understanding a new domain for which they want to build machine learning (ML) models.

Expanding Explainability: Towards Social Transparency in AI systems

no code implementations • 12 Jan 2021 • Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, Justin D. Weisz

We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effect and implications at the technical, decision-making, and organizational level.

Decision Making · Explainable Artificial Intelligence (XAI)

How Much Automation Does a Data Scientist Want?

no code implementations • 7 Jan 2021 • Dakuo Wang, Q. Vera Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael Muller, Lisa Amini

There is an active research thread in AI, AutoAI, that aims to develop systems for automating the DS/ML lifecycle end to end.

AutoML · Marketing

Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation

no code implementations • 6 Sep 2020 • Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

The similarity score between feature rankings provided by the annotator and the local model explanation is used to assign a weight to each corresponding committee model.

Active Learning · Feature Importance
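The weighting step described above — comparing the annotator's feature ranking with each committee model's local explanation ranking — can be sketched with a Kendall rank correlation. The rankings and the shift-to-[0, 1] normalization below are illustrative assumptions, not necessarily the paper's exact formulation:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same features;
    rank_a[i] is the rank assigned to feature i."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def committee_weights(annotator_rank, explanation_ranks):
    """Weight each committee model by how well its local explanation's
    feature ranking agrees with the annotator's rationale."""
    sims = [(kendall_tau(annotator_rank, r) + 1) / 2 for r in explanation_ranks]
    total = sum(sims)
    return [s / total for s in sims]

annotator = [1, 2, 3, 4]           # annotator ranks feature 0 most important
model_explanations = [
    [1, 2, 3, 4],                  # agrees perfectly
    [4, 3, 2, 1],                  # fully reversed
    [2, 1, 3, 4],                  # mostly agrees
]
print(committee_weights(annotator, model_explanations))
```

Models whose explanations agree with the annotator's rationale get larger voting weights in the query-by-committee step.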

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

no code implementations • 24 Jan 2020 • Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller

We conducted an empirical study comparing the model learning outcomes, feedback content and experience with XAL, to that of traditional AL and coactive learning (providing the model's prediction without the explanation).

Active Learning · Explainable Artificial Intelligence (XAI)

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

no code implementations • 7 Jan 2020 • Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy

In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on their domain knowledge complementary to the model's to ensure task success.

Decision Making

Enabling Value Sensitive AI Systems through Participatory Design Fictions

no code implementations • 13 Dec 2019 • Q. Vera Liao, Michael Muller

Two general routes have been followed to develop artificial agents that are sensitive to human values---a top-down approach to encode values into the agents, and a bottom-up approach to learn from human actions, whether from real-world interactions or stories.

Doc2Dial: a Framework for Dialogue Composition Grounded in Business Documents

no code implementations • NeurIPS Workshop on Document Intelligence 2019 • Song Feng, Kshitij Fadnis, Q. Vera Liao, Luis A. Lastras

We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing.

Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions

no code implementations • 25 May 2019 • Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Changyan Chi, Wenxi Chen, Huahai Yang

The rise of increasingly more powerful chatbots offers a new way to collect information through conversational surveys, where a chatbot asks open-ended questions, interprets a user's free-text responses, and probes answers whenever needed.

Chatbot · Informativeness

Bootstrapping Conversational Agents With Weak Supervision

no code implementations • 14 Dec 2018 • Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor

We report on a user study that shows positive user feedback for this new approach to build conversational agents, and demonstrates the effectiveness of using data programming for auto-labeling.
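Data programming for auto-labeling, as used above, can be sketched as a set of weak labeling functions that vote on each utterance's intent. The functions, keyword rules, and labels below are hypothetical, not the paper's actual labeling functions:

```python
# Each labeling function returns a label or abstains (None).
ABSTAIN = None

def lf_greeting(text):
    return "greeting" if any(w in text.lower() for w in ("hello", "hi ")) else ABSTAIN

def lf_billing(text):
    return "billing" if "invoice" in text.lower() or "charge" in text.lower() else ABSTAIN

def lf_goodbye(text):
    return "goodbye" if "bye" in text.lower() else ABSTAIN

def auto_label(text, lfs):
    """Majority vote over the labeling functions that did not abstain."""
    votes = [v for v in (lf(text) for lf in lfs) if v is not ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_greeting, lf_billing, lf_goodbye]
print(auto_label("I was charged twice on my invoice", lfs))  # → billing
```

A full data-programming pipeline would learn per-function accuracies rather than a simple majority vote, then train the intent classifier on the resulting noisy labels.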

A Measure for Dialog Complexity and its Application in Streamlining Service Operations

no code implementations • 4 Aug 2017 • Q. Vera Liao, Biplav Srivastava, Pavan Kapanipathi

Dialog is a natural modality for interaction between customers and businesses in the service industry.
