no code implementations • NAACL (ACL) 2022 • Jordan Boyd-Graber, Samuel Carton, Shi Feng, Q. Vera Liao, Tania Lombrozo, Alison Smith-Renner, Chenhao Tan
The NLP community is increasingly interested in providing explanations for NLP models to help people make sense of model behavior and potentially improve human interaction with models.
no code implementations • 12 Feb 2025 • Sunnie S. Y. Kim, Jennifer Wortman Vaughan, Q. Vera Liao, Tania Lombrozo, Olga Russakovsky
Large language models (LLMs) can produce erroneous responses that sound fluent and convincing, raising the risk that users will rely on these responses as if they were correct.
no code implementations • 22 Jan 2025 • Jingshu Li, Yitian Yang, Q. Vera Liao, Junti Zhang, Yi-chieh Lee
Complementary collaboration between humans and AI is essential for human-AI decision making.
no code implementations • 20 Nov 2024 • Angel Hsing-Chi Hwang, Q. Vera Liao, Su Lin Blodgett, Alexandra Olteanu, Adam Trischler
Given the rising proliferation and diversity of AI writing assistance tools, especially those powered by large language models (LLMs), both writers and readers may have concerns about the impact of these tools on the authenticity of written work.
1 code implementation • 13 Jun 2024 • Yu Lu Liu, Su Lin Blodgett, Jackie Chi Kit Cheung, Q. Vera Liao, Alexandra Olteanu, Ziang Xiao
Benchmarking is seen as critical to assessing progress in NLP.
no code implementations • 1 May 2024 • Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan
However, there has been little empirical work examining how users perceive and act upon LLMs' expressions of uncertainty.
no code implementations • 8 Feb 2024 • Nikhil Sharma, Q. Vera Liao, Ziang Xiao
Conversational search systems powered by large language models (LLMs) have already been used by hundreds of millions of people and are believed to bring many benefits over conventional search.
no code implementations • 2 Jun 2023 • Q. Vera Liao, Jennifer Wortman Vaughan
It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the intersection of AI and human-computer interaction (HCI) highlight that we must do so with a human-centered perspective: Transparency is fundamentally about supporting appropriate human understanding, and this understanding is sought by different stakeholders with different goals in different contexts.
no code implementations • 1 Jun 2023 • Q. Vera Liao, Ziang Xiao
The recent development of generative large language models (LLMs) poses new challenges for model evaluation that the research community and industry have been grappling with.
1 code implementation • 24 May 2023 • Ziang Xiao, Susu Zhang, Vivian Lai, Q. Vera Liao
We address a fundamental challenge in Natural Language Generation (NLG) model evaluation: the design and evaluation of evaluation metrics.
no code implementations • 17 Apr 2023 • Ziang Xiao, Xingdi Yuan, Q. Vera Liao, Rania Abdelghani, Pierre-Yves Oudeyer
In this study, we explored the use of large language models (LLMs) in supporting deductive coding, a major category of qualitative analysis where researchers use pre-determined codebooks to label the data into a fixed set of codes.
no code implementations • 17 Apr 2023 • Haotian Li, Yun Wang, Q. Vera Liao, Huamin Qu
Data storytelling plays an important role in data workers' daily work, since it supports both team collaboration and public communication.
no code implementations • 22 Feb 2023 • Steven Moore, Q. Vera Liao, Hariharan Subramonyam
To design with AI models, user experience (UX) designers must assess the fit between the model and user needs.
no code implementations • 21 Feb 2023 • Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan
To address this problem, we bridge the literature on AI design and AI transparency to explore whether and how frameworks for transparent model reporting can support design ideation with pre-trained models.
no code implementations • 16 Feb 2023 • Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q. Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
no code implementations • 14 Feb 2023 • Helena Vasconcelos, Gagan Bansal, Adam Fourney, Q. Vera Liao, Jennifer Wortman Vaughan
Through a mixed-methods study with 30 programmers, we compare three conditions: providing the AI system's code completion alone, highlighting tokens with the lowest likelihood of being generated by the underlying generative model, and highlighting tokens with the highest predicted likelihood of being edited by a programmer.
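To make the first highlighting condition concrete, here is a minimal sketch (not the study's implementation) that scores each token of a completion by its log-probability under a causal language model and marks the least likely ones; the model choice, the bottom-20% cutoff, and the marker format are illustrative assumptions.

```python
# Minimal sketch (not the study's implementation): flag low-likelihood tokens
# in a model-generated completion using per-token log-probabilities.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # illustrative choice; the study's underlying model is not specified here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

completion = "def add(a, b):\n    return a + b"
inputs = tokenizer(completion, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab)

# Log-probability of each token given the preceding tokens.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
token_ids = inputs["input_ids"][:, 1:]
token_lp = log_probs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)[0]

tokens = tokenizer.convert_ids_to_tokens(token_ids[0].tolist())
threshold = token_lp.quantile(0.2)  # assumed cutoff: bottom 20% of tokens
for tok, lp in zip(tokens, token_lp):
    marker = f"<<{tok}>>" if lp < threshold else tok  # "highlight" unlikely tokens
    print(f"{marker:>15}  logprob={lp.item():.2f}")
```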
no code implementations • 23 Jan 2023 • Vivian Lai, Yiming Zhang, Chacha Chen, Q. Vera Liao, Chenhao Tan
As a result, current XAI techniques are often found to be hard to use and lack effectiveness.
no code implementations • 18 Jan 2023 • Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal
AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong.
no code implementations • 12 Nov 2022 • Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, Hal Daume III
We found that the Seamful XAI design process helped users foresee AI harms, identify underlying reasons (seams), locate them in the AI's lifecycle, and learn how to leverage seamful information to improve XAI and user agency.
no code implementations • 22 Jun 2022 • Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts.
no code implementations • 29 Apr 2022 • Q. Vera Liao, S. Shyam Sundar
Current literature and public discourse on "trust in AI" are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust.
no code implementations • 25 Apr 2022 • Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q. Vera Liao, Yunfeng Zhang, Chenhao Tan
Despite impressive performance in many benchmark datasets, AI models can still make mistakes, especially among out-of-distribution examples.
no code implementations • 10 Feb 2022 • Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz
Using scenario-based design and question-driven XAI design approaches, we explore users' explainability needs for GenAI in three software engineering use cases: natural language to code, code translation, and code auto-completion.
no code implementations • 21 Dec 2021 • Vivian Lai, Chacha Chen, Q. Vera Liao, Alison Smith-Renner, Chenhao Tan
Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI to make decisions.
no code implementations • 20 Oct 2021 • Q. Vera Liao, Kush R. Varshney
In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI works that take human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI.
no code implementations • 24 Sep 2021 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations.
no code implementations • 28 Jul 2021 • Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl
Explainability of AI systems is critical for users to take informed actions.
1 code implementation • 2 Jun 2021 • Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jiri Navratil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang
In this paper, we describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
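For readers unfamiliar with uncertainty quantification, the snippet below illustrates one form of it that such toolkits cover, prediction intervals from quantile regression; it uses scikit-learn rather than UQ360's own API, so treat it as a generic sketch of the idea, not an example of the toolkit.

```python
# Generic illustration of predictive uncertainty (not UQ360's actual API):
# estimate a 90% prediction interval with quantile regression.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)  # noisy toy data

lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)  # 5th percentile
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)  # 95th percentile
point = GradientBoostingRegressor(loss="squared_error").fit(X, y)         # point estimate

x_new = np.array([[1.0]])
print("prediction:", point.predict(x_new)[0])
print("90% interval:", (lower.predict(x_new)[0], upper.predict(x_new)[0]))
```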
no code implementations • 9 Apr 2021 • Shweta Narkar, Yunfeng Zhang, Q. Vera Liao, Dakuo Wang, Justin D Weisz
Automated Machine Learning (AutoML) is a rapidly growing set of technologies that automate the model development pipeline by searching model space and generating candidate models.
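The toy sketch below shows the kind of model-space search that AutoML systems automate, evaluating a handful of candidate models and ranking them by cross-validation score; the candidates and scoring choices are illustrative and not tied to any specific AutoML system discussed in the paper.

```python
# Toy sketch of AutoML-style model search (illustrative, not a specific AutoML system):
# evaluate a small space of candidate models and report them ranked by CV score.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gboost": GradientBoostingClassifier(random_state=0),
}

leaderboard = sorted(
    ((name, cross_val_score(model, X, y, cv=5).mean()) for name, model in candidates.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in leaderboard:
    print(f"{name:15s} mean CV accuracy = {score:.3f}")
```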
no code implementations • 8 Apr 2021 • Q. Vera Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby Sow
A pervasive design issue of AI systems is their explainability: how to provide appropriate information to help users understand the AI.
no code implementations • 29 Jan 2021 • Soya Park, April Wang, Ban Kawas, Q. Vera Liao, David Piorkowski, Marina Danilevsky
Data scientists face a steep learning curve in understanding a new domain for which they want to build machine learning (ML) models.
no code implementations • 12 Jan 2021 • Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, Justin D. Weisz
We suggested constitutive design elements of social transparency (ST) and developed a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels.
no code implementations • 7 Jan 2021 • Dakuo Wang, Q. Vera Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael Muller, Lisa Amini
There is an active research thread in AI, AutoAI, that aims to develop systems for automating the DS/ML lifecycle end-to-end.
no code implementations • 15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
no code implementations • EMNLP 2020 • Kshitij Fadnis, Nathaniel Mills, Jatin Ganhotra, Haggai Roitman, Gaurav Pandey, Doron Cohen, Yosi Mass, Shai Erera, Chulaka Gunasekara, Danish Contractor, Siva Patel, Q. Vera Liao, Sachindra Joshi, Luis Lastras, David Konopnicki
Customer support agents play a crucial role as an interface between an organization and its end-users.
no code implementations • 6 Sep 2020 • Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller
The similarity score between feature rankings provided by the annotator and the local model explanation is used to assign a weight to each corresponding committee model.
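A minimal sketch of that weighting step follows, using Spearman rank correlation as the similarity score between the annotator's feature ranking and the ranking derived from each model's local explanation; the specific similarity measure, clipping, and normalization here are assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the weighting idea (not the paper's exact formulation):
# weight each committee model by how closely the feature ranking from its local
# explanation matches the feature ranking provided by the annotator.
from scipy.stats import spearmanr

features = ["age", "income", "education", "hours_per_week"]
annotator_ranking = [1, 3, 2, 4]  # annotator's importance ranks for the features

# Feature rankings derived from each committee model's local explanation (toy values).
model_rankings = {
    "model_a": [1, 2, 3, 4],
    "model_b": [4, 3, 2, 1],
    "model_c": [2, 3, 1, 4],
}

weights = {}
for name, ranking in model_rankings.items():
    rho, _ = spearmanr(annotator_ranking, ranking)
    weights[name] = max(rho, 0.0)  # assumed: clip negative correlations to zero

total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}  # normalize to sum to 1
print(weights)
```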
no code implementations • 4 Apr 2020 • Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller
Social biases based on gender, race, and other attributes can find their way into machine learning systems through the data used to train and label them.
no code implementations • 24 Jan 2020 • Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller
We conducted an empirical study comparing model learning outcomes, feedback content, and user experience with XAL to those of traditional AL and coactive learning (providing the model's prediction without the explanation).
no code implementations • 8 Jan 2020 • Q. Vera Liao, Daniel Gruen, Sarah Miller
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
no code implementations • 7 Jan 2020 • Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy
In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on domain knowledge that complements the model's to ensure task success.
no code implementations • 13 Dec 2019 • Q. Vera Liao, Michael Muller
Two general routes have been followed to develop artificial agents that are sensitive to human values: a top-down approach to encode values into the agents, and a bottom-up approach to learn from human actions, whether from real-world interactions or stories.
no code implementations • NeurIPS Workshop on Document Intelligence 2019 • Song Feng, Kshitij Fadnis, Q. Vera Liao, Luis A. Lastras
We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing.
2 code implementations • 6 Sep 2019 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability.
no code implementations • 25 May 2019 • Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Changyan Chi, Wenxi Chen, Huahai Yang
The rise of increasingly more powerful chatbots offers a new way to collect information through conversational surveys, where a chatbot asks open-ended questions, interprets a user's free-text responses, and probes answers whenever needed.
no code implementations • 14 Dec 2018 • Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor
We report on a user study that shows positive user feedback for this new approach to build conversational agents, and demonstrates the effectiveness of using data programming for auto-labeling.
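As a generic illustration of the data-programming idea referenced above (not the production system described in the paper), the sketch below defines a few weak labeling functions over user utterances and combines their votes to auto-label training examples.

```python
# Minimal sketch of data programming for auto-labeling intents (generic illustration,
# not the system described in the paper): weak labeling functions vote on each example.
from collections import Counter

ABSTAIN = None

def lf_reset_password(text):
    return "reset_password" if "password" in text.lower() else ABSTAIN

def lf_billing(text):
    return "billing" if any(w in text.lower() for w in ("invoice", "charge", "bill")) else ABSTAIN

def lf_greeting(text):
    return "greeting" if text.lower().startswith(("hi", "hello")) else ABSTAIN

LABELING_FUNCTIONS = [lf_reset_password, lf_billing, lf_greeting]

def auto_label(text):
    """Majority vote over non-abstaining labeling functions; None if all abstain."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else None

utterances = [
    "I was charged twice on my last invoice.",
    "Hello, I can't remember my password.",
]
for u in utterances:
    print(u, "->", auto_label(u))
```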
no code implementations • 4 Aug 2017 • Q. Vera Liao, Biplav Srivastava, Pavan Kapanipathi
Dialog is a natural modality for interaction between customers and businesses in the service industry.