no code implementations • 13 Dec 2018 • Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, Hanna Wallach
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention.
no code implementations • 2 Apr 2021 • Kenneth Holstein, Vincent Aleven
Recent work has explored how complementary strengths of humans and artificial intelligence (AI) systems might be productively combined.
no code implementations • 27 Apr 2021 • Kenneth Holstein, Shayan Doroudi
The development of educational AI (AIEd) systems has often been motivated by their potential to promote educational equity and reduce achievement gaps across different groups of learners -- for example, by scaling up the benefits of one-on-one human tutoring to a broader audience, or by filling gaps in existing educational services.
no code implementations • 4 Nov 2021 • Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer
Explainable AI (XAI) is a promising means of supporting human-AI collaboration on high-stakes visual detection tasks, such as detecting damage in satellite imagery, since fully automated approaches are unlikely to be perfectly safe and reliable.
no code implementations • 5 Apr 2022 • Anna Kawakami, Venkatesh Sivaraman, Hao-Fei Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, Kenneth Holstein
AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts.
no code implementations • 22 Apr 2022 • Charvi Rastogi, Liu Leqi, Kenneth Holstein, Hoda Heidari
To illustrate how our taxonomy can be used to investigate complementarity, we provide a mathematical aggregation framework to examine enabling conditions for complementarity.
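The enabling condition this entry alludes to can be made concrete with a toy sketch. Below is a minimal, hypothetical illustration (not the paper's actual aggregation framework): two agents with equal solo accuracy whose errors fall on *different* instances, so an oracle router that picks the correct agent per instance outperforms either alone. All data and error patterns here are invented for illustration.

```python
import numpy as np

# Hypothetical toy data: binary labels and two predictors whose
# errors land on disjoint subsets of instances (complementary strengths).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)

human = y.copy()
ai = y.copy()
human[:200] = 1 - human[:200]   # human errs on instances 0..199
ai[400:600] = 1 - ai[400:600]   # AI errs on instances 400..599

def acc(pred):
    return float((pred == y).mean())

# Upper bound over instance-level routing policies: an oracle router
# that defers to whichever agent is correct on each instance.
oracle = np.where(human == y, human, ai)

best_solo = max(acc(human), acc(ai))
print(acc(human), acc(ai), acc(oracle))
```

Because the two error sets are disjoint, the oracle reaches perfect accuracy while each solo agent sits at 0.8; if one agent's errors were a subset of the other's, no aggregation rule could beat the better agent, which is one simple enabling condition for complementarity.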
no code implementations • 13 May 2022 • Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu
Recent years have seen the development of many open-source ML fairness toolkits aimed at helping ML practitioners assess and address unfairness in their systems.
no code implementations • 18 May 2022 • Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu
In this work, we conducted a set of seven design workshops with 35 stakeholders who have been impacted by the child welfare system or who work in it to understand their beliefs and concerns around PRMs, and to engage them in imagining new uses of data and technologies in the child welfare system.
no code implementations • 6 Jul 2022 • Frederic Gmeiner, Kenneth Holstein, Nikolas Martelaro
Here we reframe human-AI collaboration as a learning problem: inspired by research on team learning, we hypothesize that learning strategies effective in human-human teams might also improve the effectiveness and quality of collaboration between humans and co-creative generative systems.
no code implementations • 28 Jul 2022 • Kenneth Holstein, Maria De-Arteaga, Lakshmi Tumati, Yanghuidi Cheng
Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance.
no code implementations • 7 Oct 2022 • Wesley Hanwen Deng, Bill Boyuan Guo, Alicia DeVrio, Hong Shen, Motahhare Eslami, Kenneth Holstein
Recent years have seen growing interest among both researchers and practitioners in user-engaged approaches to algorithm auditing, which directly engage users in detecting problematic behaviors in algorithmic systems.
no code implementations • 13 Feb 2023 • Luke Guerdan, Amanda Coston, Zhiwei Steven Wu, Kenneth Holstein
In this paper, we identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
1 code implementation • 22 Feb 2023 • Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu
We also develop a method for estimating treatment-dependent measurement error parameters when these are unknown in advance.
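The paper's estimator is not reproduced here, but the general idea of correcting an observed rate for known misclassification can be sketched with the classic Rogan-Gladen correction, applied with error parameters that differ by treatment arm. The arm-specific rates and sensitivities/specificities below are invented for illustration.

```python
import numpy as np

def corrected_rate(p_obs, sensitivity, specificity):
    """Rogan-Gladen correction: recover the true positive rate from an
    observed proxy-label rate under known misclassification parameters."""
    return (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)

rng = np.random.default_rng(1)
true_rate = {0: 0.30, 1: 0.50}         # true outcome rate per treatment arm
err = {0: (0.90, 0.95), 1: (0.80, 0.90)}  # (sensitivity, specificity) per arm

for t in (0, 1):
    y = rng.random(200_000) < true_rate[t]
    se, sp = err[t]
    # Proxy label flips with treatment-dependent error rates.
    proxy = np.where(y, rng.random(y.size) < se, rng.random(y.size) > sp)
    print(t, proxy.mean(), corrected_rate(proxy.mean(), se, sp))
```

Note that the correction is exact in expectation: with true rate 0.30, sensitivity 0.90, and specificity 0.95, the expected observed rate is 0.30 * 0.90 + 0.70 * 0.05 = 0.305, and inverting recovers 0.30. The harder problem the paper addresses is estimating these error parameters when they are not known in advance.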
no code implementations • 1 Mar 2023 • Frederic Gmeiner, Humphrey Yang, Lining Yao, Kenneth Holstein, Nikolas Martelaro
AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks.
no code implementations • 17 Mar 2023 • Tzu-Sheng Kuo, Hong Shen, Jisoo Geum, Nev Jones, Jason I. Hong, Haiyi Zhu, Kenneth Holstein
Our findings demonstrate that stakeholders, even those without AI knowledge, can provide specific and critical feedback on an AI system's design and deployment if empowered to do so.
no code implementations • 26 Mar 2023 • Anna Kawakami, Amanda Coston, Haiyi Zhu, Hoda Heidari, Kenneth Holstein
AI-based decision-making tools are rapidly spreading across a range of real-world, complex domains like healthcare, criminal justice, and child welfare.
no code implementations • 30 Aug 2023 • Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Matthew Lee, Scott Carter, Nikos Arechiga, Kate Glazko, Haiyi Zhu, Kenneth Holstein
A growing body of research has explored how to support humans in making better use of AI-based decision support, including via training and onboarding.
1 code implementation • 21 Feb 2024 • Tzu-Sheng Kuo, Aaron Halfaker, Zirui Cheng, Jiwoo Kim, Meng-Hsin Wu, Tongshuang Wu, Kenneth Holstein, Haiyi Zhu
AI tools are increasingly deployed in community contexts.
no code implementations • 1 Apr 2024 • Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu
However, it is challenging to compare predictive performance against an existing decision-making policy that is generally under-specified and dependent on unobservable factors.