Search Results for author: Kenneth Holstein

Found 19 papers, 2 papers with code

Predictive Performance Comparison of Decision Policies Under Confounding

no code implementations • 1 Apr 2024 • Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu

However, it is challenging to compare predictive performance against an existing decision-making policy that is generally under-specified and dependent on unobservable factors.

Causal Inference • Decision Making • +1

Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge

no code implementations • 30 Aug 2023 • Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Matthew Lee, Scott Carter, Nikos Arechiga, Kate Glazko, Haiyi Zhu, Kenneth Holstein

A growing body of research has explored how to support humans in making better use of AI-based decision support, including via training and onboarding.

Decision Making

Recentering Validity Considerations through Early-Stage Deliberations Around AI and Policy Design

no code implementations • 26 Mar 2023 • Anna Kawakami, Amanda Coston, Haiyi Zhu, Hoda Heidari, Kenneth Holstein

AI-based decision-making tools are rapidly spreading across a range of real-world, complex domains like healthcare, criminal justice, and child welfare.

Decision Making • Position

Understanding Frontline Workers' and Unhoused Individuals' Perspectives on AI Used in Homeless Services

no code implementations • 17 Mar 2023 • Tzu-Sheng Kuo, Hong Shen, Jisoo Geum, Nev Jones, Jason I. Hong, Haiyi Zhu, Kenneth Holstein

Our findings demonstrate that stakeholders, even without AI knowledge, can provide specific and critical feedback on an AI system's design and deployment if empowered to do so.

Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools

no code implementations • 1 Mar 2023 • Frederic Gmeiner, Humphrey Yang, Lining Yao, Kenneth Holstein, Nikolas Martelaro

AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks.

Counterfactual Prediction Under Outcome Measurement Error

1 code implementation • 22 Feb 2023 • Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu

We also develop a method for estimating treatment-dependent measurement error parameters when these are unknown in advance.

counterfactual • Decision Making • +1

Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making

no code implementations • 13 Feb 2023 • Luke Guerdan, Amanda Coston, Zhiwei Steven Wu, Kenneth Holstein

In this paper, we identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.

Decision Making

Understanding Practices, Challenges, and Opportunities for User-Engaged Algorithm Auditing in Industry Practice

no code implementations • 7 Oct 2022 • Wesley Hanwen Deng, Bill Boyuan Guo, Alicia DeVrio, Hong Shen, Motahhare Eslami, Kenneth Holstein

Recent years have seen growing interest among both researchers and practitioners in user-engaged approaches to algorithm auditing, which directly engage users in detecting problematic behaviors in algorithmic systems.

Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables

no code implementations • 28 Jul 2022 • Kenneth Holstein, Maria De-Arteaga, Lakshmi Tumati, Yanghuidi Cheng

Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance.

Team Learning as a Lens for Designing Human-AI Co-Creative Systems

no code implementations • 6 Jul 2022 • Frederic Gmeiner, Kenneth Holstein, Nikolas Martelaro

Here we reframe human-AI collaboration as a learning problem: inspired by research on team learning, we hypothesize that learning strategies known to help human-human teams might also improve the effectiveness and quality of collaboration between humans and co-creative generative systems.

Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders

no code implementations • 18 May 2022 • Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu

In this work, we conducted seven design workshops with 35 stakeholders who have been impacted by the child welfare system or who work in it, aiming to understand their beliefs and concerns around PRMs and to engage them in imagining new uses of data and technologies in the child welfare system.

Decision Making

Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits

no code implementations • 13 May 2022 • Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu

Recent years have seen the development of many open-source ML fairness toolkits aimed at helping ML practitioners assess and address unfairness in their systems.

BIG-bench Machine Learning • Fairness

A Taxonomy of Human and ML Strengths in Decision-Making to Investigate Human-ML Complementarity

no code implementations • 22 Apr 2022 • Charvi Rastogi, Liu Leqi, Kenneth Holstein, Hoda Heidari

To illustrate how our taxonomy can be used to investigate complementarity, we provide a mathematical aggregation framework to examine enabling conditions for complementarity.

Decision Making

Characterizing Human Explanation Strategies to Inform the Design of Explainable AI for Building Damage Assessment

no code implementations • 4 Nov 2021 • Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer

Explainable AI (XAI) is a promising means of supporting human-AI collaboration on high-stakes visual detection tasks, such as damage detection from satellite imagery, as fully automated approaches are unlikely to be perfectly safe and reliable.

Explainable Artificial Intelligence (XAI)

Equity and Artificial Intelligence in Education: Will "AIEd" Amplify or Alleviate Inequities in Education?

no code implementations • 27 Apr 2021 • Kenneth Holstein, Shayan Doroudi

The development of educational AI (AIEd) systems has often been motivated by their potential to promote educational equity and reduce achievement gaps across different groups of learners -- for example, by scaling up the benefits of one-on-one human tutoring to a broader audience, or by filling gaps in existing educational services.

Designing for human-AI complementarity in K-12 education

no code implementations • 2 Apr 2021 • Kenneth Holstein, Vincent Aleven

Recent work has explored how complementary strengths of humans and artificial intelligence (AI) systems might be productively combined.

Decision Making

Improving fairness in machine learning systems: What do industry practitioners need?

no code implementations • 13 Dec 2018 • Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, Hanna Wallach

The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention.

BIG-bench Machine Learning • Fairness
