Search Results for author: Haiyi Zhu

Found 16 papers, 1 paper with code

Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation

no code implementations22 Oct 2020 Hong Shen, Wesley Hanwen Deng, Aditi Chattopadhyay, Zhiwei Steven Wu, Xu Wang, Haiyi Zhu

In this paper, we present Value Cards, an educational toolkit to inform students and practitioners of the social impacts of different machine learning models via deliberation.

BIG-bench Machine Learning, Ethics, +1

"Brilliant AI Doctor" in Rural China: Tensions and Challenges in AI-Powered CDSS Deployment

no code implementations4 Jan 2021 Dakuo Wang, Liuping Wang, Zhan Zhang, Ding Wang, Haiyi Zhu, Yvonne Gao, Xiangmin Fan, Feng Tian

Artificial intelligence (AI) technology has been increasingly used in the implementation of advanced Clinical Decision Support Systems (CDSS).

Decision Making

A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms

no code implementations21 Apr 2022 Nil-Jana Akpinar, Manish Nagireddy, Logan Stapleton, Hao-Fei Cheng, Haiyi Zhu, Steven Wu, Hoda Heidari

This stylized setup offers the distinct capability of testing fairness interventions beyond observational data and against an unbiased benchmark.

Fairness
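
To make the idea concrete, here is a minimal sketch of this kind of stress test (not the authors' sandbox tool): synthetic ground-truth data is generated, a controlled label bias is injected for one group, and a naive classifier is compared against a toy per-group-threshold intervention on the unbiased benchmark. The data-generating process, the bias mechanism, and the intervention are all illustrative assumptions.

```python
# Minimal bias(stress)-test sketch on synthetic data -- NOT the paper's sandbox tool.
# Assumptions: an unbiased synthetic ground truth, a label-flip bias for group B in the
# observed data, and per-group thresholds standing in for a "fairness intervention".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Unbiased ground truth: the outcome depends on a skill feature, not on group membership.
group = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)
y_true = (skill + rng.normal(0.0, 0.5, n) > 0).astype(int)

# Inject bias: flip 30% of group B's positive labels in the *observed* training data.
y_obs = y_true.copy()
flip = (group == 1) & (y_true == 1) & (rng.random(n) < 0.3)
y_obs[flip] = 0

X = np.column_stack([skill, group])
scores = LogisticRegression().fit(X, y_obs).predict_proba(X)[:, 1]

# Naive policy: one global threshold on the biased model's scores.
naive = (scores > 0.5).astype(int)

# Toy intervention: per-group thresholds chosen so each group is selected at the same rate.
cut = 1.0 - naive.mean()
thresholds = {g: np.quantile(scores[group == g], cut) for g in (0, 1)}
intervened = np.array([s > thresholds[g] for s, g in zip(scores, group)]).astype(int)

# Evaluate both policies against the unbiased benchmark, not the biased observations.
for name, pred in [("naive", naive), ("intervention", intervened)]:
    acc = (pred == y_true).mean()
    gap = pred[group == 0].mean() - pred[group == 1].mean()   # selection-rate gap A - B
    print(f"{name:12s} accuracy={acc:.3f}  selection-rate gap={gap:+.3f}")
```

Because the ground truth here is synthetic and unbiased, the intervention's effect can be measured directly rather than only against the biased observations, which is the distinct capability the abstract describes.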

Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits

no code implementations13 May 2022 Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu

Recent years have seen the development of many open-source ML fairness toolkits aimed at helping ML practitioners assess and address unfairness in their systems.

BIG-bench Machine Learning, Fairness
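
As a hedged illustration of the kind of assessment such toolkits support, the sketch below uses the open-source Fairlearn library to disaggregate accuracy and selection rate by group and to compute a demographic-parity gap on toy data. The dataset, model, and metric choices are assumptions made for the example; the paper itself does not prescribe any particular toolkit or workflow.

```python
# Illustrative use of an open-source fairness toolkit (Fairlearn) on toy data.
# The dataset, model, and metric choices are assumptions for this example only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, n)                        # toy binary sensitive attribute
X = rng.normal(size=(n, 3)) + sensitive[:, None] * 0.5   # features correlated with the attribute
y = (X[:, 0] + rng.normal(0.0, 1.0, n) > 0).astype(int)

y_pred = LogisticRegression().fit(X, y).predict(X)

# Disaggregated view: how does the model behave within each group?
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(frame.by_group)

# Single-number summary: difference in selection rates between the groups.
dpd = demographic_parity_difference(y, y_pred, sensitive_features=sensitive)
print(f"demographic parity difference: {dpd:.3f}")
```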

Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders

no code implementations18 May 2022 Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu

In this work, we conducted a set of seven design workshops with 35 stakeholders who have been impacted by the child welfare system or who work in it, to understand their beliefs and concerns around predictive risk models (PRMs) and to engage them in imagining new uses of data and technologies in the child welfare system.

Decision Making

A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms

no code implementations30 Jun 2022 Amanda Coston, Anna Kawakami, Haiyi Zhu, Ken Holstein, Hoda Heidari

Recent research increasingly calls into question the appropriateness of using predictive tools in complex, real-world tasks.

Decision Making

Understanding Frontline Workers' and Unhoused Individuals' Perspectives on AI Used in Homeless Services

no code implementations17 Mar 2023 Tzu-Sheng Kuo, Hong Shen, Jisoo Geum, Nev Jones, Jason I. Hong, Haiyi Zhu, Kenneth Holstein

Our findings demonstrate that stakeholders, even without AI knowledge, can provide specific and critical feedback on an AI system's design and deployment, if empowered to do so.

Agent-based Simulation for Online Mental Health Matching

no code implementations20 Mar 2023 YuHan Liu, Anna Fang, Glen Moriarty, Robert Kraut, Haiyi Zhu

Online mental health communities (OMHCs) are an effective and accessible channel for individuals with mental and emotional issues to give and receive social support.
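
The toy simulation below sketches the general shape of agent-based matching in such a community: seeker and supporter agents arrive over time, a greedy policy pairs them by topic, and waiting time is the outcome of interest. The arrival rates, topics, and policy are invented for illustration and are not the paper's model.

```python
# Toy agent-based matching simulation -- an illustration of the general idea,
# not the paper's model. Arrival rates, topics, and the greedy policy are assumptions.
import random
from collections import deque

random.seed(0)
TOPICS = ["anxiety", "depression", "relationships"]

def simulate(steps=1000, seeker_rate=0.6, supporter_rate=0.5):
    waiting = {t: deque() for t in TOPICS}   # seekers queued per topic, by arrival step
    idle = {t: 0 for t in TOPICS}            # available supporters per topic
    waits = []
    for step in range(steps):
        if random.random() < seeker_rate:
            waiting[random.choice(TOPICS)].append(step)
        if random.random() < supporter_rate:
            idle[random.choice(TOPICS)] += 1
        # Greedy policy: match any queued seeker with an idle supporter on the same topic.
        for topic in TOPICS:
            while waiting[topic] and idle[topic] > 0:
                arrived = waiting[topic].popleft()
                idle[topic] -= 1
                waits.append(step - arrived)
    unmatched = sum(len(q) for q in waiting.values())
    avg_wait = sum(waits) / len(waits) if waits else float("inf")
    return avg_wait, unmatched

avg_wait, unmatched = simulate()
print(f"average wait: {avg_wait:.1f} steps, unmatched seekers at end: {unmatched}")
```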

Recentering Validity Considerations through Early-Stage Deliberations Around AI and Policy Design

no code implementations26 Mar 2023 Anna Kawakami, Amanda Coston, Haiyi Zhu, Hoda Heidari, Kenneth Holstein

AI-based decision-making tools are rapidly spreading across a range of real-world, complex domains like healthcare, criminal justice, and child welfare.

Decision Making, Position

Seeing Seeds Beyond Weeds: Green Teaming Generative AI for Beneficial Uses

no code implementations30 May 2023 Logan Stapleton, Jordan Taylor, Sarah Fox, Tongshuang Wu, Haiyi Zhu

Finally, we discuss how our use cases demonstrate green teaming as both a practical design method and a mode of critique, which problematizes and subverts current understandings of harms and values in generative AI.

LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs

no code implementations19 Jul 2023 Tongshuang Wu, Haiyi Zhu, Maya Albayrak, Alexis Axon, Amanda Bertsch, Wenxing Deng, Ziqi Ding, Bill Guo, Sireesh Gururaja, Tzu-Sheng Kuo, Jenny T. Liang, Ryan Liu, Ihita Mandal, Jeremiah Milbauer, Xiaolin Ni, Namrata Padmanabhan, Subhashini Ramkumar, Alexis Sudjianto, Jordan Taylor, Ying-Jui Tseng, Patricia Vaidos, Zhijin Wu, Wei Wu, Chenyang Yang

We reflect on humans' and LLMs' different sensitivities to instructions, stress the importance of enabling human-facing safeguards for LLMs, and discuss the potential of training humans and LLMs with complementary skill sets.
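
A minimal sketch of what "LLMs as workers" can look like in practice is shown below: a draft-then-verify microtask pipeline in which each step a crowd worker would normally perform is routed to a model call. The llm() function is a trivial mock standing in for any real LLM API, and the prompts and aggregation rule are illustrative assumptions rather than the paper's pipelines.

```python
# Crowdsourcing-style pipeline with an LLM standing in for the crowd workers.
# llm() is a trivial mock so the sketch runs end to end; swap in a real model call.
# The prompts and the draft -> verify -> aggregate structure are illustrative assumptions.
from collections import Counter

def llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (mocked here with canned behaviour)."""
    if prompt.startswith("Summarize"):
        passage = prompt.split("\n", 1)[1]
        return passage.split(".")[0] + "."          # "summary" = first sentence of the passage
    return "yes"                                    # the mock verifier always approves

def worker_draft(item: str) -> str:
    # Microtask 1: a "worker" proposes a one-sentence summary.
    return llm(f"Summarize the following passage in one sentence:\n{item}")

def worker_verify(item: str, draft: str) -> str:
    # Microtask 2: an independent "worker" votes on whether the draft is faithful.
    return llm(
        f"Passage:\n{item}\n\nProposed summary:\n{draft}\n"
        "Answer 'yes' if the summary is faithful to the passage, otherwise 'no'."
    )

def pipeline(item: str, n_verifiers: int = 3) -> str | None:
    # Aggregate the way a crowdsourcing pipeline would: keep the draft on majority approval.
    draft = worker_draft(item)
    votes = Counter(worker_verify(item, draft).strip().lower() for _ in range(n_verifiers))
    return draft if votes.get("yes", 0) > n_verifiers / 2 else None

print(pipeline("Online communities provide peer support. Their norms are shaped by moderation."))
```

Swapping the mock for a real model call keeps the pipeline structure intact, which is the sense in which LLM "workers" can slot into existing human-computation designs.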

Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge

no code implementations30 Aug 2023 Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Matthew Lee, Scott Carter, Nikos Arechiga, Kate Glazko, Haiyi Zhu, Kenneth Holstein

A growing body of research has explored how to support humans in making better use of AI-based decision support, including via training and onboarding.

Decision Making
