no code implementations • 5 Oct 2024 • Zana Buçinca, Siddharth Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos
People's decision-making abilities often fail to improve, or may even erode, when they rely on AI for decision support, even when the AI provides informative explanations.
no code implementations • 24 Jun 2024 • Tianhao Wang, Zana Buçinca, Zilin Ma
Numerous approaches have been recently proposed for learning fair representations that mitigate unfair outcomes in prediction tasks.
no code implementations • 9 Mar 2024 • Zana Buçinca, Siddharth Swaroop, Amanda E. Paluch, Susan A. Murphy, Krzysztof Z. Gajos
Across two experiments (N=316 and N=964), our results demonstrated that people interacting with policies optimized for accuracy achieved significantly better accuracy -- and even human-AI complementarity -- compared to those interacting with any other type of AI support.
no code implementations • 12 Jun 2023 • Siddharth Swaroop, Zana Buçinca, Krzysztof Z. Gajos, Finale Doshi-Velez
The precise benefit can depend on both the user and task.
no code implementations • 16 May 2022 • Maurice Jakesch, Zana Buçinca, Saleema Amershi, Alexandra Olteanu
Compared to the US-representative sample, AI practitioners appear to consider responsible AI values less important and to emphasize a different set of values.
no code implementations • 19 Feb 2021 • Zana Buçinca, Maja Barbara Malaya, Krzysztof Z. Gajos
To audit our work for intervention-generated inequalities, we investigated whether our interventions equally benefited people with different levels of Need for Cognition (i.e., motivation to engage in effortful mental activities).
no code implementations • 22 Jan 2020 • Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman
The results of our experiments demonstrate that evaluations with proxy tasks did not predict the results of evaluations with the actual decision-making tasks.