no code implementations • 12 May 2014 • Chien-Ju Ho, Aleksandrs Slivkins, Jennifer Wortman Vaughan
In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks.
no code implementations • 30 Jul 2014 • Miroslav Dudík, Rafael Frongillo, Jennifer Wortman Vaughan
We study information elicitation in cost-function-based combinatorial prediction markets when the market maker's utility for information decreases over time.
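For context, cost-function-based market makers price every trade with a convex potential function. A minimal sketch of the logarithmic market scoring rule (LMSR), the textbook instance of this family (the liquidity parameter b and the specific rule are illustrative assumptions, not necessarily the construction studied in the paper):

```python
import numpy as np

def lmsr_cost(q, b=100.0):
    """LMSR cost potential C(q) = b * log(sum_i exp(q_i / b))."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices are the gradient of C; they sum to 1."""
    z = np.exp(np.asarray(q) / b)
    return z / z.sum()

def trade_cost(q, delta, b=100.0):
    """A trader buying the share bundle `delta` pays C(q + delta) - C(q)."""
    return lmsr_cost(np.asarray(q) + np.asarray(delta), b) - lmsr_cost(q, b)

# Example: two-outcome market at uniform prices; buy 10 shares of outcome 0.
q = [0.0, 0.0]
print(lmsr_prices(q))           # [0.5, 0.5]
print(trade_cost(q, [10, 0]))   # slightly more than 10 * 0.5
```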
no code implementations • 24 Feb 2016 • Rachel Cummings, David M. Pennock, Jennifer Wortman Vaughan
We consider the design of private prediction markets, financial markets designed to elicit predictions about uncertain events without revealing too much information about market participants' actions or beliefs.
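The baseline privacy notion here is standard differential privacy: a mechanism M is epsilon-differentially private if, for all neighboring datasets D and D' (differing in one participant's data) and all output sets S,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
```

Adapting this guarantee to a market setting, where trades are sequential and publicly priced, may require relaxations or variants of this baseline definition.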
no code implementations • 5 Nov 2016 • Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan
We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle.
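The classic template for oracle-based online learning is follow-the-perturbed-leader (FTPL): each round, perturb the cumulative losses and hand them to the offline oracle. A minimal sketch for a finite action set, illustrating only the oracle-in-the-loop pattern rather than the paper's more general oracle-efficient construction:

```python
import numpy as np

def ftpl(losses, eta=1.0, seed=0):
    """Follow-the-perturbed-leader over T rounds and K actions.

    `losses` is a (T, K) array; the 'offline oracle' here is simply an
    argmin over perturbed cumulative losses.
    """
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    cum = np.zeros(K)
    total = 0.0
    for t in range(T):
        noise = rng.exponential(scale=1.0 / eta, size=K)  # fresh perturbation
        a = int(np.argmin(cum - noise))                   # oracle call
        total += losses[t, a]
        cum += losses[t]
    return total
```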
no code implementations • ACL 2017 • Jennifer Wortman Vaughan
Over the last decade, crowdsourcing has been used to harness the power of human computation for tasks that are notoriously difficult for computers alone, such as determining whether or not an image contains a tree, rating the relevance of a website, or verifying the phone number of a business.
1 code implementation • 21 Feb 2018 • Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna Wallach
With machine learning models being increasingly used to aid decision making even in high-stakes domains, there has been a growing interest in developing interpretable models.
21 code implementations • 23 Mar 2018 • Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford
The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains.
no code implementations • 1 Jun 2018 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm.
no code implementations • 14 Jun 2018 • Rediet Abebe, Shawndra Hill, Jennifer Wortman Vaughan, Peter M. Small, H. Andrew Schwartz
The lack of comprehensive, high-quality health data in developing nations creates a roadblock for combating the impacts of disease.
no code implementations • 27 Aug 2018 • Lily Hu, Nicole Immorlica, Jennifer Wortman Vaughan
When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval.
no code implementations • 13 Dec 2018 • Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, Hanna Wallach
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention.
no code implementations • 4 Jul 2019 • Anhong Guo, Ece Kamar, Jennifer Wortman Vaughan, Hanna Wallach, Meredith Ringel Morris
AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD).
1 code implementation • 29 Oct 2019 • David Alvarez-Melis, Hal Daumé III, Jennifer Wortman Vaughan, Hanna Wallach
Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods.
1 code implementation • ICML 2020 • Rupert Freeman, David M. Pennock, Chara Podimata, Jennifer Wortman Vaughan
First, we want the learning algorithm to be no-regret with respect to the best fixed expert in hindsight.
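The no-regret benchmark in this first desideratum is classically met by multiplicative-weights (Hedge) style updates. A minimal sketch of Hedge, illustrating only the no-regret piece and not the paper's full mechanism:

```python
import numpy as np

def hedge(losses, eta=0.1):
    """Multiplicative weights (Hedge) over T rounds and K experts.

    Returns expected cumulative loss; with a suitable eta, regret
    against the best fixed expert grows as O(sqrt(T log K)).
    """
    T, K = losses.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                  # play experts proportionally
        total += p @ losses[t]
        w *= np.exp(-eta * losses[t])    # downweight poorly performing experts
    return total
```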
no code implementations • 19 May 2020 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users in order to gain information that will lead to better decisions in the future.
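The tension is easiest to see in a toy bandit. Epsilon-greedy below is an illustrative stand-in, not the specific algorithm the paper analyzes: the exploration steps deliberately sacrifice the current user's experience to sharpen estimates for future decisions.

```python
import numpy as np

def epsilon_greedy(true_means, T=10_000, eps=0.05, seed=0):
    """Toy K-armed bandit: explore with probability eps, else exploit
    the empirically best arm."""
    rng = np.random.default_rng(seed)
    K = len(true_means)
    counts = np.zeros(K)
    means = np.zeros(K)
    reward = 0.0
    for _ in range(T):
        if rng.random() < eps:
            a = int(rng.integers(K))        # explore: random arm
        else:
            a = int(np.argmax(means))       # exploit: current best estimate
        r = rng.normal(true_means[a], 1.0)
        reward += r
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running average update
    return reward

print(epsilon_greedy([0.1, 0.5, 0.9]))
```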
no code implementations • 10 Mar 2021 • Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, Duncan Wadsworth, Hanna Wallach
Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple.
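Conceptually simple indeed: the core computation is just per-group metrics. A minimal sketch (column names are placeholders), with group sizes reported so small-sample estimates can be flagged:

```python
import pandas as pd

def disaggregated_accuracy(df, group_col, label_col="label", pred_col="pred"):
    """Accuracy reported separately for each group, plus group sizes."""
    correct = (df[label_col] == df[pred_col])
    return (
        df.assign(correct=correct)
          .groupby(group_col)["correct"]
          .agg(accuracy="mean", n="size")
    )
```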
1 code implementation • 27 Apr 2021 • David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé III, Hanna Wallach, Jennifer Wortman Vaughan
We take inspiration from the study of human explanation to inform the design and evaluation of interpretability methods in machine learning.
1 code implementation • 6 Dec 2021 • Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana
Recent strides in interpretable machine learning (ML) research reveal that models exploit undesirable patterns in the data to make predictions, which can cause harm in deployment.
no code implementations • 10 Dec 2021 • Michael Madaio, Lisa Egede, Hariharan Subramonyam, Jennifer Wortman Vaughan, Hanna Wallach
Various tools and practices have been developed to support practitioners in identifying, assessing, and mitigating fairness-related harms caused by AI systems.
1 code implementation • 5 May 2022 • Jessie J. Smith, Saleema Amershi, Solon Barocas, Hanna Wallach, Jennifer Wortman Vaughan
Transparency around limitations can improve the scientific rigor of research, help ensure appropriate interpretation of research findings, and make research claims more credible.
no code implementations • 6 Jun 2022 • Amy K. Heger, Liz B. Marquis, Mihaela Vorvoreanu, Hanna Wallach, Jennifer Wortman Vaughan
Although data documentation frameworks are often motivated from the perspective of responsible AI, participants did not connect the questions they were asked to answer with their responsible AI implications.
2 code implementations • 30 Jun 2022 • Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana
Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed.
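One concrete way to surface such patterns is to fit a glass-box generalized additive model and read off its learned shape functions. A sketch using the open-source `interpret` package (an assumed tool chosen for illustration; the synthetic data is a placeholder):

```python
from sklearn.datasets import make_classification
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Placeholder data; substitute your own tabular features and labels.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Each feature's shape function can be inspected directly; an implausible
# spike or dip is the kind of undesirable pattern an editing tool targets.
show(ebm.explain_global())
```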
no code implementations • 4 Aug 2022 • Neha Hulkund, Nicolo Fusi, Jennifer Wortman Vaughan, David Alvarez-Melis
We propose a method to identify and characterize distribution shifts in classification datasets based on optimal transport.
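As a rough feel for the underlying quantity, here is a per-feature 1-D Wasserstein comparison between two splits. This is a simplification for illustration; the paper's optimal-transport method is more general than this feature-by-feature view.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def per_feature_shift(X_ref, X_new):
    """1-D Wasserstein distance per feature between a reference split
    and a new split; large values flag candidate shifted features."""
    return np.array([
        wasserstein_distance(X_ref[:, j], X_new[:, j])
        for j in range(X_ref.shape[1])
    ])

rng = np.random.default_rng(0)
X_ref = rng.normal(0.0, 1.0, size=(1000, 3))
X_new = rng.normal([0.0, 0.5, 0.0], 1.0, size=(1000, 3))  # shift feature 1
print(per_feature_shift(X_ref, X_new).round(2))
```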
no code implementations • 22 Nov 2022 • Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah
In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we survey the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception about their own papers after seeing the reviews.
no code implementations • 18 Jan 2023 • Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal
AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong.
no code implementations • 14 Feb 2023 • Helena Vasconcelos, Gagan Bansal, Adam Fourney, Q. Vera Liao, Jennifer Wortman Vaughan
Through a mixed-methods study with 30 programmers, we compare three conditions: providing the AI system's code completion alone, highlighting tokens with the lowest likelihood of being generated by the underlying generative model, and highlighting tokens with the highest predicted likelihood of being edited by a programmer.
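The first highlighting condition can be approximated with any causal language model's token log-probabilities. A sketch with Hugging Face `transformers` (gpt2 and the -5.0 cutoff are arbitrary stand-ins, not the model or threshold used in the study):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

completion = "def add(a, b):\n    return a + b"
ids = tok(completion, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# Log-probability the model assigned to each actual next token.
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
per_token = logprobs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]

for token, lp in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), per_token):
    flag = "  <-- highlight (low likelihood)" if lp < -5.0 else ""
    print(f"{token!r}: {lp.item():.2f}{flag}")
```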
no code implementations • 21 Feb 2023 • Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan
To address this problem, we bridge the literature on AI design and AI transparency to explore whether and how frameworks for transparent model reporting can support design ideation with pre-trained models.
1 code implementation • 27 Feb 2023 • Zijie J. Wang, Jennifer Wortman Vaughan, Rich Caruana, Duen Horng Chau
Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed.
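A toy version of recourse makes the role of changeable variables explicit: search only over features the developer has marked mutable. Everything below is a hypothetical illustration, not the paper's tool.

```python
def toy_recourse(predict, x, mutable_idx, steps=(0.1, 0.5, 1.0, 2.0)):
    """Brute-force search over developer-approved mutable features for a
    change that flips `predict`; immutable features are never touched."""
    base = predict(x)
    for s in steps:
        for i in mutable_idx:
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * s
                if predict(trial) != base:
                    return trial, (i, sign * s)
    return None  # no recourse found within the search budget
```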
no code implementations • 2 Jun 2023 • Q. Vera Liao, Jennifer Wortman Vaughan
It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the intersection of AI and human-computer interaction (HCI) highlight that we must do so with a human-centered perspective: Transparency is fundamentally about supporting appropriate human understanding, and this understanding is sought by different stakeholders with different goals in different contexts.
no code implementations • 5 Jun 2023 • Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan
We present the NeurIPS 2021 consistency experiment, a larger-scale variant of the 2014 NeurIPS experiment in which 10% of conference submissions were reviewed by two independent committees to quantify the randomness in the review process.
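The headline quantity such an experiment estimates is simple: among dual-reviewed papers, how often do the two committees reach different decisions? A sketch of that computation (the decision lists are placeholders):

```python
def disagreement_rate(decisions_a, decisions_b):
    """Fraction of dual-reviewed papers on which the two independent
    committees made different accept/reject decisions."""
    assert len(decisions_a) == len(decisions_b)
    pairs = zip(decisions_a, decisions_b)
    return sum(a != b for a, b in pairs) / len(decisions_a)

# e.g., disagreement_rate(["accept", "reject"], ["reject", "reject"]) == 0.5
```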
no code implementations • 11 Dec 2023 • Anthony Cintron Roman, Jennifer Wortman Vaughan, Valerie See, Steph Ballard, Jehu Torres, Caleb Robinson, Juan M. Lavista Ferres
This paper introduces a no-code, machine-readable documentation framework for open datasets, with a focus on responsible AI (RAI) considerations.
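To make "machine-readable" concrete, documentation of this kind can be serialized as structured metadata. The fields below are hypothetical illustrations only, not the framework's actual schema:

```python
import json

# Hypothetical fields only; the real framework defines its own schema.
dataset_doc = {
    "name": "example-open-dataset",
    "license": "CC-BY-4.0",
    "collection_process": "described so that reuse decisions are informed",
    "rai": {
        "sensitive_attributes": ["none identified"],
        "known_limitations": ["geographic coverage is uneven"],
        "intended_uses": ["research"],
    },
}
print(json.dumps(dataset_doc, indent=2))
```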
no code implementations • 1 May 2024 • Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan
However, there has been little empirical work examining how users perceive and act upon LLMs' expressions of uncertainty.