Search Results for author: Jennifer Wortman Vaughan

Found 31 papers, 9 papers with code

Open Datasheets: Machine-readable Documentation for Open Datasets and Responsible AI Assessments

no code implementations • 11 Dec 2023 • Anthony Cintron Roman, Jennifer Wortman Vaughan, Valerie See, Steph Ballard, Jehu Torres, Caleb Robinson, Juan M. Lavista Ferres

This paper introduces a no-code, machine-readable documentation framework for open datasets, with a focus on responsible AI (RAI) considerations.

Decision Making
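
Since the abstract above describes a machine-readable documentation framework, here is a minimal sketch of what such a datasheet could look like as structured data. The field names are illustrative assumptions, not the schema the paper defines.

```python
import json

# Hypothetical machine-readable datasheet; field names are illustrative,
# not the Open Datasheets schema itself.
datasheet = {
    "name": "example-open-dataset",
    "version": "1.0",
    "license": "CC-BY-4.0",
    "collection_process": "Scraped from public portals in 2023.",
    "intended_uses": ["research on regional health trends"],
    "rai_considerations": {
        "sensitive_attributes": ["age", "region"],
        "known_limitations": "Coverage is uneven across regions.",
    },
}

with open("datasheet.json", "w") as f:
    json.dump(datasheet, f, indent=2)
```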

Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment

no code implementations • 5 Jun 2023 • Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan

We present the NeurIPS 2021 consistency experiment, a larger-scale variant of the 2014 NeurIPS experiment in which 10% of conference submissions were reviewed by two independent committees to quantify the randomness in the review process.
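
The following is a toy simulation of the experimental design described above: two committees score the same papers with independent review noise and each accepts its top fraction, so we can see how much disagreement noise alone produces. All parameters are made up for illustration; none come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers, accept_rate = 10_000, 0.25  # illustrative values only

# Each committee sees the paper's latent quality plus independent review noise.
quality = rng.normal(size=n_papers)
score_a = quality + rng.normal(scale=1.0, size=n_papers)
score_b = quality + rng.normal(scale=1.0, size=n_papers)

# Each committee independently accepts its top `accept_rate` fraction.
cut_a = np.quantile(score_a, 1 - accept_rate)
cut_b = np.quantile(score_b, 1 - accept_rate)
accept_a, accept_b = score_a >= cut_a, score_b >= cut_b

# Fraction of papers on which the two committees disagree.
print("disagreement rate:", (accept_a != accept_b).mean())
```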

AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

no code implementations • 2 Jun 2023 • Q. Vera Liao, Jennifer Wortman Vaughan

It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the intersection of AI and human-computer interaction (HCI) highlight that we must do so with a human-centered perspective: Transparency is fundamentally about supporting appropriate human understanding, and this understanding is sought by different stakeholders with different goals in different contexts.

GAM Coach: Towards Interactive and User-centered Algorithmic Recourse

1 code implementation • 27 Feb 2023 • Zijie J. Wang, Jennifer Wortman Vaughan, Rich Caruana, Duen Horng Chau

Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed.

Additive models • counterfactual
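
To make the recourse idea concrete, here is a brute-force sketch of the underlying goal: search small changes to user-editable features that flip a model's decision at minimal cost. GAM Coach itself solves this far more carefully (over GAM shape functions, with user-specified preferences); the model, features, and grids below are hypothetical.

```python
import itertools

def simple_recourse(predict, x, editable, grids):
    """Brute-force sketch: find the cheapest change to the editable
    features that flips `predict` from 0 to 1.  Illustration only."""
    best, best_cost = None, float("inf")
    for values in itertools.product(*(grids[f] for f in editable)):
        x_new = x.copy()
        for f, v in zip(editable, values):
            x_new[f] = v
        if predict(x_new) == 1:
            cost = sum(abs(x_new[f] - x[f]) for f in editable)
            if cost < best_cost:
                best, best_cost = x_new, cost
    return best

# Hypothetical loan example: income and debt are the editable inputs.
predict = lambda x: int(x["income"] - 0.5 * x["debt"] > 40)
x = {"income": 50.0, "debt": 40.0, "age": 30}
plan = simple_recourse(predict, x, ["income", "debt"],
                       {"income": [50, 55, 60, 65], "debt": [20, 30, 40]})
print(plan)  # cheapest feature changes that gain approval
```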

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience

no code implementations • 21 Feb 2023 • Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan

To address this problem, we bridge the literature on AI design and AI transparency to explore whether and how frameworks for transparent model reporting can support design ideation with pre-trained models.

Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions

no code implementations • 14 Feb 2023 • Helena Vasconcelos, Gagan Bansal, Adam Fourney, Q. Vera Liao, Jennifer Wortman Vaughan

Through a mixed-methods study with 30 programmers, we compare three conditions: providing the AI system's code completion alone, highlighting tokens with the lowest likelihood of being generated by the underlying generative model, and highlighting tokens with the highest predicted likelihood of being edited by a programmer.

Code Completion
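
A minimal sketch of the first highlighting condition: flag the completion tokens the generative model itself was least confident about, given per-token log-probabilities. The token/log-prob pairs and the cutoff are invented for illustration; the paper's second condition (predicted edit likelihood) would require a separate model and is not shown.

```python
# Made-up (token, logprob) pairs standing in for real model output.
completion = [("return", -0.1), ("sorted", -0.3), ("(", -0.05),
              ("items", -2.7), (",", -0.2), ("key=len", -3.1), (")", -0.1)]

THRESHOLD = -2.0  # illustrative cutoff, not a value from the paper

for token, logprob in completion:
    marker = " <-- low confidence" if logprob < THRESHOLD else ""
    print(f"{token:10s} {logprob:6.2f}{marker}")
```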

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

no code implementations • 18 Jan 2023 • Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal

AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong.

Decision Making

How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions?

no code implementations • 22 Nov 2022 • Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah

In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we survey the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception about their own papers after seeing the reviews.
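
A small sketch of the kind of calibration analysis such a survey enables: bin authors' predicted acceptance probabilities and compare each bin against the actual acceptance rate. The data below is synthetic, not the NeurIPS 2021 survey data.

```python
import numpy as np

rng = np.random.default_rng(1)
predicted = rng.uniform(0, 1, size=5000)  # authors' stated probabilities (toy)
accepted = rng.uniform(0, 1, size=5000) < 0.25 * (0.5 + predicted)  # toy outcomes

bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (predicted >= lo) & (predicted < hi)
    if mask.any():
        print(f"predicted {lo:.1f}-{hi:.1f}: "
              f"actual accept rate {accepted[mask].mean():.2f}")
```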

Interpretable Distribution Shift Detection using Optimal Transport

no code implementations • 4 Aug 2022 • Neha Hulkund, Nicolo Fusi, Jennifer Wortman Vaughan, David Alvarez-Melis

We propose a method to identify and characterize distribution shifts in classification datasets based on optimal transport.
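
A much-simplified sketch of the idea: quantify how far a target dataset has drifted from a source dataset, feature by feature, using a 1-D optimal transport (Wasserstein) distance. The paper's method works on richer joint-distribution structure and yields interpretable characterizations of the shift; this is only the flavor.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, size=(1000, 3))
target = rng.normal(loc=[0.0, 0.8, 0.0], size=(1000, 3))  # feature 1 shifted

# Per-feature 1-D Wasserstein distance between source and target samples.
for j in range(source.shape[1]):
    d = wasserstein_distance(source[:, j], target[:, j])
    print(f"feature {j}: W1 distance {d:.3f}")
```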

Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

2 code implementations • 30 Jun 2022 • Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana

Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed.

Additive models • BIG-bench Machine Learning • +1
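
A sketch of the kind of model editing this line of work (and the companion GAM Changer tool below) supports: train a glass-box GAM and directly overwrite part of one feature's learned shape function to reflect domain knowledge. This assumes the `interpret` package's Explainable Boosting Machine, whose fitted shape functions live in `term_scores_`; attribute names may differ across versions.

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

# Toy training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)

ebm = ExplainableBoostingClassifier(interactions=0)
ebm.fit(X, y)

# Suppose domain experts decide feature 0's contribution should never be
# negative (e.g., a monotonicity constraint from clinical knowledge):
scores = ebm.term_scores_[0]          # per-bin contributions for feature 0
ebm.term_scores_[0] = np.maximum(scores, 0.0)  # clamp the shape function
```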

Understanding Machine Learning Practitioners' Data Documentation Perceptions, Needs, Challenges, and Desiderata

no code implementations • 6 Jun 2022 • Amy K. Heger, Liz B. Marquis, Mihaela Vorvoreanu, Hanna Wallach, Jennifer Wortman Vaughan

Despite the fact that data documentation frameworks are often motivated from the perspective of responsible AI, participants did not make the connection between the questions that they were asked to answer and their responsible AI implications.

BIG-bench Machine Learning

REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research

1 code implementation • 5 May 2022 • Jessie J. Smith, Saleema Amershi, Solon Barocas, Hanna Wallach, Jennifer Wortman Vaughan

Transparency around limitations can improve the scientific rigor of research, help ensure appropriate interpretation of research findings, and make research claims more credible.

BIG-bench Machine Learning

Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support

no code implementations • 10 Dec 2021 • Michael Madaio, Lisa Egede, Hariharan Subramonyam, Jennifer Wortman Vaughan, Hanna Wallach

Various tools and practices have been developed to support practitioners in identifying, assessing, and mitigating fairness-related harms caused by AI systems.

Fairness

GAM Changer: Editing Generalized Additive Models with Interactive Visualization

1 code implementation • 6 Dec 2021 • Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana

Recent strides in interpretable machine learning (ML) research reveal that models exploit undesirable patterns in the data to make predictions, which can cause harm once deployed.

Additive models • Interpretable Machine Learning

Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs

no code implementations • 10 Mar 2021 • Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, Duncan Wadsworth, Hanna Wallach

Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple.
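
The conceptually simple core is easy to sketch: report a metric separately for each group instead of a single aggregate number, which can hide poor performance on smaller groups. The data and group sizes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=2000, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=2000)

# A toy system that is systematically worse on the smaller group B.
error_rate = np.where(group == "B", 0.3, 0.1)
y_pred = np.where(rng.uniform(size=2000) < error_rate, 1 - y_true, y_true)

print("overall accuracy:", (y_pred == y_true).mean())
for g in ["A", "B"]:
    mask = group == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: accuracy {acc:.3f}, n={mask.sum()}")
```

The aggregate number looks reasonable here while the disaggregated view exposes the gap; the paper's contribution is the many design choices and tradeoffs behind doing this well.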

Greedy Algorithm almost Dominates in Smoothed Contextual Bandits

no code implementations • 19 May 2020 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu

Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users in order to gain information that will lead to better decisions in the future.

Multi-Armed Bandits
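
The paper studies smoothed *contextual* bandits; the sketch below strips that down to a two-armed bandit to illustrate the exploration/exploitation tension the abstract describes, comparing a purely greedy learner with epsilon-greedy. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.5, 0.6])  # arm 1 is actually better

def run(epsilon, T=20_000):
    counts, sums, reward = np.ones(2), np.zeros(2), 0.0
    for _ in range(T):
        if rng.uniform() < epsilon:
            a = rng.integers(2)                # explore
        else:
            a = int(np.argmax(sums / counts))  # exploit current estimates
        r = float(rng.uniform() < true_means[a])
        counts[a] += 1; sums[a] += r; reward += r
    return reward / T

print("greedy        :", run(0.0))  # can lock onto the worse arm forever
print("epsilon-greedy:", run(0.1))
```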

No-Regret and Incentive-Compatible Online Learning

1 code implementation • ICML 2020 • Rupert Freeman, David M. Pennock, Chara Podimata, Jennifer Wortman Vaughan

First, we want the learning algorithm to be no-regret with respect to the best fixed expert in hindsight.
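
For the no-regret half of this goal, the classic baseline is multiplicative weights (Hedge), whose regret against the best fixed expert grows only as O(sqrt(T log K)). The sketch below implements plain Hedge on random losses; the paper's incentive-compatibility requirement (truthful expert reporting) is not captured here.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, eta = 5, 10_000, 0.05  # experts, rounds, learning rate (illustrative)

weights = np.ones(K)
total_loss, expert_loss = 0.0, np.zeros(K)
for _ in range(T):
    p = weights / weights.sum()
    losses = rng.uniform(size=K)        # stand-in for adversarial losses
    total_loss += p @ losses            # expected loss of the algorithm
    expert_loss += losses
    weights *= np.exp(-eta * losses)    # downweight experts that did badly

print("regret vs. best expert:", total_loss - expert_loss.min())
```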

Weight of Evidence as a Basis for Human-Oriented Explanations

1 code implementation • 29 Oct 2019 • David Alvarez-Melis, Hal Daumé III, Jennifer Wortman Vaughan, Hanna Wallach

Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods.

Philosophy
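
The weight of evidence of an observation x for a hypothesis y is the log-likelihood ratio log P(x | y) / P(x | not-y); explanations built on it decompose a prediction into per-attribute evidence for and against. A one-line numeric sketch with toy probabilities:

```python
import numpy as np

p_x_given_y = 0.6      # toy P(x | y)
p_x_given_not_y = 0.2  # toy P(x | not y)

# Positive weight of evidence: observing x speaks in favor of y.
woe = np.log(p_x_given_y / p_x_given_not_y)
print(f"weight of evidence: {woe:.3f} nats")
```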

Improving fairness in machine learning systems: What do industry practitioners need?

no code implementations • 13 Dec 2018 • Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, Hanna Wallach

The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention.

BIG-bench Machine Learning • Fairness

The Disparate Effects of Strategic Manipulation

no code implementations • 27 Aug 2018 • Lily Hu, Nicole Immorlica, Jennifer Wortman Vaughan

When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval.

General Classification
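
A toy sketch of the strategic-manipulation setting the abstract alludes to: an agent whose true score falls below an approval threshold can pay a per-unit cost to improve it, and groups facing higher costs end up approved less often. The threshold, costs, and benefit below are invented; they are not the paper's model parameters.

```python
THRESHOLD = 0.7  # illustrative approval cutoff

def best_response(score, cost_per_unit, benefit=1.0):
    gap = max(0.0, THRESHOLD - score)
    # Manipulate only if the benefit of approval exceeds the cost of closing the gap.
    return score + gap if gap * cost_per_unit <= benefit else score

for group, cost in [("low-cost group", 0.5), ("high-cost group", 6.0)]:
    final = best_response(score=0.5, cost_per_unit=cost)
    print(group, "approved:", final >= THRESHOLD)
```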

Using Search Queries to Understand Health Information Needs in Africa

no code implementations • 14 Jun 2018 • Rediet Abebe, Shawndra Hill, Jennifer Wortman Vaughan, Peter M. Small, H. Andrew Schwartz

The lack of comprehensive, high-quality health data in developing nations creates a roadblock for combating the impacts of disease.

Misconceptions

The Externalities of Exploration and How Data Diversity Helps Exploitation

no code implementations • 1 Jun 2018 • Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu

Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm.

Multi-Armed Bandits

Datasheets for Datasets

21 code implementations • 23 Mar 2018 • Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford

The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains.

BIG-bench Machine Learning

Manipulating and Measuring Model Interpretability

1 code implementation • 21 Feb 2018 • Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna Wallach

With machine learning models being increasingly used to aid decision making even in high-stakes domains, there has been a growing interest in developing interpretable models.

BIG-bench Machine Learning • Decision Making • +1

Tutorial: Making Better Use of the Crowd

no code implementations • ACL 2017 • Jennifer Wortman Vaughan

Over the last decade, crowdsourcing has been used to harness the power of human computation to solve tasks that are notoriously difficult to solve with computers alone, such as determining whether or not an image contains a tree, rating the relevance of a website, or verifying the phone number of a business.

Oracle-Efficient Online Learning and Auction Design

no code implementations • 5 Nov 2016 • Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan

We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle.
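
One way to see the oracle-efficient idea is Follow-the-Perturbed-Leader: the learner never enumerates its strategy class itself; each round it perturbs cumulative losses with random noise and asks an offline optimization oracle for the best strategy under those perturbed losses. The sketch below uses a trivial argmin over a finite class as the oracle; the paper's contribution is making this style of reduction work in far richer settings such as auction design.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 10, 5000  # strategies, rounds (illustrative)

def offline_oracle(losses):
    """Stands in for an arbitrary offline optimization oracle."""
    return int(np.argmin(losses))

cum, total = np.zeros(K), 0.0
for _ in range(T):
    # Follow-the-Perturbed-Leader: optimize perturbed cumulative losses.
    perturbed = cum - rng.exponential(scale=10.0, size=K)
    a = offline_oracle(perturbed)
    losses = rng.uniform(size=K)   # stand-in for adversarial losses
    total += losses[a]
    cum += losses

print("regret:", total - cum.min())
```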

The Possibilities and Limitations of Private Prediction Markets

no code implementations • 24 Feb 2016 • Rachel Cummings, David M. Pennock, Jennifer Wortman Vaughan

We consider the design of private prediction markets, financial markets designed to elicit predictions about uncertain events without revealing too much information about market participants' actions or beliefs.

Market Making with Decreasing Utility for Information

no code implementations • 30 Jul 2014 • Miroslav Dudík, Rafael Frongillo, Jennifer Wortman Vaughan

We study information elicitation in cost-function-based combinatorial prediction markets when the market maker's utility for information decreases over time.
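
The standard example of a cost-function-based market maker is Hanson's LMSR: a trade moving the outstanding share vector from q_old to q_new costs C(q_new) - C(q_old) with C(q) = b log Σ_i exp(q_i / b), and instantaneous prices are the gradient of C. The sketch below implements that baseline; the paper's time-varying-utility mechanism is not reproduced here.

```python
import numpy as np

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices: the gradient of C, a softmax over q / b."""
    z = np.exp(np.asarray(q) / b)
    return z / z.sum()

q = np.array([0.0, 0.0])         # two-outcome market, no shares sold yet
print("prices:", lmsr_prices(q))  # [0.5, 0.5]

trade = np.array([20.0, 0.0])     # buy 20 shares of outcome 0
print("cost of trade:", lmsr_cost(q + trade) - lmsr_cost(q))
print("new prices:", lmsr_prices(q + trade))
```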
