AI explanations are often mentioned as a way to improve human-AI decision-making.
22 Nov 2022 • Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah
In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we survey the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception about their own papers after seeing the reviews.
We propose a method to identify and characterize distribution shifts in classification datasets based on optimal transport.
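A minimal sketch of the general idea, not the paper's method: for two equally sized samples with uniform weights, the exact optimal transport cost reduces to a linear assignment problem, so a simple shift score can be computed with off-the-shelf SciPy routines. The squared-Euclidean cost and the synthetic data below are illustrative assumptions.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def ot_shift_score(source_X, target_X):
    """Average transport cost (squared Euclidean) between two equal-size samples."""
    cost = cdist(source_X, target_X, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)   # exact OT for uniform weights
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(200, 5))
shifted = rng.normal(0.5, 1.0, size=(200, 5))      # mean-shifted sample
print(ot_shift_score(reference, reference[::-1]))  # ~0 for identical data
print(ot_shift_score(reference, shifted))          # larger under shift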
Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed.
Despite the fact that data documentation frameworks are often motivated from the perspective of responsible AI, participants did not make the connection between the questions that they were asked to answer and their responsible AI implications.
Transparency around limitations can improve the scientific rigor of research, help ensure appropriate interpretation of research findings, and make research claims more credible.
Various tools and practices have been developed to support practitioners in identifying, assessing, and mitigating fairness-related harms caused by AI systems.
Recent advances in interpretable machine learning (ML) research reveal that models exploit undesirable patterns in the data to make predictions, which can cause harm in deployment.
We take inspiration from the study of human explanation to inform the design and evaluation of interpretability methods in machine learning.
Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple.
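As a concrete illustration, not a prescribed protocol: a disaggregated evaluation computes the same metric separately for each group. The accuracy metric and the toy group labels below are illustrative assumptions.

import numpy as np

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} for each distinct group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
print(disaggregated_accuracy(y_true, y_pred, groups))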
Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users in order to gain information that will lead to better decisions in the future.
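A minimal sketch of that exploration/exploitation trade-off using a standard epsilon-greedy bandit; the click-rate model, epsilon, and horizon are illustrative assumptions rather than the deployed systems discussed above.

import numpy as np

rng = np.random.default_rng(0)
true_ctr = np.array([0.05, 0.08, 0.12])        # hidden click rates per item
counts = np.zeros(3)
rewards = np.zeros(3)
epsilon = 0.1

for t in range(10_000):
    if rng.random() < epsilon:                 # explore: pick a random item
        arm = int(rng.integers(3))
    else:                                      # exploit: best estimate so far
        estimates = rewards / np.maximum(counts, 1)
        arm = int(np.argmax(estimates))
    reward = rng.random() < true_ctr[arm]      # simulated click
    counts[arm] += 1
    rewards[arm] += reward

print(rewards / np.maximum(counts, 1))         # estimated click rates per item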
Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods.
AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD).
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention.
When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval.
The lack of comprehensive, high-quality health data in developing nations creates a roadblock for combating the impacts of disease.
Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm.
The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains.
With machine learning models being increasingly used to aid decision making even in high-stakes domains, there has been a growing interest in developing interpretable models.
Over the last decade, crowdsourcing has been used to harness the power of human computation to solve tasks that are notoriously difficult to solve with computers alone, such as determining whether or not an image contains a tree, rating the relevance of a website, or verifying the phone number of a business.
We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle.
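One classic way such an oracle is used is Follow-the-Perturbed-Leader: each round, the learner feeds the cumulative losses plus fresh random perturbations to the offline optimizer. The sketch below illustrates that pattern on a toy finite action set; it is not the paper's specific algorithm, and the exponential perturbations and random loss model are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_actions, horizon, eta = 5, 1000, 10.0

def offline_oracle(total_losses):
    """Offline optimizer: best single action for the given total losses."""
    return int(np.argmin(total_losses))

cumulative = np.zeros(n_actions)
learner_loss = 0.0
for t in range(horizon):
    losses = rng.random(n_actions)             # adversary's losses this round
    noise = rng.exponential(eta, n_actions)    # fresh perturbation
    action = offline_oracle(cumulative - noise)
    learner_loss += losses[action]
    cumulative += losses

print(learner_loss, cumulative.min())          # learner vs. best fixed action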
We consider the design of private prediction markets, financial markets designed to elicit predictions about uncertain events without revealing too much information about market participants' actions or beliefs.
We study information elicitation in cost-function-based combinatorial prediction markets when the market maker's utility for information decreases over time.
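For context, a standard cost-function-based market maker is the logarithmic market scoring rule (LMSR), where each trade is priced by a difference of a convex cost function and instantaneous prices are its gradient. The sketch below shows LMSR pricing for a two-outcome market; the liquidity parameter and trades are illustrative, and the time-decaying utility studied in the paper is not modeled.

import numpy as np

b = 10.0                                     # liquidity parameter

def cost(q):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * np.log(np.sum(np.exp(q / b)))

def prices(q):
    """Instantaneous prices: the gradient of C, which sums to 1."""
    e = np.exp(q / b)
    return e / e.sum()

q = np.zeros(2)                              # outstanding shares per outcome
print(prices(q))                             # [0.5, 0.5] before any trades
trade = np.array([5.0, 0.0])                 # a trader buys outcome-0 shares
print(cost(q + trade) - cost(q))             # payment charged for the trade
print(prices(q + trade))                     # prices shift toward outcome 0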
In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks.