Search Results for author: Ben Hutchinson

Found 16 papers, 3 papers with code

Evaluation Gaps in Machine Learning Practice

no code implementations 11 May 2022 Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller, Vinodkumar Prabhakaran

Forming a reliable judgement of a machine learning (ML) model's appropriateness for an application ecosystem is critical for its responsible use, and requires considering a broad range of factors including harms, benefits, and responsibilities.

Thinking Beyond Distributions in Testing Machine Learned Models

no code implementations 6 Dec 2021 Negar Rostamzadeh, Ben Hutchinson, Christina Greer, Vinodkumar Prabhakaran

Testing practices within the machine learning (ML) community have centered around assessing a learned model's predictive performance measured against a test dataset, often drawn from the same distribution as the training dataset.


Re-imagining Algorithmic Fairness in India and Beyond

no code implementations 25 Jan 2021 Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran

Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.


Non-portability of Algorithmic Fairness in India

no code implementations 3 Dec 2020 Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran

Conventional algorithmic fairness is Western in its sub-groups, values, and optimizations.

Fairness, Translation

Social Biases in NLP Models as Barriers for Persons with Disabilities

no code implementations ACL 2020 Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, Stephen Denuyl

Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models.

Sentiment Analysis

Diversity and Inclusion Metrics in Subset Selection

no code implementations 9 Feb 2020 Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben Hutchinson, Alex Hanna, Timnit Gebru, Jamie Morgenstern

The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives.


Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing

no code implementations 3 Jan 2020 Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, Parker Barnes

Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms.

Computers and Society

Perturbation Sensitivity Analysis to Detect Unintended Model Biases

no code implementations IJCNLP 2019 Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell

Data-driven statistical Natural Language Processing (NLP) techniques leverage large amounts of language data to build models that can understand language.

Sentiment Analysis

Image Counterfactual Sensitivity Analysis for Detecting Unintended Bias

no code implementations 14 Jun 2019 Emily Denton, Ben Hutchinson, Margaret Mitchell, Timnit Gebru, Andrew Zaldivar

Facial analysis models are increasingly used in applications that have serious impacts on people's lives, ranging from authentication to surveillance tracking.


50 Years of Test (Un)fairness: Lessons for Machine Learning

no code implementations 25 Nov 2018 Ben Hutchinson, Margaret Mitchell

We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged.


Model Cards for Model Reporting

8 code implementations 5 Oct 2018 Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru

Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.
