Search Results for author: Margaret Mitchell

Found 39 papers, 8 papers with code

Measuring Model Biases in the Absence of Ground Truth

no code implementations · 5 Mar 2021 · Osman Aka, Ken Burke, Alex Bäuerle, Christina Greer, Margaret Mitchell

By treating a classification model's predictions for a given image as a set of labels analogous to a bag of words, we rank the biases that a model has learned with respect to different identity labels.

Fairness · Image Classification
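The ranking idea above can be made concrete with a simple association statistic. Below is a minimal sketch of one such measure, pointwise mutual information (PMI) between an identity label and each co-occurring predicted label, computed over per-image prediction sets; it is not the paper's exact metric, and the function name and toy data are invented.

```python
# Hedged sketch: rank labels by PMI with an identity label, treating each
# image's predicted labels as a bag of words. All names are illustrative.
from collections import Counter
import math

def rank_label_associations(predictions, identity):
    """predictions: list of sets of predicted labels, one set per image."""
    n_images = len(predictions)
    label_counts = Counter()
    pair_counts = Counter()
    for labels in predictions:
        label_counts.update(labels)
        if identity in labels:
            for other in labels - {identity}:
                pair_counts[other] += 1
    p_identity = label_counts[identity] / n_images
    scores = {}
    for label, co_count in pair_counts.items():
        p_label = label_counts[label] / n_images
        p_joint = co_count / n_images
        scores[label] = math.log(p_joint / (p_identity * p_label))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: labels most associated with "woman" in a model's predictions.
preds = [{"woman", "kitchen"}, {"man", "car"}, {"woman", "kitchen", "dog"}]
print(rank_label_associations(preds, "woman"))
```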

White Paper - Creating a Repository of Objectionable Online Content: Addressing Undesirable Biases and Ethical Considerations

no code implementations · 23 Feb 2021 · Thamar Solorio, Mahsa Shafaei, Christos Smailis, Isabelle Augenstein, Margaret Mitchell, Ingrid Stapf, Ioannis Kakadiaris

This white paper summarizes the authors' structured brainstorming regarding ethical considerations for creating an extensive repository of online content labeled with tags that describe potentially questionable content for young viewers.

Diversity and Inclusion Metrics in Subset Selection

no code implementations · 9 Feb 2020 · Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben Hutchinson, Alex Hanna, Timnit Gebru, Jamie Morgenstern

The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives.

Fairness

Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing

no code implementations · 3 Jan 2020 · Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, Emily Denton

Although essential to revealing biased performance, well-intentioned attempts at algorithmic auditing can have effects that may harm the very populations these measures are meant to protect.

Computers and Society

Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing

no code implementations · 3 Jan 2020 · Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, Parker Barnes

Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms.

Computers and Society

Perturbation Sensitivity Analysis to Detect Unintended Model Biases

no code implementations · IJCNLP 2019 · Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell

Data-driven statistical Natural Language Processing (NLP) techniques leverage large amounts of language data to build models that can understand language.

Sentiment Analysis
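Although the snippet above is general, the title's method can be sketched simply: substitute different identity terms into otherwise-identical sentences and measure how much a model's score moves. The following is a simplified illustration, not the paper's exact sensitivity definition; every name and the toy scorer are invented.

```python
# Hedged sketch of perturbation sensitivity analysis. `score` stands in
# for any sentence-scoring model (e.g., a sentiment classifier).
import statistics

def perturbation_sensitivity(score, templates, terms):
    deviations = []
    for template in templates:
        scores = [score(template.format(term=term)) for term in terms]
        mean = statistics.mean(scores)
        deviations.extend(abs(s - mean) for s in scores)
    return statistics.mean(deviations)  # average score shift per perturbation

templates = ["{term} is a great neighbor.", "I had lunch with {term} today."]
terms = ["Alice", "Ahmed", "Maria", "Wei"]
toy_score = lambda text: len(text) / 40.0  # placeholder for a real model
print(perturbation_sensitivity(toy_score, templates, terms))
```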

Image Counterfactual Sensitivity Analysis for Detecting Unintended Bias

no code implementations · 14 Jun 2019 · Emily Denton, Ben Hutchinson, Margaret Mitchell, Timnit Gebru, Andrew Zaldivar

Facial analysis models are increasingly used in applications that have serious impacts on people's lives, ranging from authentication to surveillance tracking.

Fairness

50 Years of Test (Un)fairness: Lessons for Machine Learning

no code implementations · 25 Nov 2018 · Ben Hutchinson, Margaret Mitchell

We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged.

Fairness

Model Cards for Model Reporting

8 code implementations · 5 Oct 2018 · Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru

Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.
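As a rough illustration, a model card can be encoded as structured data following the section headings the paper proposes; the field contents below are invented placeholders, not a real card.

```python
# Hedged sketch: a model card as a plain data structure, keyed by the
# section headings from the paper. All field values are illustrative.
model_card = {
    "model_details": {"name": "toy-classifier", "version": "0.1"},
    "intended_use": "Demonstration only; not for production decisions.",
    "factors": ["gender", "age group", "skin tone"],
    "metrics": ["accuracy", "false positive rate per subgroup"],
    "evaluation_data": "Held-out split of the training corpus.",
    "training_data": "Synthetic examples generated for this demo.",
    "quantitative_analyses": {"accuracy_overall": 0.91,
                              "accuracy_by_group": {"A": 0.93, "B": 0.88}},
    "ethical_considerations": "Review subgroup gaps before any use.",
    "caveats_and_recommendations": "Re-evaluate on in-domain data first.",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```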

Mitigating Unwanted Biases with Adversarial Learning

1 code implementation · 22 Jan 2018 · Brian Hu Zhang, Blake Lemoine, Margaret Mitchell

Machine learning is a tool for building models that accurately represent input training data.

Fairness · General Classification
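A minimal sketch of the adversarial setup, assuming PyTorch is available: an adversary tries to recover a protected attribute from the predictor's output, and the predictor is penalized when it succeeds. The fixed-weight penalty below is a common simplification; the paper itself uses a projection-based gradient update rather than a constant `alpha`.

```python
# Hedged sketch of adversarial debiasing (simplified combined loss).
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 10)                    # features
y = (x[:, 0] > 0).float().unsqueeze(1)      # task label
z = (x[:, 1] > 0).float().unsqueeze(1)      # protected attribute

predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0                                 # adversary weight (assumed)

for step in range(200):
    logits = predictor(x)
    # Adversary step: predict z from the predictor's (detached) output.
    adv_loss = bce(adversary(logits.detach()), z)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()
    # Predictor step: solve the task while fooling the adversary.
    pred_loss = bce(logits, y) - alpha * bce(adversary(logits), z)
    opt_p.zero_grad(); pred_loss.backward(); opt_p.step()
```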

Multi-Task Learning for Mental Health using Social Media Text

no code implementations · 10 Dec 2017 · Adrian Benton, Margaret Mitchell, Dirk Hovy

We introduce initial groundwork for estimating suicide risk and mental health in a deep learning framework.

Gender Prediction · Multi-Task Learning
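A minimal sketch of the multi-task pattern, assuming PyTorch: a shared encoder feeds one head per task (here a risk task and an auxiliary demographic task) and the per-task losses are summed. All sizes, task names, and features are invented, not the paper's actual model.

```python
# Hedged sketch of multi-task learning with a shared encoder.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, input_dim=300, hidden=64,
                 tasks=("suicide_risk", "gender")):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, 1) for t in tasks})

    def forward(self, x):
        h = self.encoder(x)                      # shared representation
        return {t: head(h) for t, head in self.heads.items()}

model = MultiTaskModel()
x = torch.randn(4, 300)                          # e.g., averaged word vectors
outputs = model(x)
labels = {t: torch.randint(0, 2, (4, 1)).float() for t in outputs}
# Total loss is a (possibly weighted) sum over per-task losses.
loss = sum(nn.functional.binary_cross_entropy_with_logits(outputs[t],
                                                          labels[t])
           for t in outputs)
loss.backward()
```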

InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity

1 code implementation · 1 Dec 2017 · Hee Jung Ryu, Hartwig Adam, Margaret Mitchell

We demonstrate an approach to face attribute detection that retains or improves attribute detection accuracy across gender and race subgroups by learning demographic information prior to learning the attribute detection task.
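The two-stage recipe can be sketched as pretraining an encoder on demographic labels, then training the attribute head on top of that representation. The toy MLP below, assuming PyTorch, only illustrates the ordering; the paper's model is a face CNN and its training details differ.

```python
# Hedged sketch: learn demographics first, then face attributes on top.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
demo_head = nn.Linear(64, 4)     # stage 1: demographic categories (assumed)
attr_head = nn.Linear(64, 10)    # stage 2: face attributes (assumed)

x = torch.randn(8, 128)
demo_labels = torch.randint(0, 4, (8,))

# Stage 1: learn a demographic-aware representation.
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(demo_head.parameters()), lr=1e-3)
loss = nn.functional.cross_entropy(demo_head(encoder(x)), demo_labels)
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: train the attribute head on the (here frozen) encoder.
for p in encoder.parameters():
    p.requires_grad_(False)
attr_labels = torch.randint(0, 2, (8, 10)).float()
opt2 = torch.optim.Adam(attr_head.parameters(), lr=1e-3)
loss2 = nn.functional.binary_cross_entropy_with_logits(
    attr_head(encoder(x)), attr_labels)
opt2.zero_grad(); loss2.backward(); opt2.step()
```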

Memory-augmented Attention Modelling for Videos

1 code implementation · 7 Nov 2016 · Rasool Fakoor, Abdel-rahman Mohamed, Margaret Mitchell, Sing Bing Kang, Pushmeet Kohli

We present a method to improve video description generation by modeling higher-order interactions between video frames and described concepts.

Video Description
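A schematic of the memory-augmented attention idea, with invented shapes and a deliberately crude decoder: the query used to attend over frame features folds in a running memory of what was attended at earlier decoding steps. This illustrates the concept only, not the paper's architecture.

```python
# Hedged sketch: attention over video frames with a memory of past
# attention. All dimensions and update rules here are illustrative.
import torch
import torch.nn.functional as F

T, d = 12, 32                        # frames, feature size
frames = torch.randn(T, d)           # per-frame features
state = torch.randn(d)               # current decoder state
memory = torch.zeros(d)              # running summary of past attention

for step in range(3):                # a few decoding steps
    query = state + memory           # memory-augmented query (simplification)
    weights = F.softmax(frames @ query / d ** 0.5, dim=0)
    context = weights @ frames       # attended video summary for this word
    memory = memory + context        # remember what was already attended
    state = torch.tanh(state + context)  # stand-in for the decoder update
```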

Generating Natural Questions About an Image

1 code implementation · ACL 2016 · Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, Lucy Vanderwende

There has been an explosion of work in the vision & language community during the past few years, from image captioning to video transcription and answering questions about images.

Image Captioning · Question Generation

Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels

no code implementations · CVPR 2016 · Ishan Misra, C. Lawrence Zitnick, Margaret Mitchell, Ross Girshick

When human annotators are given a choice about what to label in an image, they apply their own subjective judgments on what to ignore and what to mention.

Image Captioning · Image Classification
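One way to read the decoupling at the heart of this work: the probability that humans mention a concept factors into whether it is visually present and whether an annotator would bother to report it. A hedged sketch follows, with invented heads and shapes; the paper's actual model and training objective differ.

```python
# Hedged sketch: factor "will humans mention this label?" into
# "is it visually present?" times "would a human report it?".
import torch
import torch.nn as nn

image_features = torch.randn(4, 64)
presence_head = nn.Linear(64, 1)    # latent: is the concept in the image?
relevance_head = nn.Linear(64, 1)   # latent: would an annotator mention it?

p_present = torch.sigmoid(presence_head(image_features))
p_report = torch.sigmoid(relevance_head(image_features))
p_mention = p_present * p_report    # what the noisy human labels supervise

# Training matches p_mention to human-centric labels; afterwards,
# p_present alone serves as the less-biased visual classifier.
```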

deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets

no code implementations · IJCNLP 2015 · Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, Bill Dolan

We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs.
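A simplified, unigram-level illustration of the weighting idea: each reference carries a human quality weight in [-1, 1], and a matched token is credited with the best weight among references containing it, so good references reward matches and bad ones penalize them. The real metric handles higher-order n-grams, count clipping, and a brevity penalty; all names below are invented.

```python
# Hedged sketch of the deltaBLEU weighting idea at the unigram level.
from collections import Counter

def delta_unigram_score(hypothesis, weighted_refs):
    """weighted_refs: list of (token_list, weight) pairs, weight in [-1, 1]."""
    hyp_counts = Counter(hypothesis)
    total, credit = sum(hyp_counts.values()), 0.0
    for token, count in hyp_counts.items():
        weights = [w for ref, w in weighted_refs if token in ref]
        if weights:
            credit += count * max(weights)
    return credit / total if total else 0.0

refs = [(["what", "a", "great", "idea"], 1.0),
        (["meh", "who", "cares"], -0.6)]
print(delta_unigram_score(["great", "idea"], refs))   # rewarded: 1.0
print(delta_unigram_score(["who", "cares"], refs))    # penalized: -0.6
```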
