Search Results for author: Su Lin Blodgett

Found 23 papers, 4 papers with code

Language (Technology) is Power: A Critical Survey of "Bias" in NLP

1 code implementation 28 May 2020 Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach

We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process.

Monte Carlo Syntax Marginals for Exploring and Using Dependency Parses

1 code implementation NAACL 2018 Katherine A. Keith, Su Lin Blodgett, Brendan O'Connor

Dependency parsing research, which has made significant gains in recent years, typically focuses on improving the accuracy of single-tree predictions.

Dependency Parsing, Sentence

Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English

no code implementations 30 Jun 2017 Su Lin Blodgett, Brendan O'Connor

We highlight an important frontier in algorithmic fairness: disparity in the quality of natural language processing algorithms when applied to language from authors of different social groups.

Fairness, Language Identification

Visualizing textual models with in-text and word-as-pixel highlighting

no code implementations 20 Jun 2016 Abram Handler, Su Lin Blodgett, Brendan O'Connor

We explore two techniques which use color to make sense of statistical text models.

Twitter Universal Dependency Parsing for African-American and Mainstream American English

no code implementations ACL 2018 Su Lin Blodgett, Johnny Wei, Brendan O'Connor

Due to the presence of both Twitter-specific conventions and non-standard and dialectal language, Twitter presents a significant parsing challenge to current dependency parsing tools.

Dependency Parsing, Information Retrieval, +3

A Dataset and Classifier for Recognizing Social Media English

no code implementations WS 2017 Su Lin Blodgett, Johnny Wei, Brendan O'Connor

While language identification works well on standard texts, it performs much worse on social media language, in particular dialectal language, even for English.

Language Identification, Language Modelling

How to Write a Bias Statement: Recommendations for Submissions to the Workshop on Gender Bias in NLP

no code implementations 7 Apr 2021 Christian Hardmeier, Marta R. Costa-jussà, Kellie Webster, Will Radford, Su Lin Blodgett

At the Workshop on Gender Bias in NLP (GeBNLP), we'd like to encourage authors to give explicit consideration to the wider aspects of bias and its social implications.

Beyond "Fairness:" Structural (In)justice Lenses on AI for Education

no code implementations 18 May 2021 Michael Madaio, Su Lin Blodgett, Elijah Mayfield, Ezekiel Dixon-Román

Educational technologies, and the systems of schooling in which they are deployed, enact particular ideologies about what is important to know and how learners should learn.

Fairness

A Survey of Race, Racism, and Anti-Racism in NLP

no code implementations ACL 2021 Anjalie Field, Su Lin Blodgett, Zeerak Waseem, Yulia Tsvetkov

Despite inextricable ties between race and language, little work has considered race in NLP research and development.

Risks of AI Foundation Models in Education

no code implementations 19 Oct 2021 Su Lin Blodgett, Michael Madaio

If the authors of a recent Stanford report (Bommasani et al., 2021) on the opportunities and risks of "foundation models" are to be believed, these models represent a paradigm shift for AI and for the domains in which they will supposedly be used, including education.

Examining Political Rhetoric with Epistemic Stance Detection

1 code implementation 29 Dec 2022 Ankita Gupta, Su Lin Blodgett, Justin H Gross, Brendan O'Connor

Participants in political discourse employ rhetorical strategies, such as hedging, attributions, or denials, to display varying degrees of belief commitments to claims proposed by themselves or others.

Stance Detection

Fairness and Sequential Decision Making: Limits, Lessons, and Opportunities

no code implementations 13 Jan 2023 Samer B. Nashed, Justin Svegliato, Su Lin Blodgett

As automated decision making and decision assistance systems become common in everyday life, research on the prevention or mitigation of potential harms that arise from decisions made by these systems has proliferated.

Decision Making, Fairness

It Takes Two to Tango: Navigating Conceptualizations of NLP Tasks and Measurements of Performance

no code implementations 15 May 2023 Arjun Subramonian, Xingdi Yuan, Hal Daumé III, Su Lin Blodgett

Progress in NLP is increasingly measured through benchmarks; hence, contextualizing progress requires understanding when and why practitioners may disagree about the validity of benchmarks.

Coreference Resolution, Question Answering

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models

no code implementations 22 May 2023 Seraphina Goldfarb-Tarrant, Eddie Ungless, Esma Balkir, Su Lin Blodgett

Bias research in NLP seeks to analyse models for social biases, thus helping NLP practitioners uncover, measure, and mitigate social harms.

Experimental Design

Evaluating the Social Impact of Generative AI Systems in Systems and Society

no code implementations 9 Jun 2023 Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, Apostol Vassilev

We move toward a standard approach for evaluating a generative AI system of any modality, in two overarching categories: what can be evaluated in a base system that has no predetermined application, and what can be evaluated in society.

"One-size-fits-all"? Observations and Expectations of NLG Systems Across Identity-Related Language Features

no code implementations 23 Oct 2023 Li Lucy, Su Lin Blodgett, Milad Shokouhi, Hanna Wallach, Alexandra Olteanu

Fairness-related assumptions about what constitutes appropriate NLG system behaviors range from invariance, where systems are expected to respond identically to social groups, to adaptation, where responses should instead vary across them.

Fairness

Responsible AI Considerations in Text Summarization Research: A Review of Current Practices

no code implementations 18 Nov 2023 Yu Lu Liu, Meng Cao, Su Lin Blodgett, Jackie Chi Kit Cheung, Alexandra Olteanu, Adam Trischler

We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.

Text Summarization

Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

no code implementations 6 Feb 2024 Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett

However, certain stereotype-violating errors are more experientially harmful for men, potentially due to perceived threats to masculinity.

Fairness, Image Retrieval
