Search Results for author: Vinodkumar Prabhakaran

Found 56 papers, 4 papers with code

Online Abuse and Human Rights: WOAH Satellite Session at RightsCon 2020

no code implementations EMNLP (ALW) 2020 Vinodkumar Prabhakaran, Zeerak Waseem, Seyi Akiwowo, Bertie Vidgen

In 2020, the Workshop on Online Abuse and Harms (WOAH) held a satellite panel at RightsCon 2020, an international human rights conference.

ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation

no code implementations 12 Jan 2024 Akshita Jha, Vinodkumar Prabhakaran, Remi Denton, Sarah Laszlo, Shachi Dave, Rida Qadri, Chandan K. Reddy, Sunipa Dev

First, we show that stereotypical attributes in ViSAGe are three times as likely to be present in generated images of the corresponding identities as other attributes, and that the offensiveness of these depictions is especially high for identities from Africa, South America, and South East Asia.

Text-to-Image Generation

Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates

no code implementations 11 Dec 2023 Aida Davani, Mark Díaz, Dylan Baker, Vinodkumar Prabhakaran

More importantly, we find that individual moral values play a crucial role in shaping these variations: moral concerns about Care and Purity are significant mediating factors driving cross-cultural differences.

SoUnD Framework: Analyzing (So)cial Representation in (Un)structured (D)ata

no code implementations 28 Nov 2023 Mark Díaz, Sunipa Dev, Emily Reif, Emily Denton, Vinodkumar Prabhakaran

The unstructured nature of data used in foundation model development is a challenge to systematic analyses for making data use and documentation decisions.

A Framework to Assess (Dis)agreement Among Diverse Rater Groups

no code implementations 9 Nov 2023 Vinodkumar Prabhakaran, Christopher Homan, Lora Aroyo, Alicia Parrish, Alex Taylor, Mark Díaz, Ding Wang

Recent advancements in conversational AI have created an urgent need for safety guardrails that prevent users from being exposed to offensive and dangerous content.


MD3: The Multi-Dialect Dataset of Dialogues

no code implementations 19 May 2023 Jacob Eisenstein, Vinodkumar Prabhakaran, Clara Rivera, Dorottya Demszky, Devyani Sharma

We introduce a new dataset of conversational speech representing English from India, Nigeria, and the United States.

SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models

1 code implementation 19 May 2023 Akshita Jha, Aida Davani, Chandan K. Reddy, Shachi Dave, Vinodkumar Prabhakaran, Sunipa Dev

Stereotype benchmark datasets are crucial to detect and mitigate social stereotypes about groups of people in NLP models.

Cultural Incongruencies in Artificial Intelligence

no code implementations 19 Nov 2022 Vinodkumar Prabhakaran, Rida Qadri, Ben Hutchinson

Artificial intelligence (AI) systems attempt to imitate human behavior.


Underspecification in Scene Description-to-Depiction Tasks

no code implementations 11 Oct 2022 Ben Hutchinson, Jason Baldridge, Vinodkumar Prabhakaran

Questions regarding implicitness, ambiguity and underspecification are crucial for understanding the task validity and ethical concerns of multimodal image+text systems, yet have received little attention to date.


A Human Rights-Based Approach to Responsible AI

no code implementations 6 Oct 2022 Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru, Iason Gabriel

Research on fairness, accountability, transparency and ethics of AI-based interventions in society has gained much-needed momentum in recent years.

Ethics, Fairness

Evaluation Gaps in Machine Learning Practice

no code implementations 11 May 2022 Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller, Vinodkumar Prabhakaran

Forming a reliable judgement of a machine learning (ML) model's appropriateness for an application ecosystem is critical for its responsible use, and requires considering a broad range of factors including harms, benefits, and responsibilities.

BIG-bench Machine Learning

Thinking Beyond Distributions in Testing Machine Learned Models

no code implementations 6 Dec 2021 Negar Rostamzadeh, Ben Hutchinson, Christina Greer, Vinodkumar Prabhakaran

Testing practices within the machine learning (ML) community have centered around assessing a learned model's predictive performance measured against a test dataset, often drawn from the same distribution as the training dataset.

BIG-bench Machine Learning, Fairness

On Releasing Annotator-Level Labels and Information in Datasets

no code implementations EMNLP (LAW, DMR) 2021 Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, Mark Díaz

A common practice in building NLP datasets, especially using crowd-sourced annotations, involves obtaining multiple annotator judgements on the same data instances, which are then flattened to produce a single "ground truth" label or score, through majority voting, averaging, or adjudication.
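The flattening step described here can be sketched in a few lines. This is a minimal illustration, not the paper's method; the label names and scores below are hypothetical:

```python
from collections import Counter

def majority_vote(labels):
    """Flatten multiple annotator labels into a single "ground truth" label
    by majority vote. Ties are broken by first-encountered label."""
    return Counter(labels).most_common(1)[0][0]

def average_score(scores):
    """Flatten multiple numeric annotator scores into one by averaging."""
    return sum(scores) / len(scores)

# Three hypothetical annotators judge the same instance:
ratings = ["offensive", "not_offensive", "offensive"]
print(majority_vote(ratings))          # majority label of the three ratings
print(average_score([1.0, 0.0, 1.0]))  # mean of the three scores
```

This flattening discards which annotator said what, which is exactly the information the annotator-level release discussed in the paper would preserve.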

Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations

no code implementations 12 Oct 2021 Aida Mostafazadeh Davani, Mark Díaz, Vinodkumar Prabhakaran

Majority voting and averaging are common approaches employed to resolve annotator disagreements and derive single ground truth labels from multiple annotations.

Binary Classification

Detecting Cross-Geographic Biases in Toxicity Modeling on Social Media

no code implementations WNUT (ACL) 2021 Sayan Ghosh, Dylan Baker, David Jurgens, Vinodkumar Prabhakaran

Online social media platforms increasingly rely on Natural Language Processing (NLP) techniques to detect abusive content at scale in order to mitigate the harms it causes to their users.

Bias Detection

Re-imagining Algorithmic Fairness in India and Beyond

no code implementations 25 Jan 2021 Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran

Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.


Non-portability of Algorithmic Fairness in India

no code implementations 3 Dec 2020 Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran

Conventional algorithmic fairness is Western in its sub-groups, values, and optimizations.

Fairness, Translation

Learning to Recognize Dialect Features

no code implementations NAACL 2021 Dorottya Demszky, Devyani Sharma, Jonathan H. Clark, Vinodkumar Prabhakaran, Jacob Eisenstein

Evaluation on a test set of 22 dialect features of Indian English demonstrates that these models learn to recognize many features with high accuracy, and that a few minimal pairs can be as effective for training as thousands of labeled examples.

Extending the Machine Learning Abstraction Boundary: A Complex Systems Approach to Incorporate Societal Context

no code implementations 17 Jun 2020 Donald Martin Jr., Vinodkumar Prabhakaran, Jill Kuhlberg, Andrew Smart, William S. Isaac

Machine learning (ML) fairness research tends to focus primarily on mathematically based interventions on often opaque algorithms or models and/or their immediate inputs and outputs.

BIG-bench Machine Learning, Fairness

Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics

no code implementations 15 May 2020 Donald Martin Jr., Vinodkumar Prabhakaran, Jill Kuhlberg, Andrew Smart, William S. Isaac

Recent research on algorithmic fairness has highlighted that the problem formulation phase of ML system development can be a key source of bias that has significant downstream impacts on ML system fairness outcomes.

BIG-bench Machine Learning, Fairness

Social Biases in NLP Models as Barriers for Persons with Disabilities

no code implementations ACL 2020 Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, Stephen Denuyl

Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models.

Sentiment Analysis

Perturbation Sensitivity Analysis to Detect Unintended Model Biases

no code implementations IJCNLP 2019 Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell

Data-driven statistical Natural Language Processing (NLP) techniques leverage large amounts of language data to build models that can understand language.

Sentiment Analysis

Power Networks: A Novel Neural Architecture to Predict Power Relations

no code implementations COLING 2018 Michelle Lam, Catherina Xu, Angela Kong, Vinodkumar Prabhakaran

Can language analysis reveal the underlying social power relations that exist between participants of an interaction?

Socially Responsible NLP

no code implementations NAACL 2018 Yulia Tsvetkov, Vinodkumar Prabhakaran, Rob Voigt

As language technologies have become increasingly prevalent, there is a growing awareness that decisions we make about our data, methods, and tools are often tied up with their impact on people and societies.

Decision Making, Ethics

Author Commitment and Social Power: Automatic Belief Tagging to Infer the Social Context of Interactions

no code implementations NAACL 2018 Vinodkumar Prabhakaran, Premkumar Ganeshkumar, Owen Rambow

Understanding how social power structures affect the way we interact with one another is of great interest to social scientists who want to answer fundamental questions about human behavior, as well as to computer scientists who want to build automatic methods to infer the social contexts of interactions.


Detecting Institutional Dialog Acts in Police Traffic Stops

no code implementations TACL 2018 Vinodkumar Prabhakaran, Camilla Griffiths, Hang Su, Prateek Verma, Nelson Morgan, Jennifer L. Eberhardt, Dan Jurafsky

We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops.

Speech Recognition

Dialog Structure Through the Lens of Gender, Gender Environment, and Power

no code implementations 12 Jun 2017 Vinodkumar Prabhakaran, Owen Rambow

In this paper, we study the interaction of power, gender, and dialog behavior in organizational interactions.

Computational Argumentation Quality Assessment in Natural Language

no code implementations EACL 2017 Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, Benno Stein

Research on computational argumentation faces the problem of how to automatically assess the quality of an argument or argumentation.

A Corpus of Wikipedia Discussions: Over the Years, with Topic, Power and Gender Labels

no code implementations LREC 2016 Vinodkumar Prabhakaran, Owen Rambow

In order to gain a deep understanding of how social context manifests in interactions, we need data that represents interactions from a large community of people over a long period of time, capturing different aspects of social context.

Annotations for Power Relations on Email Threads

no code implementations LREC 2012 Vinodkumar Prabhakaran, Huzaifa Neralwala, Owen Rambow, Mona Diab

In this paper, we describe a multi-layer annotation scheme for social power relations that are recognizable from online written interactions.
