Search Results for author: Vinodkumar Prabhakaran

Found 43 papers, 2 papers with code

Online Abuse and Human Rights: WOAH Satellite Session at RightsCon 2020

no code implementations • EMNLP (ALW) 2020 • Vinodkumar Prabhakaran, Zeerak Waseem, Seyi Akiwowo, Bertie Vidgen

In 2020, the Workshop on Online Abuse and Harms (WOAH) held a satellite panel at RightsCon 2020, an international human rights conference.

Natural Language Processing

Evaluation Gaps in Machine Learning Practice

no code implementations • 11 May 2022 • Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller, Vinodkumar Prabhakaran

Forming a reliable judgement of a machine learning (ML) model's appropriateness for an application ecosystem is critical for its responsible use, and requires considering a broad range of factors including harms, benefits, and responsibilities.

Natural Language Processing

Thinking Beyond Distributions in Testing Machine Learned Models

no code implementations • 6 Dec 2021 • Negar Rostamzadeh, Ben Hutchinson, Christina Greer, Vinodkumar Prabhakaran

Testing practices within the machine learning (ML) community have centered around assessing a learned model's predictive performance measured against a test dataset, often drawn from the same distribution as the training dataset.

Fairness

Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations

no code implementations • 12 Oct 2021 • Aida Mostafazadeh Davani, Mark Díaz, Vinodkumar Prabhakaran

Majority voting and averaging are common approaches employed to resolve annotator disagreements and derive single ground truth labels from multiple annotations.
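
For concreteness, the two aggregation baselines named above can be sketched in a few lines. This is a minimal illustration with an invented toy annotation matrix, not code from the paper.

```python
from collections import Counter

# Toy annotation matrix: one row per item, one entry per annotator.
# Labels are invented for illustration (1 = positive class, 0 = negative).
annotations = [
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
]

def majority_vote(labels):
    """Collapse an item's annotations to the most frequent label."""
    return Counter(labels).most_common(1)[0][0]

def average(labels):
    """Collapse an item's annotations to their mean (for scalar ratings)."""
    return sum(labels) / len(labels)

for labels in annotations:
    print(majority_vote(labels), round(average(labels), 2))
```

Both reduce each item to a single value; the paper's question is what is lost when annotators genuinely disagree.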

On Releasing Annotator-Level Labels and Information in Datasets

no code implementations • EMNLP (LAW, DMR) 2021 • Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, Mark Díaz

A common practice in building NLP datasets, especially using crowd-sourced annotations, involves obtaining multiple annotator judgements on the same data instances, which are then flattened to produce a single "ground truth" label or score, through majority voting, averaging, or adjudication.
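
As a sketch of the alternative the title suggests, a released record could carry each annotator's judgement alongside the flattened label. The field names below are hypothetical, not a schema from the paper.

```python
import json

# Hypothetical record that preserves annotator-level labels instead of
# releasing only the flattened "ground truth".
record = {
    "text": "example data instance",
    "ratings": [
        {"rater_id": "r1", "label": 1},
        {"rater_id": "r2", "label": 0},
        {"rater_id": "r3", "label": 1},
    ],
    "aggregate_label": 1,  # majority vote over the ratings above
}
print(json.dumps(record, indent=2, ensure_ascii=False))
```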

Detecting Cross-Geographic Biases in Toxicity Modeling on Social Media

no code implementations • WNUT (ACL) 2021 • Sayan Ghosh, Dylan Baker, David Jurgens, Vinodkumar Prabhakaran

Online social media platforms increasingly rely on Natural Language Processing (NLP) techniques to detect abusive content at scale in order to mitigate the harms it causes to their users.

Bias Detection • Natural Language Processing

Re-imagining Algorithmic Fairness in India and Beyond

no code implementations • 25 Jan 2021 • Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran

Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.

Fairness

Non-portability of Algorithmic Fairness in India

no code implementations • 3 Dec 2020 • Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran

Conventional algorithmic fairness is Western in its sub-groups, values, and optimizations.

Fairness • Translation

Learning to Recognize Dialect Features

no code implementations • NAACL 2021 • Dorottya Demszky, Devyani Sharma, Jonathan H. Clark, Vinodkumar Prabhakaran, Jacob Eisenstein

Evaluation on a test set of 22 dialect features of Indian English demonstrates that these models learn to recognize many features with high accuracy, and that a few minimal pairs can be as effective for training as thousands of labeled examples.
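
To make the minimal-pair idea concrete: a pair is two sentences that differ only in the presence of the dialect feature. The feature, sentences, and linear classifier below are invented for illustration (the paper works with pretrained models), but they show the contrastive data format.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical minimal pairs for one dialect feature (clause-final
# focus "only"): each pair differs only in whether the feature appears.
pairs = [
    ("She was here yesterday only.", 1),  # feature present
    ("She was here yesterday.", 0),       # feature absent
    ("We are leaving tomorrow only.", 1),
    ("We are leaving tomorrow.", 0),
]
texts, labels = zip(*pairs)

vectorizer = CountVectorizer(ngram_range=(1, 2))
clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

print(clf.predict(vectorizer.transform(["He left an hour ago only."])))
```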

Extending the Machine Learning Abstraction Boundary: A Complex Systems Approach to Incorporate Societal Context

no code implementations • 17 Jun 2020 • Donald Martin Jr., Vinodkumar Prabhakaran, Jill Kuhlberg, Andrew Smart, William S. Isaac

Machine learning (ML) fairness research tends to focus primarily on mathematically based interventions on often opaque algorithms or models and/or their immediate inputs and outputs.

Fairness

Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics

no code implementations • 15 May 2020 • Donald Martin Jr., Vinodkumar Prabhakaran, Jill Kuhlberg, Andrew Smart, William S. Isaac

Recent research on algorithmic fairness has highlighted that the problem formulation phase of ML system development can be a key source of bias that has significant downstream impacts on ML system fairness outcomes.

Fairness

Social Biases in NLP Models as Barriers for Persons with Disabilities

no code implementations • ACL 2020 • Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, Stephen Denuyl

Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models.

Sentiment Analysis

Perturbation Sensitivity Analysis to Detect Unintended Model Biases

no code implementations • IJCNLP 2019 • Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell

Data-driven statistical Natural Language Processing (NLP) techniques leverage large amounts of language data to build models that can understand language.

Natural Language Processing • Sentiment Analysis
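
The technique in the title can be sketched as follows: hold a sentence template fixed, substitute different person names or identity terms, and measure how much the model's score moves. The scoring function here is a deterministic stand-in for a real classifier, and the templates and names are invented for illustration.

```python
import statistics

def model_score(text):
    # Stand-in for a real model (e.g., a toxicity or sentiment classifier);
    # swap in actual model predictions to run a real analysis.
    return sum(map(ord, text)) % 100 / 100.0

templates = ["{name} is a doctor.", "I met {name} at the park yesterday."]
names = ["Alice", "Ahmed", "Priya", "Wei"]  # perturbation substitutions

for template in templates:
    scores = [model_score(template.format(name=n)) for n in names]
    # Sensitivity: spread of scores when only the name changes.
    print(template, "| range:", round(max(scores) - min(scores), 3),
          "| std dev:", round(statistics.stdev(scores), 3))
```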

Power Networks: A Novel Neural Architecture to Predict Power Relations

no code implementations • COLING 2018 • Michelle Lam, Catherina Xu, Angela Kong, Vinodkumar Prabhakaran

Can language analysis reveal the underlying social power relations that exist between participants of an interaction?

Socially Responsible NLP

no code implementations • NAACL 2018 • Yulia Tsvetkov, Vinodkumar Prabhakaran, Rob Voigt

As language technologies have become increasingly prevalent, there is a growing awareness that decisions we make about our data, methods, and tools are often tied up with their impact on people and societies.

Decision Making

Author Commitment and Social Power: Automatic Belief Tagging to Infer the Social Context of Interactions

no code implementations • NAACL 2018 • Vinodkumar Prabhakaran, Premkumar Ganeshkumar, Owen Rambow

Understanding how social power structures affect the way we interact with one another is of great interest to social scientists who want to answer fundamental questions about human behavior, as well as to computer scientists who want to build automatic methods to infer the social contexts of interactions.

Detecting Institutional Dialog Acts in Police Traffic Stops

no code implementations • TACL 2018 • Vinodkumar Prabhakaran, Camilla Griffiths, Hang Su, Prateek Verma, Nelson Morgan, Jennifer L. Eberhardt, Dan Jurafsky

We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops.

Speech Recognition

Dialog Structure Through the Lens of Gender, Gender Environment, and Power

no code implementations • 12 Jun 2017 • Vinodkumar Prabhakaran, Owen Rambow

In this paper, we study the interaction of power, gender, and dialog behavior in organizational interactions.

A Corpus of Wikipedia Discussions: Over the Years, with Topic, Power and Gender Labels

no code implementations • LREC 2016 • Vinodkumar Prabhakaran, Owen Rambow

In order to gain a deep understanding of how social context manifests in interactions, we need data that represents interactions from a large community of people over a long period of time, capturing different aspects of social context.

Annotations for Power Relations on Email Threads

no code implementations • LREC 2012 • Vinodkumar Prabhakaran, Huzaifa Neralwala, Owen Rambow, Mona Diab

In this paper, we describe a multi-layer annotation scheme for social power relations that are recognizable from online written interactions.
