Search Results for author: Esma Balkir

Found 11 papers, 3 papers with code

Does Moral Code Have a Moral Code? Probing Delphi's Moral Philosophy

no code implementations • NAACL (TrustNLP) 2022 • Kathleen C. Fraser, Svetlana Kiritchenko, Esma Balkir

In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference between right and wrong.

Philosophy

Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers

1 code implementation • 4 Jul 2023 • Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser, Esma Balkir

Classifiers tend to learn a false causal relationship between an over-represented concept and a label, which can result in over-reliance on the concept and compromised classification accuracy.

Abusive Language

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models

no code implementations • 22 May 2023 • Seraphina Goldfarb-Tarrant, Eddie Ungless, Esma Balkir, Su Lin Blodgett

Bias research in NLP seeks to analyse models for social biases, thus helping NLP practitioners uncover, measure, and mitigate social harms.

Experimental Design

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

1 code implementation • 19 Oct 2022 • Isar Nejadgholi, Esma Balkir, Kathleen C. Fraser, Svetlana Kiritchenko

For a multi-class toxic language classifier, we leverage a concept-based explanation framework to calculate the sensitivity of the model to the concept of sentiment, which has been used before as a salient feature for toxic language detection.

Fairness
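
The concept-sensitivity idea above can be sketched as a TCAV-style computation: derive a concept direction in activation space, then measure how strongly a class logit moves along it. Everything below is a toy illustration with invented activations and weights (`concept_acts`, `random_acts`, `w` are all hypothetical), not the paper's actual model or data:

```python
# Toy TCAV-style concept sensitivity, assuming a linear activation space.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activations: concept examples (e.g. positive-sentiment texts)
# versus random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

# Concept Activation Vector (CAV): a direction separating concept from random
# activations, here approximated by the difference of class means.
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Sensitivity of a class logit to the concept: the directional derivative of
# the logit along the CAV. For a linear head with weights w, the gradient is w.
w = rng.normal(size=8)        # hypothetical classifier weights for one class
sensitivity = float(w @ cav)  # positive => the concept pushes toward the class

# TCAV score: fraction of examples with positive sensitivity (constant here,
# since a linear head has the same gradient for every input).
tcav_score = 1.0 if sensitivity > 0 else 0.0
print(sensitivity, tcav_score)
```

In a real setting the CAV would come from a linear probe on a layer of the trained classifier, and the gradient would be computed per example with respect to that layer's activations.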

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models

no code implementations NAACL (TrustNLP) 2022 Esma Balkir, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser

In this paper, we briefly review trends in explainability and fairness in NLP research, identify the current practices in which explainability methods are applied to detect and mitigate bias, and investigate the barriers preventing XAI methods from being used more widely in tackling fairness issues.

Explainable Artificial Intelligence (XAI) +1

Tensors over Semirings for Latent-Variable Weighted Logic Programs

no code implementations • WS 2020 • Esma Balkir, Daniel Gildea, Shay Cohen

Semiring parsing is an elegant framework for describing parsers by using semiring-weighted logic programs.
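
The core idea of semiring weighting can be illustrated with a tiny dynamic program that is generic over the semiring: the same program computes different quantities depending on which semiring it runs under. This is a hypothetical sketch (the graph, weights, and names are invented), not code from the paper:

```python
# Minimal semiring-weighted dynamic program: swapping the semiring changes
# what the same recursion computes (total probability vs. cheapest cost).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Semiring:
    plus: Callable[[float, float], float]   # combines alternative derivations
    times: Callable[[float, float], float]  # combines sub-derivations
    zero: float                             # identity for plus
    one: float                              # identity for times

INSIDE = Semiring(plus=lambda a, b: a + b, times=lambda a, b: a * b,
                  zero=0.0, one=1.0)
TROPICAL = Semiring(plus=min, times=lambda a, b: a + b,
                    zero=float("inf"), one=0.0)

def path_weight(edges, n, s, t, K):
    """Semiring-total weight of all s->t paths in a DAG (nodes in topological order)."""
    w = [K.zero] * n
    w[s] = K.one
    for u in range(n):
        for (a, b, weight) in edges:
            if a == u:
                w[b] = K.plus(w[b], K.times(w[a], weight))
    return w[t]

# Tiny DAG with two routes: 0 -> 1 -> 3 and 0 -> 2 -> 3.
probs = [(0, 1, 0.6), (1, 3, 0.5), (0, 2, 0.4), (2, 3, 1.0)]
costs = [(0, 1, 2.0), (1, 3, 1.0), (0, 2, 5.0), (2, 3, 1.0)]

print(path_weight(probs, 4, 0, 3, INSIDE))    # total probability: ~= 0.7
print(path_weight(costs, 4, 0, 3, TROPICAL))  # cheapest cost: 3.0
```

The same pattern underlies semiring parsing: one weighted deduction system, many interpretations (Viterbi, inside, counting) obtained by changing the semiring.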

Sentence Entailment in Compositional Distributional Semantics

no code implementations • 14 Dec 2015 • Esma Balkir, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh

In categorical compositional distributional semantics, phrase and sentence representations are functions of their grammatical structure and representations of the words therein.

Sentence

Distributional Sentence Entailment Using Density Matrices

no code implementations • 22 Jun 2015 • Esma Balkir, Mehrnoosh Sadrzadeh, Bob Coecke

The categorical compositional distributional model of Coecke et al. (2010) suggests a way to combine the grammatical composition of formal, type-logical models with the corpus-based, empirical word representations of distributional semantics.

Lexical Entailment, Sentence
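
One way to make entailment between density-matrix representations concrete is the Löwner order: A entails B when B − A is positive semidefinite. The sketch below is a toy illustration of that binary check with invented context vectors; it is not the paper's exact (graded) entailment measure:

```python
# Toy Löwner-order entailment check on density-like matrices built from
# invented context vectors (illustrative only).
import numpy as np

def density(contexts):
    """Sum of outer products v v^T over a word's context vectors (unnormalized)."""
    d = contexts.shape[1]
    rho = np.zeros((d, d))
    for v in contexts:
        rho += np.outer(v, v)
    return rho

def loewner_entails(rho_a, rho_b, tol=1e-9):
    """True if rho_b - rho_a is positive semidefinite, i.e. A is below B."""
    return bool(np.linalg.eigvalsh(rho_b - rho_a).min() >= -tol)

rng = np.random.default_rng(1)
cat_ctx = rng.normal(size=(3, 4))         # hypothetical contexts for "cat"
extra = rng.normal(size=(2, 4))           # contexts seen only with "animal"
animal_ctx = np.vstack([cat_ctx, extra])  # "animal" subsumes "cat" contexts

rho_cat, rho_animal = density(cat_ctx), density(animal_ctx)
print(loewner_entails(rho_cat, rho_animal))   # True: "cat" entails "animal"
print(loewner_entails(rho_animal, rho_cat))   # False: the reverse fails
```

Entailment holds by construction here, since the more general word's matrix is the specific word's matrix plus additional positive-semidefinite mass.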
