Search Results for author: Kathleen C. Fraser

Found 27 papers, 5 papers with code

Extracting Age-Related Stereotypes from Social Media Texts

no code implementations LREC 2022 Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi

Age-related stereotypes are pervasive in our society, and yet have been under-studied in the NLP community.

Does Moral Code have a Moral Code? Probing Delphi’s Moral Philosophy

no code implementations NAACL (TrustNLP) 2022 Kathleen C. Fraser, Svetlana Kiritchenko, Esma Balkir

In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference between right and wrong.

Philosophy

Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes

no code implementations 18 Apr 2024 Isar Nejadgholi, Kathleen C. Fraser, Anna Kerkhof, Svetlana Kiritchenko

The strategies of counter-facts and broadening universals (i.e., stating that anyone can have a trait regardless of group membership) emerged as the most robust approaches, while humour, perspective-taking, counter-examples, and empathy for the speaker were perceived as less effective.

Uncovering Bias in Large Vision-Language Models with Counterfactuals

no code implementations 29 Mar 2024 Phillip Howard, Anahita Bhiwandiwalla, Kathleen C. Fraser, Svetlana Kiritchenko

We comprehensively evaluate the text produced by different LVLMs under this counterfactual generation setting and find that social attributes such as race, gender, and physical characteristics depicted in input images can significantly influence toxicity and the generation of competency-associated words.

Counterfactual Question Answering +1
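
As a rough illustration of this kind of counterfactual audit (a sketch only: the word list, function names, and caption data below are invented for demonstration, not the paper's lexicon or pipeline), one can compare how often competency-associated words appear in captions that an LVLM generates for counterfactual image pairs:

```python
# Hedged sketch, not the paper's actual pipeline: given captions produced
# for counterfactual image pairs that differ only in a depicted social
# attribute, compare the rate of competency-associated words.
# This word list is illustrative, not the lexicon used in the paper.
COMPETENCY_WORDS = {"intelligent", "skilled", "competent", "capable", "expert"}

def competency_rate(captions: list[str]) -> float:
    """Fraction of captions containing at least one competency-associated word."""
    hits = sum(
        any(word in caption.lower().split() for word in COMPETENCY_WORDS)
        for caption in captions
    )
    return hits / len(captions) if captions else 0.0

# Example: captions generated for the same scene with counterfactual subjects.
captions_group_a = ["a skilled doctor reviewing a chart", "a person at a desk"]
captions_group_b = ["a person in a white coat", "a worker at a desk"]
print(competency_rate(captions_group_a) - competency_rate(captions_group_b))
```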

Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images

1 code implementation 8 Feb 2024 Kathleen C. Fraser, Svetlana Kiritchenko

Following on recent advances in large language models (LLMs) and subsequent chat models, a new wave of large vision-language models (LVLMs) has emerged.

Image Captioning Question Answering +2

Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers

1 code implementation 4 Jul 2023 Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser, Esma Balkir

Classifiers tend to learn a false causal relationship between an over-represented concept and a label, which can result in over-reliance on the concept and compromised classification accuracy.

Abusive Language

The crime of being poor

no code implementations 24 Mar 2023 Georgina Curto, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser

The criminalization of poverty has been widely denounced as a collective bias against the most vulnerable.

A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?

no code implementations 14 Feb 2023 Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi

As text-to-image systems continue to grow in popularity with the general public, questions have arisen about bias and diversity in the generated images.

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

1 code implementation 19 Oct 2022 Isar Nejadgholi, Esma Balkir, Kathleen C. Fraser, Svetlana Kiritchenko

For a multi-class toxic language classifier, we leverage a concept-based explanation framework to calculate the sensitivity of the model to the concept of sentiment, which has been used before as a salient feature for toxic language detection.

Fairness
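
For readers unfamiliar with concept-based explanations, here is a minimal TCAV-style sketch of measuring a classifier's sensitivity to a concept such as sentiment. The random arrays stand in for real layer activations and logit gradients, and this illustrates the general technique, not the paper's exact implementation:

```python
# Minimal TCAV-style sketch, assuming access to a classifier's internal
# activations and class-logit gradients at some layer; the random arrays
# below are synthetic stand-ins for real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, (100, 64))   # activations for "positive sentiment" texts
random_acts  = rng.normal(0.0, 1.0, (100, 64))   # activations for random texts

# The concept activation vector (CAV) is the normal of a linear boundary
# separating concept examples from random examples in activation space.
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]),
    np.array([1] * 100 + [0] * 100),
)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Sensitivity: fraction of inputs whose class-logit gradient has a positive
# directional derivative along the CAV (i.e., the concept pushes the score up).
logit_grads = rng.normal(0.1, 1.0, (200, 64))    # stand-in for real gradients
tcav_score = float(np.mean(logit_grads @ cav > 0))
print(f"TCAV score for the sentiment concept: {tcav_score:.2f}")
```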

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models

no code implementations NAACL (TrustNLP) 2022 Esma Balkir, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser

In this paper, we briefly review trends in explainability and fairness in NLP research, identify the current practices in which explainability methods are applied to detect and mitigate bias, and investigate the barriers preventing XAI methods from being used more widely in tackling fairness issues.

Explainable Artificial Intelligence (XAI) +1

Improving Generalizability in Implicitly Abusive Language Detection with Concept Activation Vectors

1 code implementation ACL 2022 Isar Nejadgholi, Kathleen C. Fraser, Svetlana Kiritchenko

Robustness of machine learning models on ever-changing real-world data is critical, especially for applications affecting human well-being such as content moderation.

Abuse Detection Abusive Language

Measuring Cognitive Status from Speech in a Smart Home Environment

no code implementations 18 Oct 2021 Kathleen C. Fraser, Majid Komeili

We then present an overview of the preliminary results from pilot studies on active and passive smart home speech sensing for the measurement of cognitive health, and conclude with some recommendations and challenge statements for the next wave of work in this area, to help overcome both technical and ethical barriers to success.

Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model

no code implementations ACL 2021 Kathleen C. Fraser, Isar Nejadgholi, Svetlana Kiritchenko

In this work, we present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM), a comprehensive causal theory from social psychology.
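
The SCM characterizes stereotypes along warmth and competence dimensions. A minimal sketch of one plausible embedding-based scoring approach follows; the seed words, toy vectors, and function names are illustrative assumptions, not the paper's released method:

```python
# Illustrative sketch: score a word along SCM dimensions by projecting its
# embedding onto warmth and competence direction vectors built from seed
# words. The seed lists and toy embeddings here are assumptions for demo
# purposes only.
import numpy as np

def scm_direction(embed, positive_seeds, negative_seeds):
    """Direction from the low pole to the high pole of an SCM dimension."""
    pos = np.mean([embed[w] for w in positive_seeds], axis=0)
    neg = np.mean([embed[w] for w in negative_seeds], axis=0)
    direction = pos - neg
    return direction / np.linalg.norm(direction)

def scm_scores(word, embed, warmth_dir, competence_dir):
    """Project a word's embedding onto the warmth and competence axes."""
    v = embed[word]
    return float(v @ warmth_dir), float(v @ competence_dir)

# Toy 4-d "embeddings", purely for demonstration.
embed = {
    "friendly":    np.array([1.0, 0.2, 0.0, 0.1]),
    "cold":        np.array([-1.0, 0.1, 0.0, 0.0]),
    "capable":     np.array([0.1, 1.0, 0.2, 0.0]),
    "incompetent": np.array([0.0, -1.0, 0.1, 0.0]),
    "nurse":       np.array([0.8, 0.3, 0.1, 0.0]),
}
warmth = scm_direction(embed, ["friendly"], ["cold"])
competence = scm_direction(embed, ["capable"], ["incompetent"])
print(scm_scores("nurse", embed, warmth, competence))
```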

Extensive Error Analysis and a Learning-Based Evaluation of Medical Entity Recognition Systems to Approximate User Experience

no code implementations WS 2020 Isar Nejadgholi, Kathleen C. Fraser, Berry de Bruijn

When comparing entities extracted by a medical entity recognition system with gold standard annotations over a test set, two types of mismatches might occur: label mismatch or span mismatch.

Entity Extraction using GAN NER
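
To make the two mismatch types concrete, here is a minimal sketch, assuming entities are represented as (start, end, label) tuples; the names are illustrative, not the paper's code:

```python
# Classify a predicted entity against a gold entity as an exact match,
# label mismatch (right span, wrong type), or span mismatch (right type,
# overlapping but wrong boundaries).
def mismatch_type(pred, gold):
    (p_start, p_end, p_label), (g_start, g_end, g_label) = pred, gold
    same_span = (p_start, p_end) == (g_start, g_end)
    overlaps = p_start < g_end and g_start < p_end
    if same_span and p_label == g_label:
        return "exact match"
    if same_span:
        return "label mismatch"
    if overlaps and p_label == g_label:
        return "span mismatch"
    return "no match"

print(mismatch_type((0, 3, "DRUG"), (0, 3, "DISEASE")))  # label mismatch
print(mismatch_type((0, 4, "DRUG"), (0, 3, "DRUG")))     # span mismatch
```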

Recognizing UMLS Semantic Types with Deep Learning

no code implementations WS 2019 Isar Nejadgholi, Kathleen C. Fraser, Berry de Bruijn, Muqun Li, Astha LaPlante, Khaldoun Zine El Abidine

While producing a state-of-the-art result for the i2b2 2010 task (F1 = 0.90), our results on MedMentions are significantly lower (F1 = 0.63), suggesting there is still plenty of opportunity for improvement on this new data.

Entity Linking Relation Extraction +2

Extracting UMLS Concepts from Medical Text Using General and Domain-Specific Deep Learning Models

no code implementations 3 Oct 2019 Kathleen C. Fraser, Isar Nejadgholi, Berry de Bruijn, Muqun Li, Astha LaPlante, Khaldoun Zine El Abidine

While producing a state-of-the-art result for the i2b2 2010 task (F1 = 0.90), our results on MedMentions are significantly lower (F1 = 0.63), suggesting there is still plenty of opportunity for improvement on this new data.

Entity Linking Relation Extraction +2

Multilingual prediction of Alzheimer's disease through domain adaptation and concept-based language modelling

no code implementations NAACL 2019 Kathleen C. Fraser, Nicklas Linz, Bai Li, Kristina Lundholm Fors, Frank Rudzicz, Alexandra König, Jan Alexandersson, Philippe Robert, Dimitrios Kokkinakis

There is growing evidence that changes in speech and language may be early markers of dementia, but much of the previous NLP work in this area has been limited by the size of the available datasets.

Domain Adaptation Language Modelling

An analysis of eye-movements during reading for the detection of mild cognitive impairment

no code implementations EMNLP 2017 Kathleen C. Fraser, Kristina Lundholm Fors, Dimitrios Kokkinakis, Arto Nordlund

We present a machine learning analysis of eye-tracking data for the detection of mild cognitive impairment, a decline in cognitive abilities that is associated with an increased risk of developing dementia.

BIG-bench Machine Learning
