no code implementations • LREC 2022 • Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi
Age-related stereotypes are pervasive in our society, and yet have been under-studied in the NLP community.
no code implementations • NAACL (TrustNLP) 2022 • Kathleen C. Fraser, Svetlana Kiritchenko, Esma Balkir
In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference between right and wrong.
no code implementations • 18 Apr 2024 • Isar Nejadgholi, Kathleen C. Fraser, Anna Kerkhof, Svetlana Kiritchenko
The strategies of counter-facts and broadening universals (i.e., stating that anyone can have a trait regardless of group membership) emerged as the most robust approaches, while humour, perspective-taking, counter-examples, and empathy for the speaker were perceived as less effective.
no code implementations • 29 Mar 2024 • Phillip Howard, Anahita Bhiwandiwalla, Kathleen C. Fraser, Svetlana Kiritchenko
We comprehensively evaluate the text produced by different LVLMs under this counterfactual generation setting and find that social attributes such as race, gender, and physical characteristics depicted in input images can significantly influence toxicity and the generation of competency-associated words.
1 code implementation • 8 Feb 2024 • Kathleen C. Fraser, Svetlana Kiritchenko
Following on recent advances in large language models (LLMs) and subsequent chat models, a new wave of large vision-language models (LVLMs) has emerged.
1 code implementation • 4 Jul 2023 • Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser, Esma Balkir
Classifiers tend to learn a false causal relationship between an over-represented concept and a label, which can result in over-reliance on the concept and compromised classification accuracy.
no code implementations • 24 Mar 2023 • Georgina Curto, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser
The criminalization of poverty has been widely denounced as a collective bias against the most vulnerable.
no code implementations • 14 Feb 2023 • Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi
As text-to-image systems continue to grow in popularity with the general public, questions have arisen about bias and diversity in the generated images.
1 code implementation • 19 Oct 2022 • Isar Nejadgholi, Esma Balkir, Kathleen C. Fraser, Svetlana Kiritchenko
For a multi-class toxic language classifier, we leverage a concept-based explanation framework to calculate the sensitivity of the model to the concept of sentiment, which has been used before as a salient feature for toxic language detection.
no code implementations • NAACL (TrustNLP) 2022 • Esma Balkir, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser
In this paper, we briefly review trends in explainability and fairness in NLP research, identify the current practices in which explainability methods are applied to detect and mitigate bias, and investigate the barriers preventing XAI methods from being used more widely in tackling fairness issues.
1 code implementation • NAACL 2022 • Esma Balkir, Isar Nejadgholi, Kathleen C. Fraser, Svetlana Kiritchenko
We present a novel feature attribution method for explaining text classifiers, and analyze it in the context of hate speech detection.
1 code implementation • ACL 2022 • Isar Nejadgholi, Kathleen C. Fraser, Svetlana Kiritchenko
Robustness of machine learning models on ever-changing real-world data is critical, especially for applications affecting human well-being such as content moderation.
no code implementations • 18 Oct 2021 • Kathleen C. Fraser, Majid Komeili
We then present an overview of the preliminary results from pilot studies on active and passive smart home speech sensing for the measurement of cognitive health, and conclude with some recommendations and challenge statements for the next wave of work in this area, to help overcome both technical and ethical barriers to success.
no code implementations • ACL 2021 • Kathleen C. Fraser, Isar Nejadgholi, Svetlana Kiritchenko
In this work, we present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM), a comprehensive causal theory from social psychology.
no code implementations • 22 Dec 2020 • Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser
The pervasiveness of abusive content on the internet can lead to severe psychological and physical harm.
no code implementations • WS 2020 • Isar Nejadgholi, Kathleen C. Fraser, Berry de Bruijn
When comparing entities extracted by a medical entity recognition system with gold standard annotations over a test set, two types of mismatches might occur: label mismatch or span mismatch.
no code implementations • WS 2019 • Isar Nejadgholi, Kathleen C. Fraser, Berry De Bruijn, Muqun Li, Astha LaPlante, Khaldoun Zine El Abidine
While producing a state-of-the-art result for the i2b2 2010 task (F1 = 0.90), our results on MedMentions are significantly lower (F1 = 0.63), suggesting there is still plenty of opportunity for improvement on this new data.
no code implementations • NAACL 2019 • Kathleen C. Fraser, Nicklas Linz, Bai Li, Kristina Lundholm Fors, Frank Rudzicz, Alexandra König, Jan Alexandersson, Philippe Robert, Dimitrios Kokkinakis
There is growing evidence that changes in speech and language may be early markers of dementia, but much of the previous NLP work in this area has been limited by the size of the available datasets.
no code implementations • WS 2019 • Kathleen C. Fraser, Frauke Zeller, David Harris Smith, Saif Mohammad, Frank Rudzicz
In 2014, a chatty but immobile robot called hitchBOT set out to hitchhike across Canada.
no code implementations • WS 2019 • Kathleen C. Fraser, Nicklas Linz, Hali Lindsay, Alexandra König
Increased access to large datasets has driven progress in NLP.
no code implementations • EMNLP 2017 • Kathleen C. Fraser, Kristina Lundholm Fors, Dimitrios Kokkinakis, Arto Nordlund
We present a machine learning analysis of eye-tracking data for the detection of mild cognitive impairment, a decline in cognitive abilities that is associated with an increased risk of developing dementia.