no code implementations • EACL (HCINLP) 2021 • Alyssa Lees, Daniel Borkan, Ian Kivlichan, Jorge Nario, Tesh Goyal
We study the task of labeling covert or veiled toxicity in online conversations.
no code implementations • SemEval (NAACL) 2022 • Elisabetta Fersini, Francesca Gasparini, Giulia Rizzi, Aurora Saibene, Berta Chulvi, Paolo Rosso, Alyssa Lees, Jeffrey Sorensen
The paper describes the SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification (MAMI), which explores the detection of misogynous memes on the web by taking advantage of available texts and images.
no code implementations • NAACL (WOAH) 2022 • Alyssa Chvasta, Alyssa Lees, Jeffrey Sorensen, Lucy Vasserman, Nitesh Goyal
In an era of increasingly large pre-trained language models, knowledge distillation is a powerful tool for transferring information from a large model to a smaller one.
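The entry above describes knowledge distillation in general terms. As an illustration only (not the paper's implementation), a minimal NumPy sketch of the standard temperature-scaled distillation loss, where a student's softened output distribution is pulled toward a teacher's:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give zero loss; any mismatch gives a positive loss.
print(round(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # 0.0
```

In practice this term is usually mixed with the ordinary cross-entropy on hard labels; the function names and temperature value here are illustrative assumptions, not details from the paper.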
no code implementations • 22 Feb 2022 • Alyssa Lees, Vinh Q. Tran, Yi Tay, Jeffrey Sorensen, Jai Gupta, Donald Metzler, Lucy Vasserman
As such, it is crucial to develop models that are effective across a diverse range of languages, usages, and styles.
1 code implementation • EMNLP 2021 • Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, Huan Sun
We present ReasonBert, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts.
Ranked #1 on Semantic Parsing on GraphQuestions
no code implementations • COLING 2020 • Alyssa Lees, Chris Welty, Shubin Zhao, Jacek Korycki, Sara Mc Carthy
A common step in developing an understanding of a vertical domain (e.g., shopping, dining, movies, medicine) is curating a taxonomy of categories specific to the domain.
1 code implementation • 26 Jun 2020 • Xiang Deng, Huan Sun, Alyssa Lees, You Wu, Cong Yu
In this paper, we present TURL, a novel framework that introduces the pre-training/fine-tuning paradigm to relational Web tables.
Ranked #1 on Column Type Annotation on WikipediaGS-CTA
no code implementations • 30 Oct 2019 • Ananth Balashankar, Alyssa Lees, Chris Welty, Lakshminarayanan Subramanian
The potential for learned models to amplify existing societal biases has been broadly recognized.
no code implementations • 24 Oct 2019 • Ananth Balashankar, Alyssa Lees
We demonstrate that, for a classifier to approach a definition of fairness in terms of specific sensitive variables, adequate subgroup population samples must exist and the model dimensionality must be aligned with the subgroup population distributions.