no code implementations • COLING 2022 • Jad Kabbara, Jackie Chi Kit Cheung
Presuppositions are assumptions that are taken for granted by an utterance, and identifying them is key to a pragmatic interpretation of language.
no code implementations • Findings (EMNLP) 2021 • Jad Kabbara, Jackie Chi Kit Cheung
Moreover, based on an automatic evaluation study, we provide evidence for our system’s ability to generate linguistic decisions that lead to improved extractive summaries.
1 code implementation • 26 Feb 2024 • Hang Jiang, Xiajie Zhang, Robert Mahari, Daniel Kessler, Eric Ma, Tal August, Irene Li, Alex 'Sandy' Pentland, Yoon Kim, Jad Kabbara, Deb Roy
Finally, we find that learning with stories shows a higher retention rate for non-native speakers in the follow-up assessment.
1 code implementation • 25 Oct 2023 • Shayne Longpre, Robert Mahari, Anthony Chen, Naana Obeng-Marnu, Damien Sileo, William Brannon, Niklas Muennighoff, Nathan Khazam, Jad Kabbara, Kartik Perisetla, Xinyi Wu, Enrico Shippole, Kurt Bollacker, Tongshuang Wu, Luis Villa, Sandy Pentland, Sara Hooker
The race to train language models on vast, diverse, and inconsistently documented datasets has raised pressing concerns about the legal and ethical risks for practitioners.
1 code implementation • 23 May 2023 • William Brannon, Suyash Fulay, Hang Jiang, Wonjune Kang, Brandon Roy, Jad Kabbara, Deb Roy
We propose ConGraT (Contrastive Graph-Text pretraining), a general, self-supervised method for jointly learning separate representations of texts and nodes in a parent (or "supervening") graph, where each text is associated with one of the nodes.
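A minimal sketch of the kind of contrastive objective such joint text-node pretraining typically uses: a symmetric InfoNCE-style loss that pulls each text embedding toward the embedding of its associated node. This is an illustrative NumPy implementation under generic assumptions, not the actual ConGraT objective, whose similarity targets may differ.

```python
import numpy as np

def contrastive_loss(text_emb, node_emb, temperature=0.1):
    """Symmetric InfoNCE-style loss: the i-th text is treated as the
    positive match for the i-th node, all others as negatives.
    (Generic sketch; not the exact ConGraT formulation.)"""
    # L2-normalize so dot products are cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    n = node_emb / np.linalg.norm(node_emb, axis=1, keepdims=True)
    logits = t @ n.T / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(t))          # diagonal entries are the positives

    def xent(l):
        # numerically stable softmax cross-entropy on the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the text-to-node and node-to-text directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

When the two embedding sets are correctly paired, the diagonal similarities dominate and the loss is low; mismatched pairings raise it, which is what drives the representations into a shared space.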
1 code implementation • 23 May 2023 • Robert Morabito, Jad Kabbara, Ali Emami
Debiasing methods that seek to mitigate the tendency of Language Models (LMs) to occasionally output toxic or inappropriate text have recently gained traction.
1 code implementation • 4 May 2023 • Hang Jiang, Xiajie Zhang, Xubo Cao, Cynthia Breazeal, Deb Roy, Jad Kabbara
Despite the many use cases for large language models (LLMs) in creating personalized chatbots, there has been limited research on evaluating the extent to which the behaviors of personalized LLMs accurately and consistently reflect specific personality traits.
no code implementations • NAACL 2019 • Jad Kabbara
Semantics and pragmatics are two complementary and intertwined aspects of meaning in language.
no code implementations • ACL 2018 • Andre Cianflone, Yulan Feng, Jad Kabbara, Jackie Chi Kit Cheung
We introduce the novel task of predicting adverbial presupposition triggers, which is useful for natural language generation tasks such as summarization and dialogue systems.
no code implementations • 11 Jun 2018 • Andre Cianflone, Yulan Feng, Jad Kabbara, Jackie Chi Kit Cheung
We introduce the task of predicting adverbial presupposition triggers such as "also" and "again".
no code implementations • COLING 2016 • Jad Kabbara, Yulan Feng, Jackie Chi Kit Cheung
We examine the potential of recurrent neural networks for handling pragmatic inferences involving complex contextual cues for the task of article usage prediction.