no code implementations • ACL (WOAH) 2021 • Lambert Mathias, Shaoliang Nie, Aida Mostafazadeh Davani, Douwe Kiela, Vinodkumar Prabhakaran, Bertie Vidgen, Zeerak Waseem
We present the results and main findings of the shared task at WOAH 5 on hateful memes detection.
1 code implementation • ACL 2022 • Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Lambert Mathias, Marzieh Saeidi, Veselin Stoyanov, Majid Yazdani
Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze format that the PLM can score.
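For context, here is a minimal sketch of the prompt-and-verbalizer cloze scoring that this paper seeks to avoid; the template, verbalizer words, and model checkpoint are illustrative assumptions, not the paper's setup:

```python
# Sketch: classify by scoring verbalizer tokens at a [MASK] position.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def cloze_score(text, verbalizers):
    # Wrap the example in a hand-written template ending in a mask slot.
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Score each label by its verbalizer token's logit at the mask position.
    scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
              for label, word in verbalizers.items()}
    return max(scores, key=scores.get)

print(cloze_score("The movie was a delight.",
                  {"positive": "great", "negative": "terrible"}))
```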
no code implementations • 30 Jun 2023 • Aaron Mueller, Kanika Narang, Lambert Mathias, Qifan Wang, Hamed Firooz
Meta-training allows one to leverage smaller models for few-shot generalization in a domain-general and task-agnostic manner; however, these methods alone result in models that may lack the parameterization or knowledge needed to adapt quickly to a large variety of tasks.
1 code implementation • 1 Jun 2023 • Wang-Chiew Tan, Jane Dwivedi-Yu, Yuliang Li, Lambert Mathias, Marzieh Saeidi, Jing Nathan Yan, Alon Y. Halevy
We describe a set of experiments on TimelineQA with several state-of-the-art QA models.
no code implementations • 25 May 2022 • Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias
Evaluating an explanation's faithfulness is desirable for many reasons, such as trust, interpretability, and diagnosing the sources of a model's errors.
no code implementations • 25 May 2022 • Badr AlKhamissi, Faisal Ladhak, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva, Xian Li, Pascale Fung, Lambert Mathias, Asli Celikyilmaz, Mona Diab
Hate speech detection is complex; it relies on commonsense reasoning, knowledge of stereotypes, and an understanding of social nuance that differs from one culture to the next.
no code implementations • 24 May 2022 • Neema Kotonya, Andreas Vlachos, Majid Yazdani, Lambert Mathias, Marzieh Saeidi
In this work, we learn how to infer expression trees automatically from policy texts.
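A hypothetical sketch of what such an expression tree might look like and how it could be evaluated against a claim scenario; the node types and the example policy are invented for illustration:

```python
# Expression tree for a policy like "Coverage applies if the loss is
# accidental and occurs at home." (invented example).
from dataclasses import dataclass
from typing import Union

@dataclass
class Predicate:
    name: str            # leaf: a single condition mentioned in the policy

@dataclass
class Node:
    op: str              # "and" / "or" over child expressions
    children: list

Expr = Union[Predicate, Node]

def evaluate(expr: Expr, scenario: dict) -> bool:
    """Evaluate the policy expression tree against a claim scenario."""
    if isinstance(expr, Predicate):
        return scenario.get(expr.name, False)
    results = [evaluate(c, scenario) for c in expr.children]
    return all(results) if expr.op == "and" else any(results)

policy = Node("and", [Predicate("loss_is_accidental"),
                      Predicate("occurred_at_home")])
print(evaluate(policy, {"loss_is_accidental": True, "occurred_at_home": True}))
```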
1 code implementation • BigScience (ACL) 2022 • Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz
An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.
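A minimal sketch of one way to produce an extractive rationale, assuming a gradient-times-embedding saliency score and an off-the-shelf classifier; the paper's actual rationale extractors may differ:

```python
# Sketch: rank input tokens by saliency and keep the top-k as the rationale.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

def extractive_rationale(text, k=3):
    enc = tok(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc.input_ids)
    embeds.retain_grad()
    logits = model(inputs_embeds=embeds,
                   attention_mask=enc.attention_mask).logits
    logits[0, logits.argmax()].backward()   # gradient w.r.t. predicted class
    # Saliency: L2 norm of grad * embedding, per token.
    scores = (embeds.grad * embeds).norm(dim=-1)[0]
    top = scores.topk(k).indices
    # Special tokens are included here for simplicity.
    return [tok.convert_ids_to_tokens(enc.input_ids[0, i].item()) for i in top]

print(extractive_rationale("The plot is dull but the acting is superb."))
```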
1 code implementation • ACL 2022 • Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, Madian Khabsa
Recent parameter-efficient language model tuning (PELT) methods manage to match the performance of fine-tuning with far fewer trainable parameters and perform especially well when training data is limited.
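As one concrete instance of a PELT method, here is a sketch of a bottleneck adapter that freezes the pretrained weights and trains only a small residual MLP inserted into each layer; the sizes are illustrative, and the paper unifies several such methods rather than prescribing this one:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck: only ~2 * hidden * bottleneck new parameters
        # are trained; the backbone stays frozen.
        return h + self.up(torch.relu(self.down(h)))

h = torch.randn(2, 10, 768)      # (batch, seq_len, hidden)
print(Adapter()(h).shape)        # torch.Size([2, 10, 768])
```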
no code implementations • 9 Nov 2020 • Alireza Roshan-Ghias, Clint Solomon Mathialagan, Pragaash Ponnusamy, Lambert Mathias, Chenlei Guo
Spoken language understanding (SLU) systems in conversational AI agents often experience errors in the form of misrecognitions by automatic speech recognition (ASR) or semantic gaps in natural language understanding (NLU).
no code implementations • 13 Feb 2020 • Zheng Chen, Xing Fan, Yuan Ling, Lambert Mathias, Chenlei Guo
Inspired by the wide success of pre-trained contextual language embeddings, and as a way to compensate for insufficient QR training data, we propose a language-modeling (LM)-based approach to pre-train query embeddings on historical user conversation data with a voice assistant.
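A rough sketch of the general idea, with a toy architecture standing in for the paper's actual system: pre-train a small LM on historical queries and reuse its final hidden state as a query embedding:

```python
import torch
import torch.nn as nn

class QueryLM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                    # tokens: (batch, seq)
        states, _ = self.rnn(self.embed(tokens))
        # Next-token logits for pretraining; last state as query embedding.
        return self.head(states), states[:, -1]

model = QueryLM(vocab_size=1000)
tokens = torch.randint(0, 1000, (4, 12))          # toy batch of query ids
logits, query_emb = model(tokens)
# LM loss on historical queries; query_emb is reused by the QR model.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 1000), tokens[:, 1:].reshape(-1))
loss.backward()
```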
no code implementations • 26 Aug 2019 • Adrian de Wynter, Lambert Mathias
This network projects the slot into an attribute space derived from the KB; by leveraging similarities in this space, we propose candidate slot keys and values to the dialogue state tracker.
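An illustrative sketch of the similarity step, with a placeholder projection network and invented KB attribute embeddings:

```python
import torch
import torch.nn.functional as F

proj = torch.nn.Linear(300, 64)          # slot encoding -> attribute space
kb_attr = {"artist": torch.randn(64),    # attribute embeddings derived
           "song": torch.randn(64),      # from the knowledge base
           "album": torch.randn(64)}     # (random placeholders here)

slot_vec = proj(torch.randn(300))        # embedding of the unseen slot
scores = {key: F.cosine_similarity(slot_vec, emb, dim=0).item()
          for key, emb in kb_attr.items()}
# Propose the highest-similarity keys as candidates for the state tracker.
print(sorted(scores, key=scores.get, reverse=True))
```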
no code implementations • WS 2019 • Rylan Conway, Lambert Mathias
Much of the previous work has relied on modeling the natural order of the conversation, using distance-based offsets as an approximation of time.
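A toy illustration of the distinction, with invented turns and timestamps: a distance-based offset counts turns, whereas the temporal signal is the actual elapsed time between them.

```python
turns = [("play some jazz", 0.0),
         ("turn up the volume", 5.2),
         ("who sings this", 380.0)]      # (utterance, seconds elapsed)

current = len(turns) - 1
for i, (utt, ts) in enumerate(turns[:-1]):
    distance_offset = current - i        # what much prior work used
    elapsed = turns[current][1] - ts     # what a time-aware model sees
    print(f"{utt!r}: offset={distance_offset}, elapsed={elapsed:.0f}s")
```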
no code implementations • WS 2019 • Tongfei Chen, Chetan Naik, Hua He, Pushpendre Rastogi, Lambert Mathias
One such approach for tracking the dialogue state is slot carryover, where a model makes a binary decision if a slot from the context is relevant to the current turn.
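A minimal sketch of slot carryover framed as binary classification; the encoders and the feature set are stand-ins, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CarryoverScorer(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, slot_vec, context_vec, turn_vec):
        # Concatenate slot, dialogue-context, and current-turn encodings.
        feats = torch.cat([slot_vec, context_vec, turn_vec], dim=-1)
        return torch.sigmoid(self.mlp(feats))  # P(carry slot into this turn)

scorer = CarryoverScorer()
p = scorer(torch.randn(1, 128), torch.randn(1, 128), torch.randn(1, 128))
print(p.item() > 0.5)    # carry over iff probability exceeds a threshold
```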
1 code implementation • 28 Mar 2019 • Michael Regan, Pushpendre Rastogi, Arpit Gupta, Lambert Mathias
In this paper, we describe our methodology for creating the query reformulation extension to the dialog corpus, and present an initial set of experiments to establish a baseline for the CQR task.
no code implementations • NAACL 2019 • Pushpendre Rastogi, Arpit Gupta, Tongfei Chen, Lambert Mathias
We present a novel approach to dialogue state tracking and referring expression resolution tasks.
no code implementations • 27 Nov 2018 • Amr Sharaf, Arpit Gupta, Hancheng Ge, Chetan Naik, Lambert Mathias
In the cross-lingual setup, we assume there is access to annotated resources as well as a well-trained model in the source language and little to no annotated data in the target language.
no code implementations • 5 Jun 2018 • Chetan Naik, Arpit Gupta, Hancheng Ge, Lambert Mathias, Ruhi Sarikaya
In the slot-filling paradigm, where a user can refer back to slots in the context during a conversation, the goal of the contextual understanding system is to resolve the referring expressions to the appropriate slots in the context.
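A toy data example of the resolution task, with invented slot values:

```python
# The referring expression in the current turn must be resolved to a slot
# from the dialogue context.
context_slots = {"city": "San Francisco", "date": "tomorrow"}
current_turn = "what about the weather there next week"

# "there" refers back to the city slot; "next week" overrides the date slot.
resolved = {"city": context_slots["city"],   # carried over via "there"
            "date": "next week"}             # updated by the current turn
print(resolved)
```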
no code implementations • NAACL 2018 • Thomas Kollar, Danielle Berry, Lauren Stuart, Karolina Owczarzak, Tagyoung Chung, Lambert Mathias, Michael Kayser, Bradford Snow, Spyros Matsoukas
This paper introduces a meaning representation for spoken language understanding.
no code implementations • WS 2017 • Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer
The goal of semantic parsing is to map natural language to a machine interpretable meaning representation language (MRL).
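A toy example of the kind of (utterance, MRL) pair such a parser is trained on; the MRL syntax here is invented for illustration, not the paper's actual language:

```python
utterance = "play the latest album by the beatles"
mrl = 'PlayMusicIntent(sort="latest", media_type="album", artist="the beatles")'

# A seq2seq parser would be trained on pairs like this one and decode the
# MRL string token by token at inference time.
print(f"{utterance!r} -> {mrl}")
```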