Search Results for author: Lambert Mathias

Found 21 papers, 6 papers with code

Prompt-free and Efficient Few-shot Learning with Language Models

1 code implementation ACL 2022 Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Lambert Mathias, Marzieh Saeidi, Veselin Stoyanov, Majid Yazdani

Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze format that the PLM can score.
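For concreteness, the kind of hand-engineered prompt-and-verbalizer pipeline this paper seeks to remove looks roughly like the sketch below: a sentiment example is converted into a cloze prompt, and the masked LM scores each verbalizer token at the mask position. The model choice, prompt template, and verbalizer words are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: cloze-format scoring with a manual prompt and verbalizer,
# the engineering that prompt-free methods like PERFECT aim to avoid.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

text = "The movie was fantastic from start to finish."
prompt = f"{text} It was {tokenizer.mask_token}."             # hand-written template
verbalizer = {"positive": " great", "negative": " terrible"}  # hand-picked words

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each label by the logit of its verbalizer token at the mask.
scores = {
    label: logits[tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]].item()
    for label, word in verbalizer.items()
}
print(max(scores, key=scores.get))  # -> "positive"
```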

Few-Shot Learning

Meta-training with Demonstration Retrieval for Efficient Few-shot Learning

no code implementations 30 Jun 2023 Aaron Mueller, Kanika Narang, Lambert Mathias, Qifan Wang, Hamed Firooz

Meta-training allows one to leverage smaller models for few-shot generalization in a domain-general and task-agnostic manner; however, these methods alone result in models that may not have sufficient parameterization or knowledge to adapt quickly to a large variety of tasks.

Few-Shot Learning QNLI +3

Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI

no code implementations 25 May 2022 Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias

Evaluating an explanation's faithfulness is desirable for many reasons, such as trust, interpretability, and diagnosing the sources of a model's errors.

counterfactual

ToKen: Task Decomposition and Knowledge Infusion for Few-Shot Hate Speech Detection

no code implementations 25 May 2022 Badr AlKhamissi, Faisal Ladhak, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva, Xian Li, Pascale Fung, Lambert Mathias, Asli Celikyilmaz, Mona Diab

Hate speech detection is complex; it relies on commonsense reasoning, knowledge of stereotypes, and an understanding of social nuance that differs from one culture to the next.

Few-Shot Learning +1

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models

1 code implementation 3 Apr 2022 Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani

Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze format that the PLM can score.

Few-Shot Learning

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction

1 code implementation BigScience (ACL) 2022 Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz

An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.
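As a generic point of reference (not the UNIREX framework itself), one simple way to extract such a rationale is to rank input tokens by gradient-times-input saliency and keep the top-k. The sketch below assumes an off-the-shelf sentiment classifier.

```python
# Hedged baseline sketch: gradient-x-input saliency as an extractive
# rationale; UNIREX unifies and improves on this family of approaches.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "an utterly charming, quietly moving film"
enc = tok(text, return_tensors="pt")
embeds = model.get_input_embeddings()(enc.input_ids)
embeds.retain_grad()  # keep gradients on this non-leaf tensor

out = model(inputs_embeds=embeds, attention_mask=enc.attention_mask)
out.logits[0, out.logits.argmax()].backward()

# Saliency per token = |grad . embedding|; top-k tokens form the rationale.
saliency = (embeds.grad * embeds).sum(-1).abs()[0]
tokens = tok.convert_ids_to_tokens(enc.input_ids[0])
rationale = [tokens[i] for i in saliency.topk(3).indices.tolist()]
print(rationale)
```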

Language Modelling text-classification +1

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning

1 code implementation ACL 2022 Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, Madian Khabsa

Recent parameter-efficient language model tuning (PELT) methods manage to match the performance of fine-tuning with far fewer trainable parameters and perform especially well when training data is limited.
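For a sense of what a single PELT method looks like, here is a minimal LoRA-style linear layer in which only two low-rank factors are trained; UniPELT's contribution is combining several such methods (adapters, prefix-tuning, LoRA) behind learned gates, which this sketch does not reproduce.

```python
# Minimal sketch of one PELT building block (a LoRA-style linear layer).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus a trainable low-rank update.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # only the low-rank factors
```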

Language Modelling Model Selection

Personalized Query Rewriting in Conversational AI Agents

no code implementations 9 Nov 2020 Alireza Roshan-Ghias, Clint Solomon Mathialagan, Pragaash Ponnusamy, Lambert Mathias, Chenlei Guo

Spoken language understanding (SLU) systems in conversational AI agents often experience errors in the form of misrecognitions by automatic speech recognition (ASR) or semantic gaps in natural language understanding (NLU).

Automatic Speech Recognition (ASR) +4

Pre-Training for Query Rewriting in A Spoken Language Understanding System

no code implementations 13 Feb 2020 Zheng Chen, Xing Fan, Yuan Ling, Lambert Mathias, Chenlei Guo

Inspired by the wide success of pre-trained contextual language embeddings, and as a way to compensate for insufficient query rewriting (QR) training data, we propose a language-modeling (LM) based approach to pre-train query embeddings on historical user conversation data with a voice assistant.

Entity Resolution Friction +5

Leveraging External Knowledge for Out-Of-Vocabulary Entity Labeling

no code implementations 26 Aug 2019 Adrian de Wynter, Lambert Mathias

The proposed network projects the slot into an attribute space derived from the knowledge base (KB), and, by leveraging similarities in this space, we propose candidate slot keys and values to the dialogue state tracker.
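Read literally, the snippet suggests a nearest-neighbor lookup in a KB-derived attribute space; the toy sketch below illustrates that idea with made-up entries and vectors, and is not the paper's architecture.

```python
# Toy illustration: embed an out-of-vocabulary slot value into a
# KB-derived attribute space and propose the nearest KB entries as
# candidate slot keys/values. Entries and vectors are invented.
import numpy as np

kb = {
    ("artist", "the beatles"): np.array([0.9, 0.1, 0.0]),
    ("song", "yesterday"):     np.array([0.1, 0.8, 0.2]),
}

def propose_candidates(slot_vec, top_k=1):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(kb, key=lambda entry: cosine(slot_vec, kb[entry]), reverse=True)
    return ranked[:top_k]

oov_slot = np.array([0.85, 0.2, 0.05])  # projection of an unseen mention
print(propose_candidates(oov_slot))     # -> [('artist', 'the beatles')]
```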

Attribute Dialogue State Tracking +1

Time Masking: Leveraging Temporal Information in Spoken Dialogue Systems

no code implementations WS 2019 Rylan Conway, Lambert Mathias

Much of the previous work has relied on modeling the natural order of the conversation, using distance-based offsets as an approximation of time.

Spoken Dialogue Systems

Improving Long Distance Slot Carryover in Spoken Dialogue Systems

no code implementations WS 2019 Tongfei Chen, Chetan Naik, Hua He, Pushpendre Rastogi, Lambert Mathias

One such approach for tracking the dialogue state is slot carryover, where a model makes a binary decision as to whether a slot from the context is relevant to the current turn.
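To make the decision concrete, the toy sketch below frames carryover as a per-slot binary decision over the context; the lexical-overlap scorer and threshold are hypothetical stand-ins for the paper's learned model.

```python
# Toy sketch: slot carryover as a binary decision per context slot.
from dataclasses import dataclass

@dataclass
class Slot:
    key: str
    value: str
    turns_ago: int  # dialogue distance from the current turn

def carryover_score(slot: Slot, utterance: str) -> float:
    # Stand-in heuristic: lexical overlap decayed by dialogue distance.
    overlap = len(set(slot.value.lower().split())
                  & set(utterance.lower().split()))
    return overlap / (1 + slot.turns_ago)

context = [Slot("city", "seattle", turns_ago=2),
           Slot("cuisine", "thai food", turns_ago=1)]
utterance = "book a table for thai food tonight"

carried = [s for s in context if carryover_score(s, utterance) > 0.3]
print([(s.key, s.value) for s in carried])  # -> [('cuisine', 'thai food')]
```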

Spoken Dialogue Systems

A dataset for resolving referring expressions in spoken dialogue via contextual query rewrites (CQR)

1 code implementation 28 Mar 2019 Michael Regan, Pushpendre Rastogi, Arpit Gupta, Lambert Mathias

In this paper, we describe our methodology for creating the query reformulation extension to the dialog corpus, and present an initial set of experiments to establish a baseline for the CQR task.

Spoken Dialogue Systems Spoken Language Understanding

Cross-Lingual Approaches to Reference Resolution in Dialogue Systems

no code implementations 27 Nov 2018 Amr Sharaf, Arpit Gupta, Hancheng Ge, Chetan Naik, Lambert Mathias

In the cross-lingual setup, we assume there is access to annotated resources as well as a well-trained model in the source language, and little to no annotated data in the target language.

Cross-Lingual Transfer Data Augmentation +4

Contextual Slot Carryover for Disparate Schemas

no code implementations 5 Jun 2018 Chetan Naik, Arpit Gupta, Hancheng Ge, Lambert Mathias, Ruhi Sarikaya

In the slot-filling paradigm, where a user can refer back to slots in the context during a conversation, the goal of the contextual understanding system is to resolve the referring expressions to the appropriate slots in the context.

Slot Filling

Transfer Learning for Neural Semantic Parsing

no code implementations WS 2017 Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer

The goal of semantic parsing is to map natural language to a machine interpretable meaning representation language (MRL).
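To make the mapping concrete, here is a toy illustration of utterance-to-MRL pairs; the MRL syntax below is invented for illustration and is not the representation used in the paper.

```python
# Invented examples of mapping natural language to a toy MRL.
examples = [
    ("play the latest album by adele",
     "PlayMusic(artist='adele', sort='latest', media='album')"),
    ("what is the weather in boston tomorrow",
     "GetWeather(city='boston', date='tomorrow')"),
]
for utterance, mrl in examples:
    print(f"{utterance!r} -> {mrl}")
```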

Semantic Parsing Transfer Learning
