2 code implementations • 5 Apr 2024 • Fred Philippy, Shohreh Haddadan, Siwen Guo
A common method for Zero-Shot Classification (ZSC) is to fine-tune a language model on a Natural Language Inference (NLI) dataset and then use it to infer the entailment between the input document and the target labels.
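The NLI-based approach can be sketched as follows: each candidate label is rewritten as a hypothesis sentence, and the label whose hypothesis the NLI model rates as most entailed by the document is selected. This is a minimal illustration, not the paper's implementation; `toy_scorer` is a hypothetical stand-in for a real fine-tuned NLI model, and the hypothesis template is one common choice among many.

```python
def build_hypothesis(label: str, template: str = "This text is about {}.") -> str:
    """Turn a candidate label into an NLI hypothesis sentence."""
    return template.format(label)

def zero_shot_classify(document: str, labels: list, nli_entailment_score) -> str:
    """Return the label whose hypothesis is most entailed by the document."""
    scores = {
        label: nli_entailment_score(premise=document,
                                    hypothesis=build_hypothesis(label))
        for label in labels
    }
    return max(scores, key=scores.get)

# Toy scorer standing in for an NLI model: scores a hypothesis by how many
# of its words appear in the premise (a real system would use entailment
# probabilities from an NLI-fine-tuned PLM).
def toy_scorer(premise: str, hypothesis: str) -> float:
    premise_words = set(premise.lower().split())
    return sum(w.strip(".") in premise_words for w in hypothesis.lower().split())

print(zero_shot_classify(
    "The report discussed economics and monetary policy.",
    ["economics", "sports", "cooking"],
    toy_scorer,
))
```

Because the label set is supplied at inference time, no task-specific training data is needed, which is what makes the setup zero-shot.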
1 code implementation • 6 Feb 2024 • Fred Philippy, Siwen Guo, Shohreh Haddadan, Cedric Lothritz, Jacques Klein, Tegawendé F. Bissyandé
Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM, without modifying its parameters.
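The mechanics of SPT can be sketched in a few lines: a small matrix of learnable "soft prompt" embeddings is prepended to the frozen token embeddings before they enter the PLM, and only that matrix receives gradient updates. The sketch below uses NumPy with illustrative shapes and names (`frozen_embedding_table`, `soft_prompt`) that are assumptions, not tied to any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

embed_dim = 16    # PLM hidden size (illustrative)
prompt_len = 5    # number of soft prompt tokens
vocab_size = 100  # illustrative vocabulary size

# Frozen PLM embedding table: in SPT this is never updated.
frozen_embedding_table = rng.normal(size=(vocab_size, embed_dim))

# Learnable soft prompt: the ONLY trainable parameters in SPT.
soft_prompt = rng.normal(size=(prompt_len, embed_dim))

def embed_with_soft_prompt(token_ids: np.ndarray) -> np.ndarray:
    """Look up frozen token embeddings and prepend the soft prompt."""
    token_embeds = frozen_embedding_table[token_ids]          # (seq_len, dim)
    return np.concatenate([soft_prompt, token_embeds], axis=0)

token_ids = np.array([3, 17, 42])
inputs = embed_with_soft_prompt(token_ids)
print(inputs.shape)  # (8, 16): 5 prompt vectors + 3 token embeddings
```

Since only `prompt_len * embed_dim` parameters are trained, the method stays parameter-efficient regardless of the size of the underlying PLM.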
no code implementations • 26 May 2023 • Fred Philippy, Siwen Guo, Shohreh Haddadan
To enhance the structure of this review and to facilitate consolidation with future studies, we identify five categories of such factors.
1 code implementation • 3 May 2023 • Fred Philippy, Siwen Guo, Shohreh Haddadan
Prior research has investigated the impact of various linguistic features on cross-lingual transfer performance.
2 code implementations • ACL 2019 • Shohreh Haddadan, Elena Cabrio, Serena Villata
We address this task in an empirical manner by annotating 39 political debates from the last 50 years of US presidential campaigns, creating a new corpus of 29k argument components, labeled as premises and claims.