no code implementations • EACL 2023 • Pride Kavumba, Ana Brassard, Benjamin Heinzerling, Kentaro Inui
Explanation prompts ask language models not only to assign a particular label to a given input, such as true, entailment, or contradiction in the case of natural language inference, but also to generate a free-text explanation that supports this label.
Ranked #1 on Natural Language Inference on the ANLI test set
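The idea of an explanation prompt can be sketched as follows: the instance is formatted so that the model must produce both a label and a supporting free-text explanation. The template, function name, and label wording below are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical explanation-prompt template for natural language inference.
# The exact wording used in the paper may differ; this only shows the
# label-plus-explanation structure described above.

LABELS = ("entailment", "neutral", "contradiction")

def build_explanation_prompt(premise: str, hypothesis: str) -> str:
    """Format an NLI instance as a prompt asking for a label and an explanation."""
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        f"Question: Is the hypothesis entailed by, neutral to, or "
        f"contradicted by the premise?\n"
        f"Answer with one label ({', '.join(LABELS)}), then explain why.\n"
        f"Label:"
    )

prompt = build_explanation_prompt(
    "A man is playing a guitar on stage.",
    "A musician is performing.",
)
print(prompt)
```

The key difference from a plain classification prompt is the explicit request for a justification after the label, which the model generates as free text.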
no code implementations • ACL 2022 • Pride Kavumba, Ryo Takahashi, Yusuke Oda
However, models with a task-specific head require large amounts of training data, making them susceptible to learning and exploiting dataset-specific superficial cues that do not generalize to other datasets.
1 code implementation • LREC 2022 • Ana Brassard, Benjamin Heinzerling, Pride Kavumba, Kentaro Inui
We present Semi-Structured Explanations for COPA (COPA-SSE), a new crowdsourced dataset of 9,747 semi-structured, English commonsense explanations for Choice of Plausible Alternatives (COPA) questions.
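To make the notion of a semi-structured explanation concrete, here is a hypothetical COPA-style instance whose explanation is given both as free text and as relation triples. The field names, relation labels, and example content are invented for illustration and are not the actual COPA-SSE schema.

```python
# Invented example of a COPA question paired with a semi-structured
# explanation (free text plus relation triples). Not the real dataset format.
example = {
    "premise": "The man broke his toe.",
    "question": "cause",
    "choices": ["He got a hole in his sock.",
                "He dropped a hammer on his foot."],
    "label": 1,  # index of the correct alternative
    "explanation": {
        "text": "Dropping a heavy object on a foot can break a toe.",
        "triples": [
            ["hammer", "IsA", "heavy object"],
            ["dropping a heavy object on a foot", "Causes", "a broken toe"],
        ],
    },
}

def triples_to_text(triples):
    """Serialize triple-form explanations back into a readable string."""
    return "; ".join(f"{s} {r} {o}" for s, r, o in triples)

print(triples_to_text(example["explanation"]["triples"]))
```

The triple form is what makes such explanations machine-readable while the free-text field keeps them human-readable.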
no code implementations • NAACL 2021 • Pride Kavumba, Benjamin Heinzerling, Ana Brassard, Kentaro Inui
Here, we propose to explicitly learn a model that does well on both the easy test set, which contains superficial cues, and the hard test set, which does not.
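The easy/hard evaluation described above can be sketched as reporting accuracy separately on instances with and without superficial cues. The data, the `has_cue` flag, and the toy predictions below are assumptions for illustration, not the paper's actual splits.

```python
# Minimal sketch of split evaluation: accuracy on "easy" instances
# (containing superficial cues) vs. "hard" instances (without them).
def split_accuracy(examples):
    """Return (easy_acc, hard_acc) for (prediction, gold, has_cue) tuples."""
    easy = [(p, g) for p, g, has_cue in examples if has_cue]
    hard = [(p, g) for p, g, has_cue in examples if not has_cue]
    acc = lambda pairs: sum(p == g for p, g in pairs) / len(pairs)
    return acc(easy), acc(hard)

# Toy predictions from a cue-exploiting model: perfect when the cue is
# present, at chance when it is not.
examples = [
    (1, 1, True), (0, 0, True), (1, 1, True), (0, 0, True),
    (1, 0, False), (0, 1, False), (1, 1, False), (0, 0, False),
]
easy_acc, hard_acc = split_accuracy(examples)
print(easy_acc, hard_acc)  # a large easy/hard gap signals cue exploitation
```

A model that is right for the right reasons would close the gap between the two numbers rather than only scoring well on the easy subset.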
no code implementations • WS 2019 • Keshav Singh, Paul Reisert, Naoya Inoue, Pride Kavumba, Kentaro Inui
Recognizing the implicit link between a claim and a piece of evidence (i.e., the warrant) is the key to improving the performance of evidence detection.
no code implementations • WS 2019 • Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, Kentaro Inui
Pretrained language models, such as BERT and RoBERTa, have shown large improvements in the commonsense reasoning benchmark COPA.