Search Results for author: Pride Kavumba

Found 6 papers, 1 paper with code

Prompting for explanations improves Adversarial NLI. Is this true? {Yes} it is {true} because {it weakens superficial cues}

no code implementations • EACL 2023 • Pride Kavumba, Ana Brassard, Benjamin Heinzerling, Kentaro Inui

Explanation prompts ask language models to not only assign a particular label to a given input, such as true, entailment, or contradiction in the case of natural language inference, but also to generate a free-text explanation that supports this label.

Natural Language Inference
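A sketch of the prompt format described in the abstract above, where the model continues with both a label and a supporting free-text explanation (mirroring the paper's own title). The template strings below are illustrative, not the paper's exact prompts.

```python
# Hypothetical explanation prompt for NLI, in the style of the paper's
# title: "Is this true? {Yes} it is {true} because {...}".
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# Standard prompt: the model only fills in a label slot.
label_prompt = f"Premise: {premise}\nHypothesis: {hypothesis}\nLabel:"

# Explanation prompt: the model fills in the label AND a free-text
# explanation, which the paper reports weakens superficial cues.
explanation_prompt = (
    f"Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Is the hypothesis true? It is"  # model continues: " true because ..."
)

print(explanation_prompt)
```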

Are Prompt-based Models Clueless?

no code implementations • ACL 2022 • Pride Kavumba, Ryo Takahashi, Yusuke Oda

However, models with a task-specific head require a lot of training data, making them susceptible to learning and exploiting dataset-specific superficial cues that do not generalize to other datasets.

Language Modelling • Natural Language Understanding
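A minimal sketch of the contrast the abstract draws, using the Hugging Face transformers API; the model name, prompt, and verbalizer tokens below are illustrative assumptions, not the paper's setup.

```python
import torch
from transformers import (AutoModelForMaskedLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# Head-based setup: a randomly initialized classification head sits on top
# of the encoder, so its weights must be fit from (often large) task data.
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Prompt-based setup: the task is recast as cloze-style masked-token
# prediction, reusing the pretrained MLM head with no new parameters.
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
inputs = tok("The movie was great. It was [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = mlm(**inputs).logits
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
# Compare verbalizer tokens ("good" vs. "bad") at the masked position.
good, bad = tok.convert_tokens_to_ids(["good", "bad"])
print("predicts positive:", bool(logits[0, mask_pos, good] > logits[0, mask_pos, bad]))
```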

COPA-SSE: Semi-structured Explanations for Commonsense Reasoning

1 code implementation • LREC 2022 • Ana Brassard, Benjamin Heinzerling, Pride Kavumba, Kentaro Inui

We present Semi-Structured Explanations for COPA (COPA-SSE), a new crowdsourced dataset of 9,747 semi-structured, English common sense explanations for Choice of Plausible Alternatives (COPA) questions.

Common Sense Reasoning • Knowledge Graphs
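A hypothetical sketch of what one semi-structured entry could look like; the field names and relation labels below are illustrative guesses, not the dataset's actual schema.

```python
# Hypothetical shape of one COPA-SSE-style example (field names are
# illustrative): the reasoning is expressed as knowledge-graph-style
# (head, relation, tail) triples rather than a single free-text sentence.
example = {
    "question": "The man broke his toe. What was the cause?",
    "choices": ["He got a hole in his sock.",
                "He dropped a hammer on his foot."],
    "answer": 1,
    "explanation_triples": [
        ["hammer", "HasProperty", "heavy"],
        ["dropping a heavy object on a foot", "Causes", "injury"],
        ["a broken toe", "IsA", "injury"],
    ],
}
```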

Learning to Learn to be Right for the Right Reasons

no code implementations • NAACL 2021 • Pride Kavumba, Benjamin Heinzerling, Ana Brassard, Kentaro Inui

Here, we propose to explicitly learn a model that does well on both the easy test set with superficial cues and the hard test set without superficial cues.

Meta-Learning
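A minimal sketch of the stated goal, assuming a simple joint objective over a cue-bearing ("easy") split and a cue-free ("hard") split; this toy loop is an illustration of the objective, not the paper's meta-learning procedure.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                 # stand-in for a real NLI/QA model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy data: one split where superficial cues predict the label, one without.
easy_x, easy_y = torch.randn(8, 16), torch.randint(0, 2, (8,))
hard_x, hard_y = torch.randn(8, 16), torch.randint(0, 2, (8,))

for _ in range(100):
    opt.zero_grad()
    # Optimize both splits jointly, so the model cannot score well by
    # exploiting the superficial cues alone.
    loss = loss_fn(model(easy_x), easy_y) + loss_fn(model(hard_x), hard_y)
    loss.backward()
    opt.step()
```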

Improving Evidence Detection by Leveraging Warrants

no code implementations • WS 2019 • Keshav Singh, Paul Reisert, Naoya Inoue, Pride Kavumba, Kentaro Inui

Recognizing the implicit link between a claim and a piece of evidence (i.e., the warrant) is key to improving the performance of evidence detection.
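An illustrative example (invented for this sketch, not taken from the paper's data) of how a warrant makes the claim-evidence link explicit.

```python
# Illustrative claim/evidence pair plus the warrant that connects them.
example = {
    "claim": "School uniforms should be mandatory.",
    "evidence": "Schools with uniforms report fewer dress-code disputes.",
    "warrant": "Fewer disputes suggest uniforms reduce conflict, which "
               "supports making them mandatory.",
}
# One simple way to leverage the warrant: rank candidate evidence against
# the claim concatenated with its warrant, not against the claim alone.
query = example["claim"] + " " + example["warrant"]
```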
