Search Results for author: Joe Stacey

Found 2 papers, 2 papers with code

Supervising Model Attention with Human Explanations for Robust Natural Language Inference

1 code implementation • 16 Apr 2021 • Joe Stacey, Yonatan Belinkov, Marek Rei

Natural Language Inference (NLI) models are known to learn from biases and artefacts within their training data, impacting how well they generalise to other unseen datasets.

Natural Language Inference

Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training

1 code implementation • EMNLP 2020 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel

Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.

Natural Language Inference
