Search Results for author: Joe Stacey

Found 5 papers, 3 with code

Improving Robustness in Knowledge Distillation Using Domain-Targeted Data Augmentation

no code implementations · 22 May 2023 · Joe Stacey, Marek Rei

DMU is complementary to domain-targeted augmentation and substantially improves performance on SNLI-hard.

Data Augmentation · Knowledge Distillation · +2

Logical Reasoning for Natural Language Inference Using Generated Facts as Atoms

no code implementations · 22 May 2023 · Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei

We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both a DeBERTa-base and BERT baseline.

Logical Reasoning · Natural Language Inference · +1

Supervising Model Attention with Human Explanations for Robust Natural Language Inference

1 code implementation · 16 Apr 2021 · Joe Stacey, Yonatan Belinkov, Marek Rei

Natural Language Inference (NLI) models are known to learn from biases and artefacts within their training data, affecting how well they generalise to unseen datasets.

Natural Language Inference

Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training

1 code implementation · EMNLP 2020 · Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel

Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.

Natural Language Inference
