no code implementations • 22 May 2023 • Joe Stacey, Marek Rei
DMU is complementary to domain-targeted augmentation and substantially improves performance on SNLI-hard.
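The entry names DMU and domain-targeted augmentation without defining either. Purely as a loose, hypothetical sketch, the code below assumes DMU is a minority-upsampling step applied while distilling a teacher's predictions on unlabelled target-domain data; every name, dimension, and modelling choice here (teacher, student, FEAT_DIM, the KL objective) is a toy assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

NUM_CLASSES = 3          # entailment / neutral / contradiction
FEAT_DIM = 16            # stand-in for real sentence-pair encodings

# Toy stand-ins: a trained "teacher" and a fresh "student" classifier.
teacher = nn.Linear(FEAT_DIM, NUM_CLASSES)
student = nn.Linear(FEAT_DIM, NUM_CLASSES)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

# Unlabelled examples drawn from the target domain (random here).
target_domain_feats = torch.randn(64, FEAT_DIM)

# Domain-targeted augmentation: pseudo-label target-domain data
# with the teacher, then train the student on the soft labels.
with torch.no_grad():
    teacher_probs = F.softmax(teacher(target_domain_feats), dim=-1)

# Upsample examples whose teacher-predicted class is rare in the
# batch (an assumed stand-in for a minority-upsampling scheme).
pred_classes = teacher_probs.argmax(dim=-1)
class_counts = torch.bincount(pred_classes, minlength=NUM_CLASSES).float()
weights = (1.0 / class_counts.clamp(min=1.0))[pred_classes]
idx = torch.multinomial(weights, num_samples=64, replacement=True)

student_log_probs = F.log_softmax(student(target_domain_feats[idx]), dim=-1)
loss = F.kl_div(student_log_probs, teacher_probs[idx], reduction="batchmean")
loss.backward()
optimizer.step()
print(f"distillation loss on augmented batch: {loss.item():.4f}")
```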
no code implementations • 22 May 2023 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei
We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both DeBERTa-base and BERT baselines.
1 code implementation • 23 May 2022 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Marek Rei
We can further improve model performance and span-level decisions by using the e-SNLI explanations during training.
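As an illustration of training with human explanations, here is a minimal, assumed sketch: per-token relevance scores are supervised with binary e-SNLI-style highlight masks through an auxiliary loss, alongside the usual NLI cross-entropy. The mixing weight, toy tensors, and architecture are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

SEQ_LEN, HIDDEN, NUM_CLASSES = 12, 32, 3

# Toy stand-ins for a token encoder's contextual states.
token_states = torch.randn(4, SEQ_LEN, HIDDEN)        # (batch, seq, hidden)
nli_labels = torch.tensor([0, 1, 2, 1])               # gold NLI classes
# Binary e-SNLI-style masks: 1 where an annotator highlighted a token.
highlight_mask = torch.randint(0, 2, (4, SEQ_LEN)).float()

token_scorer = nn.Linear(HIDDEN, 1)   # per-token relevance score
classifier = nn.Linear(HIDDEN, NUM_CLASSES)
params = list(token_scorer.parameters()) + list(classifier.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# Per-token relevance, also used as attention over tokens.
token_logits = token_scorer(token_states).squeeze(-1)     # (batch, seq)
attn = F.softmax(token_logits, dim=-1)
pooled = torch.einsum("bs,bsh->bh", attn, token_states)

# Main NLI loss plus an auxiliary loss pushing the model's token
# relevance towards the human-highlighted tokens.
nli_loss = F.cross_entropy(classifier(pooled), nli_labels)
expl_loss = F.binary_cross_entropy_with_logits(token_logits, highlight_mask)
loss = nli_loss + 0.5 * expl_loss   # 0.5 is an arbitrary mixing weight

loss.backward()
optimizer.step()
print(f"nli: {nli_loss.item():.4f}  explanation: {expl_loss.item():.4f}")
```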
1 code implementation • 16 Apr 2021 • Joe Stacey, Yonatan Belinkov, Marek Rei
Natural Language Inference (NLI) models are known to learn from biases and artefacts within their training data, limiting how well they generalise to unseen datasets.
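A common way to quantify such artefacts, sketched below with an assumed bag-of-words probe (not this paper's method), is a hypothesis-only baseline: if a classifier that never sees the premise beats chance, the hypotheses alone leak label information.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Tiny illustrative dataset of (hypothesis, label) pairs: the premise
# is deliberately withheld. Real probes use SNLI/MNLI-scale data.
train_hyps = [
    "A man is sleeping.",            # contradiction
    "Nobody is outside.",            # contradiction
    "A person is outdoors.",         # entailment
    "Someone is moving.",            # entailment
    "The man is waiting for a bus.", # neutral
    "The woman is on her way home.", # neutral
]
train_labels = ["contradiction", "contradiction", "entailment",
                "entailment", "neutral", "neutral"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_hyps)
probe = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Accuracy well above chance (1/3) on held-out hypotheses would
# indicate label-correlated artefacts in the hypotheses themselves.
test_hyps = ["Nobody is moving.", "A person is outside."]
test_labels = ["contradiction", "entailment"]
preds = probe.predict(vectorizer.transform(test_hyps))
print("hypothesis-only accuracy:", accuracy_score(test_labels, preds))
```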
1 code implementation • EMNLP 2020 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel
Natural Language Inference (NLI) datasets contain annotation artefacts that result in spurious correlations between the natural language utterances and their respective entailment classes.
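One standard mitigation from the literature, shown purely as an illustrative sketch rather than as this paper's method, is adversarial training with gradient reversal: an auxiliary head tries to predict the label from the hypothesis encoding alone, and the reversed gradient pushes the encoder to discard that spurious signal. All modules and sizes here are toy assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the way back."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

FEAT_DIM, HIDDEN, NUM_CLASSES = 16, 32, 3
prem_encoder = nn.Linear(FEAT_DIM, HIDDEN)   # toy premise encoder
hyp_encoder = nn.Linear(FEAT_DIM, HIDDEN)    # toy hypothesis encoder
main_head = nn.Linear(2 * HIDDEN, NUM_CLASSES)
adv_head = nn.Linear(HIDDEN, NUM_CLASSES)    # adversary: hypothesis only

modules = [prem_encoder, hyp_encoder, main_head, adv_head]
optimizer = torch.optim.Adam(
    [p for m in modules for p in m.parameters()], lr=1e-3)

prem_feats = torch.randn(8, FEAT_DIM)        # stand-ins for encoded text
hyp_feats = torch.randn(8, FEAT_DIM)
labels = torch.randint(0, NUM_CLASSES, (8,))

p, h = prem_encoder(prem_feats), hyp_encoder(hyp_feats)
main_loss = F.cross_entropy(main_head(torch.cat([p, h], dim=-1)), labels)
# The adversary learns to predict labels from the hypothesis alone;
# gradient reversal makes the encoder unlearn that spurious signal.
adv_loss = F.cross_entropy(adv_head(GradReverse.apply(h)), labels)
(main_loss + adv_loss).backward()
optimizer.step()
print(f"main: {main_loss.item():.4f}  adversary: {adv_loss.item():.4f}")
```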