Search Results for author: Joe Stacey

Found 6 papers, 4 papers with code

LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues

1 code implementation · 1 Mar 2024 · Joe Stacey, Jianpeng Cheng, John Torr, Tristan Guigue, Joris Driesen, Alexandru Coca, Mark Gaynor, Anders Johannsen

Moreover, creating high quality dialogue data has until now required considerable human input, limiting both the scale of these datasets and the ability to rapidly bootstrap data for a new target domain.

Logical Reasoning for Natural Language Inference Using Generated Facts as Atoms

no code implementations · 22 May 2023 · Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei

We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both a DeBERTa-base and BERT baseline.

Tasks: Logical Reasoning, Natural Language Inference, +1

Improving Robustness in Knowledge Distillation Using Domain-Targeted Data Augmentation

no code implementations · 22 May 2023 · Joe Stacey, Marek Rei

DMU is complementary to the domain-targeted augmentation, and substantially improves performance on SNLI-hard.

Tasks: Data Augmentation, Knowledge Distillation, +2

Supervising Model Attention with Human Explanations for Robust Natural Language Inference

1 code implementation · 16 Apr 2021 · Joe Stacey, Yonatan Belinkov, Marek Rei

Natural Language Inference (NLI) models are known to learn from biases and artefacts within their training data, impacting how well they generalise to other unseen datasets.

Tasks: Natural Language Inference

Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training

1 code implementation · EMNLP 2020 · Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel

Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.

Tasks: Natural Language Inference, Sentence
