1 code implementation • 1 Mar 2024 • Joe Stacey, Jianpeng Cheng, John Torr, Tristan Guigue, Joris Driesen, Alexandru Coca, Mark Gaynor, Anders Johannsen
Spurred by recent advances in Large Language Models (LLMs), virtual assistants are poised to take a leap forward in their dialogue capabilities.
no code implementations • 22 May 2023 • Joe Stacey, Marek Rei
Knowledge distillation optimises a smaller student model to behave similarly to a larger teacher model, retaining some of the teacher's performance benefits.
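For context on what such an objective typically looks like, here is a minimal sketch of the standard soft-label distillation loss (a temperature-scaled KL term between teacher and student distributions, blended with the usual cross-entropy); the temperature and weighting values are illustrative assumptions, not taken from this paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic soft-label distillation objective (illustrative sketch,
    not this paper's specific method)."""
    # Softened distributions; a higher temperature spreads probability mass.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, rescaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Ordinary supervised loss on the gold labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```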
1 code implementation • 22 May 2023 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei
With recent advances, neural models can achieve human-level performance on various natural language tasks.
1 code implementation • 23 May 2022 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Marek Rei
Model performance and span-level decisions can be further improved by using the e-SNLI explanations during training.
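One common way to turn e-SNLI explanations into a training signal is an auxiliary token-level loss that pushes per-token importance scores towards the annotator highlight masks; the sketch below shows that generic recipe and is not necessarily the paper's exact formulation (the function and weighting are hypothetical).

```python
import torch
import torch.nn.functional as F

def explanation_supervision_loss(token_scores, highlight_mask, attention_mask):
    """Auxiliary loss encouraging per-token importance scores to match
    e-SNLI highlights (1 = word marked by the annotator). Illustrative
    recipe only; not necessarily this paper's objective."""
    # Restrict the loss to real tokens, excluding padding positions.
    active = attention_mask.bool()
    return F.binary_cross_entropy_with_logits(
        token_scores[active], highlight_mask[active].float()
    )

# Hypothetical usage: blend with the main NLI objective,
# with an illustrative weighting coefficient.
# loss = nli_loss + 0.1 * explanation_supervision_loss(scores, highlights, mask)
```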
1 code implementation • 16 Apr 2021 • Joe Stacey, Yonatan Belinkov, Marek Rei
Natural Language Inference (NLI) models are known to learn from biases and artefacts within their training data, impacting how well they generalise to unseen datasets.
1 code implementation • EMNLP 2020 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel
Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.
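To make "annotation artefacts" concrete, a standard diagnostic is a hypothesis-only classifier: if a model that never sees the premise still beats the majority-class baseline, the entailment labels are partly predictable from spurious cues in the hypothesis alone. A minimal sketch, assuming bag-of-words features and NLI training data supplied as plain lists:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def hypothesis_only_baseline(hypotheses, labels):
    """Train on hypotheses alone, ignoring premises. Accuracy well above
    chance on held-out data indicates annotation artefacts. `hypotheses`
    and `labels` are assumed lists from an NLI training split."""
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram cues
        LogisticRegression(max_iter=1000),
    )
    model.fit(hypotheses, labels)
    return model  # evaluate on a held-out split to measure the artefact
```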