no code implementations • 7 Nov 2022 • Luca Pinchetti, Tommaso Salvatori, Yordan Yordanov, Beren Millidge, Yuhang Song, Thomas Lukasiewicz
A large body of recent research pursues the far-reaching goal of finding training methods for deep neural networks that can serve as alternatives to backpropagation (BP).
1 code implementation • 8 Oct 2022 • Lei Sha, Yuhang Song, Yordan Yordanov, Tommaso Salvatori, Thomas Lukasiewicz
Transformers have become an indispensable component of text generation models since their great success in machine translation.
1 code implementation • 12 Dec 2021 • Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu
A potential solution is the few-shot out-of-domain transfer of natural language explanations (NLEs) from a parent task with many NLEs to a child task.
1 code implementation • EMNLP 2020 • Yordan Yordanov, Oana-Maria Camburu, Vid Kocijan, Thomas Lukasiewicz
Overall, four categories of training and evaluation objectives have been introduced.
1 code implementation • IJCNLP 2019 • Vid Kocijan, Oana-Maria Camburu, Ana-Maria Cretu, Yordan Yordanov, Phil Blunsom, Thomas Lukasiewicz
We use a language-model-based approach for pronoun resolution in combination with our WikiCREM dataset.
no code implementations • ACL 2019 • Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.
2 code implementations • 15 May 2019 • Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.