1 code implementation • NAACL 2022 • Prasetya Ajie Utama, Joshua Bambrick, Nafise Sadat Moosavi, Iryna Gurevych
In this work, we show that NLI models can be effective for detecting factual inconsistencies in generated summaries when their training data is augmented with high-quality, task-oriented examples (see the sketch below).
Abstractive Text Summarization • Natural Language Inference
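The sketch below illustrates the general idea the entry above refers to: an NLI model is used as a factual-consistency checker by treating the source document as the premise and a summary sentence as the hypothesis, with the entailment probability serving as a consistency score. This is a minimal, hedged illustration; the checkpoint name, label ordering, and example texts are assumptions, not the paper's own augmented model or data.

```python
# Minimal sketch: scoring a summary sentence against its source document with an
# off-the-shelf MNLI checkpoint. All names and texts below are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumption: any MNLI-finetuned checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_score(source: str, summary_sentence: str) -> float:
    """Probability that the source document entails the summary sentence."""
    inputs = tokenizer(source, summary_sentence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()

source = "The company reported a quarterly loss of $2 million."
print(entailment_score(source, "The company lost money last quarter."))   # high score expected
print(entailment_score(source, "The company reported record profits."))   # low score expected
```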
1 code implementation • EMNLP 2021 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, Iryna Gurevych
Recent prompt-based approaches allow pretrained language models to achieve strong performance in few-shot finetuning by reformulating downstream tasks as a language modeling problem.
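To make that reformulation concrete, here is a minimal, hypothetical sketch of prompt-based classification with a masked language model: the input is wrapped in a cloze pattern, class labels are mapped to single verbalizer tokens, and prediction reduces to comparing the model's scores at the mask position. The pattern, verbalizer, and checkpoint are illustrative assumptions; few-shot finetuning would train on this same cloze objective rather than use it zero-shot.

```python
# Sketch: recasting sentiment classification as a cloze-style language-modeling problem.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumption: any masked-LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Verbalizer: one token per class (leading space matters for RoBERTa's BPE).
verbalizer = {"positive": " great", "negative": " terrible"}
label_ids = {label: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]
             for label, word in verbalizer.items()}

def classify(review: str) -> str:
    # Wrap the input in a cloze pattern and score the verbalizer tokens at the mask.
    prompt = f"{review} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    return max(label_ids, key=lambda label: logits[label_ids[label]].item())

print(classify("A gripping film with outstanding performances."))
```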
no code implementations • 23 Oct 2020 • Nafise Sadat Moosavi, Marcel de Boer, Prasetya Ajie Utama, Iryna Gurevych
Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective so that models learn less from biased examples.
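One common instance of such a modified training objective is a product-of-experts (PoE) loss, in which the main model's predictions are combined with those of a frozen bias-only model so that examples the bias model already solves contribute less to the gradient. The sketch below is a generic PoE formulation of the "existing approaches" the entry mentions, not this paper's own method (which instead augments training sentences); the class count and toy bias probabilities are assumptions.

```python
# Generic product-of-experts (PoE) debiasing loss, assuming a 3-class NLI task and a
# frozen bias-only model that supplies per-class log-probabilities.
import torch
import torch.nn.functional as F

def poe_loss(main_logits, bias_log_probs, labels, reduction="mean"):
    """main_logits, bias_log_probs: [batch, classes]; labels: [batch]."""
    combined = F.log_softmax(main_logits, dim=-1) + bias_log_probs
    # cross_entropy re-normalizes, i.e. it trains softmax(log p_main + log p_bias).
    return F.cross_entropy(combined, labels, reduction=reduction)

# Toy check: an uninformative main model, one bias-aligned and one neutral example.
main_logits = torch.zeros(2, 3, requires_grad=True)
bias_log_probs = torch.log(torch.tensor([[0.90, 0.05, 0.05],    # bias model is confident
                                         [1/3., 1/3., 1/3.]]))  # bias model is uncertain
labels = torch.tensor([0, 0])
print(poe_loss(main_logits, bias_log_probs, labels, reduction="none"))
# -> roughly [0.105, 1.099]: the bias-aligned example contributes far less to training.
```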
1 code implementation • EMNLP 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych
Recently proposed debiasing methods have been shown to be effective in mitigating the tendency of NLU models to exploit dataset biases.
1 code implementation • ACL 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych
Models for natural language understanding (NLU) tasks often rely on the idiosyncratic biases of the dataset, which make them brittle against test cases outside the training distribution.
no code implementations • 19 Sep 2019 • Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych
Finally, we show that using the coverage information is beneficial beyond improving performance across different datasets of the same task.
no code implementations • ACL 2019 • Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, Iryna Gurevych
While recent progress on abstractive summarization has led to remarkably fluent summaries, factual errors in generated summaries still severely limit their use in practice.