1 code implementation • COLING (CRAC) 2022 • Haixia Chai, Nafise Sadat Moosavi, Iryna Gurevych, Michael Strube
The results of our extrinsic evaluation show that while there is a significant difference between the performance of the rule-based system and the state-of-the-art neural model on coreference resolution datasets, we do not observe a considerable difference in their impact on downstream models.
no code implementations • LREC 2022 • Juntao Yu, Sopan Khosla, Nafise Sadat Moosavi, Silviu Paun, Sameer Pradhan, Massimo Poesio
It also supports the evaluation of split-antecedent anaphora and discourse deixis, for which no tools previously existed.
no code implementations • 27 May 2023 • Jasivan Alex Sivakumar, Nafise Sadat Moosavi
Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables the systematic and automated generation of arbitrarily large training or evaluation sets for each aspect. The datasets and code are publicly available for generating further multi-view data for other tasks and languages.
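To make the generation idea concrete, here is a minimal sketch of how an evaluation set for a single numerical-reasoning aspect (two-operand addition) could be produced automatically from a template; the template, number ranges, and function names are illustrative assumptions, not taken from FERMAT itself.

```python
# Illustrative sketch only: template-based generation of addition questions.
import random

TEMPLATE = ("A shop sold {a} items on Monday and {b} items on Tuesday. "
            "How many items were sold in total?")

def generate_addition_examples(n, low=10, high=999, seed=0):
    """Generate n question/answer pairs for a single 'addition' aspect."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        a, b = rng.randint(low, high), rng.randint(low, high)
        examples.append({"question": TEMPLATE.format(a=a, b=b), "answer": a + b})
    return examples

for ex in generate_addition_examples(3):
    print(ex)
```

Scaling the number of instances and varying the templates and number ranges yields an arbitrarily large set per aspect, which is the property the snippet highlights.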
1 code implementation • 25 Apr 2023 • Jan-Christoph Klie, Ji-Ung Lee, Kevin Stowe, Gözde Gül Şahin, Nafise Sadat Moosavi, Luke Bates, Dominic Petrak, Richard Eckart de Castilho, Iryna Gurevych
Citizen Science is an alternative to crowdsourcing that is relatively unexplored in the context of NLP.
1 code implementation • COLING 2022 • Doan Nam Long Vu, Nafise Sadat Moosavi, Steffen Eger
The evaluation of recent embedding-based evaluation metrics for text generation is primarily based on measuring their correlation with human evaluations on standard benchmarks.
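For context, the meta-evaluation protocol referred to here boils down to computing correlation coefficients between automatic metric scores and human ratings; the sketch below uses made-up scores purely for illustration.

```python
# Illustrative sketch: correlating automatic metric scores with human ratings.
from scipy.stats import pearsonr, spearmanr, kendalltau

metric_scores = [0.71, 0.45, 0.88, 0.62, 0.30]  # e.g. an embedding-based metric
human_scores = [4.0, 2.5, 4.5, 3.0, 2.0]        # e.g. human adequacy judgments

print("Pearson :", pearsonr(metric_scores, human_scores)[0])
print("Spearman:", spearmanr(metric_scores, human_scores)[0])
print("Kendall :", kendalltau(metric_scores, human_scores)[0])
```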
2 code implementations • 30 Aug 2022 • Haishuo Fang, Ji-Ung Lee, Nafise Sadat Moosavi, Iryna Gurevych
In contrast to conventional, predefined activation functions, RAFs can adaptively learn an optimal activation function during training, based on the input data.
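As a rough illustration of the idea, the following is a minimal PyTorch sketch of a rational activation function, i.e. a ratio of polynomials with learnable coefficients; the degrees, initialization, and denominator safeguard are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Illustrative sketch: a rational activation f(x) = P(x) / Q(x)
# whose polynomial coefficients are ordinary learnable parameters.
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    def __init__(self, num_degree: int = 5, den_degree: int = 4):
        super().__init__()
        # Numerator coefficients a_0 ... a_m and denominator coefficients b_1 ... b_n
        self.numerator = nn.Parameter(torch.randn(num_degree + 1) * 0.1)
        self.denominator = nn.Parameter(torch.randn(den_degree) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # P(x) = a_0 + a_1 x + ... + a_m x^m
        powers_p = torch.stack([x ** i for i in range(len(self.numerator))], dim=-1)
        p = (powers_p * self.numerator).sum(dim=-1)
        # Q(x) = 1 + |b_1 x + ... + b_n x^n| keeps the denominator away from zero.
        powers_q = torch.stack([x ** (j + 1) for j in range(len(self.denominator))], dim=-1)
        q = 1.0 + (powers_q * self.denominator).sum(dim=-1).abs()
        return p / q

# Because the coefficients are parameters, the optimizer reshapes the
# activation together with the rest of the network.
act = RationalActivation()
y = act(torch.randn(8, 16))
```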
1 code implementation • 24 May 2022 • Silviu Paun, Juntao Yu, Nafise Sadat Moosavi, Massimo Poesio
Anaphoric reference is an aspect of language interpretation that covers a variety of interpretation types beyond the simple case of identity reference to entities introduced via nominal expressions, which is what the traditional coreference task targets in its most recent incarnation in ONTONOTES and similar datasets.
no code implementations • 13 May 2022 • Dominic Petrak, Nafise Sadat Moosavi, Iryna Gurevych
We evaluate our approach on three different tasks that require numerical reasoning: (a) reading comprehension on the DROP dataset, (b) inference on tables on the InfoTabs dataset, and (c) table-to-text generation on the WikiBio and SciGen datasets.
1 code implementation • NAACL 2022 • Prasetya Ajie Utama, Joshua Bambrick, Nafise Sadat Moosavi, Iryna Gurevych
In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples.
Tasks: Abstractive Text Summarization, Natural Language Inference
1 code implementation • NAACL 2022 • Nafise Sadat Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych
The resulting adapters (a) contain about 50% of the trainable parameters of a standard adapter, making them more efficient at training and inference and reducing storage requirements, and (b) achieve considerably higher performance in low-data settings.
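For readers unfamiliar with adapters, below is a minimal sketch of a standard bottleneck adapter layer: a small down-/up-projection inserted into each transformer layer and trained while the pretrained weights stay frozen. The dimensions and activation choice are illustrative; the paper's adapters differ in that they additionally learn which layers and which (rational) activation functions to use, which is how they get by with roughly half the parameters.

```python
# Illustrative sketch: a standard bottleneck adapter layer.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.activation = nn.ReLU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen pretrained representation intact.
        return x + self.up(self.activation(self.down(x)))
```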
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
1 code implementation • EMNLP 2021 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, Iryna Gurevych
Recent prompt-based approaches allow pretrained language models to achieve strong performance on few-shot fine-tuning by reformulating downstream tasks as a language modeling problem.
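As a minimal sketch of this reformulation (with an illustrative template and label words, not the ones used in the paper), a sentiment example can be cast as a cloze question that a masked language model fills in directly:

```python
# Illustrative sketch: casting classification as masked-token prediction.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

review = "The plot was predictable and the acting was flat."
prompt = f"{review} All in all, it was <mask>."

# Restrict predictions to the task's label words ("verbalizers").
for pred in fill_mask(prompt, targets=[" great", " terrible"]):
    print(pred["token_str"], pred["score"])
```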
1 code implementation • 16 Apr 2021 • Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych
In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions.
1 code implementation • NAACL 2021 • Juntao Yu, Nafise Sadat Moosavi, Silviu Paun, Massimo Poesio
Split-antecedent anaphora is rarer and more complex to resolve than single-antecedent anaphora; as a result, it is not annotated in many datasets designed to test coreference, and previous work on resolving this type of anaphora was carried out in unrealistic conditions that assume gold mentions and/or gold split-antecedent anaphors are available.
1 code implementation • ACL 2021 • Mingzhu Wu, Nafise Sadat Moosavi, Dan Roth, Iryna Gurevych
We propose a methodology for creating MRC datasets that better reflect the challenges of coreference reasoning and use it to create a sample evaluation set.
1 code implementation • COLING 2020 • Juntao Yu, Nafise Sadat Moosavi, Silviu Paun, Massimo Poesio
One limitation of virtually all coreference resolution models is the focus on single-antecedent anaphors.
no code implementations • 23 Oct 2020 • Nafise Sadat Moosavi, Marcel de Boer, Prasetya Ajie Utama, Iryna Gurevych
Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective so that models learn less from biased examples.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Mingzhu Wu, Nafise Sadat Moosavi, Andreas Rücklé, Iryna Gurevych
Our framework weights each example based on the biases it contains and the strength of those biases in the training data.
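A minimal sketch of the general idea of per-example weighting follows, under the simplifying assumption that bias strength is measured by the confidence of a bias-only model on the gold label; this illustrates the concept rather than the paper's exact weighting scheme.

```python
# Illustrative sketch: down-weighting examples that a bias-only model
# already predicts confidently.
import torch
import torch.nn.functional as F

def weighted_loss(logits, labels, bias_probs):
    """
    logits:     [batch, num_classes] predictions of the main model
    labels:     [batch] gold labels
    bias_probs: [batch] probability a bias-only model assigns to the gold label
    """
    per_example = F.cross_entropy(logits, labels, reduction="none")
    weights = 1.0 - bias_probs  # strongly biased examples contribute less
    return (weights * per_example).mean()
```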
1 code implementation • EMNLP 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych
Recently proposed debiasing methods are shown to be effective in mitigating this tendency.
1 code implementation • ACL 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych
Models for natural language understanding (NLU) tasks often rely on the idiosyncratic biases of the dataset, which make them brittle against test cases outside the training distribution.
1 code implementation • IJCNLP 2019 • Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych
We show that our proposed approaches are more effective in many cases because they can utilize larger amounts of unlabeled data from cQA forums.
no code implementations • 19 Sep 2019 • Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych
Finally, we show that using the coverage information is not only beneficial for improving the performance across different datasets of the same task.
1 code implementation • ACL 2019 • Nafise Sadat Moosavi, Leo Born, Massimo Poesio, Michael Strube
To address this problem, minimum spans are manually annotated in smaller corpora.
1 code implementation • EMNLP 2018 • Nafise Sadat Moosavi, Michael Strube
We show that generalization improves only slightly by merely using a set of additional linguistic features.
no code implementations • EMNLP 2017 • Benjamin Heinzerling, Nafise Sadat Moosavi, Michael Strube
Selectional preferences have long been claimed to be essential for coreference resolution.
no code implementations • ACL 2017 • Nafise Sadat Moosavi, Michael Strube
Lexical features are a major source of information in state-of-the-art coreference resolvers.
no code implementations • WS 2017 • Nafise Sadat Moosavi, Michael Strube
Only a year ago, all state-of-the-art coreference resolvers were using an extensive set of surface features.