Search Results for author: Nafise Sadat Moosavi

Found 30 papers, 19 papers with code

Evaluating Coreference Resolvers on Community-based Question Answering: From Rule-based to State of the Art

1 code implementation • COLING (CRAC) 2022 • Haixia Chai, Nafise Sadat Moosavi, Iryna Gurevych, Michael Strube

The results of our extrinsic evaluation show that, while there is a significant difference between the performance of the rule-based system and the state-of-the-art neural model on coreference resolution datasets, we do not observe a considerable difference in their impact on downstream models.

Tasks: Answer Selection, Coreference Resolution (+2 more)

The Universal Anaphora Scorer

no code implementations • LREC 2022 • Juntao Yu, Sopan Khosla, Nafise Sadat Moosavi, Silviu Paun, Sameer Pradhan, Massimo Poesio

It also supports the evaluation of split-antecedent anaphora and discourse deixis, for which no tools previously existed.
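
To make concrete what such a scorer computes, here is a minimal sketch of the B³ metric, one of the standard measures coreference scorers implement. This is an illustration only, not the Universal Anaphora scorer's code, and it simplifies by scoring only mentions present in both the gold and the predicted clustering.

```python
# Minimal sketch of the B^3 coreference metric. Illustration only, not
# the Universal Anaphora scorer's code; it simplifies by scoring only
# mentions that appear on both the gold and the predicted side.

def b_cubed(gold_clusters, pred_clusters):
    """Each argument is a list of sets of mention ids."""
    gold_of = {m: c for c in gold_clusters for m in c}
    pred_of = {m: c for c in pred_clusters for m in c}
    mentions = set(gold_of) & set(pred_of)

    precision = sum(
        len(gold_of[m] & pred_of[m]) / len(pred_of[m]) for m in mentions
    ) / len(mentions)
    recall = sum(
        len(gold_of[m] & pred_of[m]) / len(gold_of[m]) for m in mentions
    ) / len(mentions)
    return precision, recall, 2 * precision * recall / (precision + recall)

gold = [{"m1", "m2"}, {"m3"}]
pred = [{"m1", "m2", "m3"}]
print(b_cubed(gold, pred))  # ~(0.556, 1.0, 0.714)
```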

FERMAT: An Alternative to Accuracy for Numerical Reasoning

no code implementations • 27 May 2023 • Jasivan Alex Sivakumar, Nafise Sadat Moosavi

Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables the systematic and automated generation of arbitrarily large training or evaluation sets for each aspect. The dataset and code are publicly available for generating further multi-view data for other tasks and languages.
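
As a rough illustration of what automated, aspect-wise instance generation can look like, the sketch below produces addition questions whose operands are sampled per numerical aspect. The template and aspect transforms are hypothetical, not FERMAT's actual ones.

```python
# Illustrative sketch of template-based generation of numerical-reasoning
# instances, one generator per aspect. Templates and aspects here are
# made-up examples, not the paper's actual ones.
import random

TEMPLATE = "Tom had {a} apples and bought {b} more. How many does he have?"

ASPECTS = {
    "small_integers": lambda: (random.randint(1, 9), random.randint(1, 9)),
    "large_integers": lambda: (random.randint(1000, 9999), random.randint(1000, 9999)),
    "decimals": lambda: (round(random.uniform(0, 10), 2), round(random.uniform(0, 10), 2)),
}

def generate(aspect, n):
    """Generate n (question, answer) pairs for one numerical aspect."""
    sample = ASPECTS[aspect]
    pairs = []
    for _ in range(n):
        a, b = sample()
        pairs.append((TEMPLATE.format(a=a, b=b), a + b))
    return pairs

for question, answer in generate("decimals", 2):
    print(question, "->", answer)
```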

Layer or Representation Space: What makes BERT-based Evaluation Metrics Robust?

1 code implementation • COLING 2022 • Doan Nam Long Vu, Nafise Sadat Moosavi, Steffen Eger

The evaluation of recent embedding-based evaluation metrics for text generation is primarily based on measuring their correlation with human evaluations on standard benchmarks.

Tasks: Text Generation, Word Embeddings
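
The meta-evaluation the excerpt describes boils down to correlating a metric's scores with human ratings over the same segments. A minimal sketch, with made-up scores and assuming scipy is available:

```python
# Minimal sketch of how embedding-based metrics are meta-evaluated:
# correlate the metric's per-segment scores with human ratings.
# The numbers below are placeholders, not real data.
from scipy.stats import pearsonr

metric_scores = [0.81, 0.42, 0.65, 0.90, 0.30]  # e.g. per-segment metric output
human_scores = [4.5, 2.0, 3.5, 4.8, 1.5]        # human adequacy ratings

r, p_value = pearsonr(metric_scores, human_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```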

Transformers with Learnable Activation Functions

2 code implementations • 30 Aug 2022 • Haishuo Fang, Ji-Ung Lee, Nafise Sadat Moosavi, Iryna Gurevych

In contrast to conventional, predefined activation functions, rational activation functions (RAFs) can adaptively learn an optimal activation function during training from the input data.
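
A rough sketch of what a learnable rational activation can look like in PyTorch, assuming the common "safe" parameterization P(x) / (1 + |Q(x)|) so the denominator cannot vanish; the polynomial degrees and initialization here are illustrative, not the paper's exact configuration.

```python
# Sketch of a rational activation function with learnable numerator and
# denominator coefficients. Degrees and init are illustrative choices.
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    def __init__(self, num_degree=5, den_degree=4):
        super().__init__()
        self.p = nn.Parameter(torch.randn(num_degree + 1) * 0.1)  # numerator coeffs
        self.q = nn.Parameter(torch.randn(den_degree) * 0.1)      # denominator coeffs

    def forward(self, x):
        num = sum(c * x**i for i, c in enumerate(self.p))
        # "Safe" denominator: 1 + |Q(x)| keeps the function well-defined.
        den = 1 + torch.abs(sum(c * x**(j + 1) for j, c in enumerate(self.q)))
        return num / den

act = RationalActivation()
print(act(torch.linspace(-2, 2, 5)))
```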

Scoring Coreference Chains with Split-Antecedent Anaphors

1 code implementation • 24 May 2022 • Silviu Paun, Juntao Yu, Nafise Sadat Moosavi, Massimo Poesio

Anaphoric reference is an aspect of language interpretation covering a variety of types of interpretation beyond the simple case of identity reference to entities introduced via nominal expressions, which is what the traditional coreference task covers in its most recent incarnation in OntoNotes and similar datasets.

Improving the Numerical Reasoning Skills of Pretrained Language Models

no code implementations • 13 May 2022 • Dominic Petrak, Nafise Sadat Moosavi, Iryna Gurevych

We evaluate our approach on three different tasks that require numerical reasoning: (a) reading comprehension in the DROP dataset, (b) inference over tables in the InfoTabs dataset, and (c) table-to-text generation in the WikiBio and SciGen datasets.

Tasks: Contrastive Learning, Reading Comprehension (+1 more)

Adaptable Adapters

1 code implementation • NAACL 2022 • Nafise Sadat Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych

The resulting adapters (a) contain about 50% of the learnable parameters of a standard adapter, and are therefore more efficient at training and inference and require less storage space, and (b) achieve considerably higher performance in low-data settings.
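
For context, below is a minimal sketch of the standard bottleneck adapter that the parameter comparison refers to: a down-projection, a nonlinearity, an up-projection, and a residual connection. The adaptable variant in the paper additionally learns which layers get an adapter and which activation to use, which is roughly where the savings come from. The sizes here are illustrative.

```python
# Sketch of a standard bottleneck adapter layer (the baseline the ~50%
# parameter comparison is against). Hidden and bottleneck sizes are
# illustrative defaults, not the paper's exact configuration.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual connection

adapter = Adapter()
n_params = sum(p.numel() for p in adapter.parameters())
print(f"{n_params:,} trainable parameters per adapter layer")
```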

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.

Tasks: Data Augmentation
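
To give a flavor of the kind of transformation such a framework collects, here is a toy word-swap perturbation. NL-Augmenter's real Python interface is richer than this; the snippet only illustrates the general idea of a task-agnostic text perturbation.

```python
# Toy sentence perturbation in the spirit of NL-Augmenter's
# transformations (illustration only; not the framework's actual API).
import random

def swap_adjacent_words(sentence, rng):
    """Swap one random pair of adjacent words, a common noise perturbation."""
    words = sentence.split()
    if len(words) < 2:
        return sentence
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

rng = random.Random(0)
print(swap_adjacent_words("the quick brown fox jumps over the lazy dog", rng))
```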

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning

1 code implementation • EMNLP 2021 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, Iryna Gurevych

Recent prompt-based approaches allow pretrained language models to achieve strong performance in few-shot finetuning by reformulating downstream tasks as a language modeling problem.

Tasks: Language Modelling
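
A minimal sketch of the reformulation the excerpt describes, casting an NLI example as a cloze query to a masked language model. The template and label words are illustrative choices, not the paper's; the snippet assumes the transformers library and a downloadable roberta-base checkpoint.

```python
# Sketch of casting NLI as masked language modeling: fill a cloze
# template and compare logits of label ("verbalizer") words at the mask.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

premise = "A man is playing a guitar."
hypothesis = "A person is making music."
prompt = f"{premise}? {tok.mask_token}, {hypothesis}"  # cloze-style template

inputs = tok(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Verbalizer: map label words to classes and compare their logits.
for word, label in [(" Yes", "entailment"), (" No", "contradiction")]:
    token_id = tok.encode(word, add_special_tokens=False)[0]
    print(label, logits[token_id].item())
```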

Learning to Reason for Text Generation from Scientific Tables

1 code implementation • 16 Apr 2021 • Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych

In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions.

Tasks: Arithmetic Reasoning, Data-to-Text Generation
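
One common way to feed such tables to a text generation model is to linearize them into a flat input string. The markup tokens below follow a generic serialization convention, not necessarily the exact format used with SciGen.

```python
# Illustration of linearizing a scientific table into a flat string for
# a sequence-to-sequence model. The markup scheme is a common convention,
# not necessarily SciGen's actual format.
table = {
    "caption": "Results on the test set.",
    "header": ["Model", "Accuracy"],
    "rows": [["Baseline", "71.2"], ["Ours", "78.9"]],
}

def linearize(table):
    parts = [f"<caption> {table['caption']}"]
    for row in table["rows"]:
        cells = " ".join(
            f"<cell> {value} <header> {col}"
            for col, value in zip(table["header"], row)
        )
        parts.append(f"<row> {cells}")
    return " ".join(parts)

print(linearize(table))
```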

Stay Together: A System for Single and Split-antecedent Anaphora Resolution

1 code implementation • NAACL 2021 • Juntao Yu, Nafise Sadat Moosavi, Silviu Paun, Massimo Poesio

Split-antecedent anaphora is rarer and more complex to resolve than single-antecedent anaphora; as a result, it is not annotated in many datasets designed to test coreference, and previous work on resolving this type of anaphora was carried out in unrealistic conditions that assume gold mentions and/or gold split-antecedent anaphors are available.

Coreference Reasoning in Machine Reading Comprehension

1 code implementation • ACL 2021 • Mingzhu Wu, Nafise Sadat Moosavi, Dan Roth, Iryna Gurevych

We propose a methodology for creating MRC datasets that better reflect the challenges of coreference reasoning and use it to create a sample evaluation set.

Tasks: Coreference Resolution (+3 more)

Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures

no code implementations • 23 Oct 2020 • Nafise Sadat Moosavi, Marcel de Boer, Prasetya Ajie Utama, Iryna Gurevych

Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective so that models learn less from biased examples.

Tasks: Data Augmentation

Towards Debiasing NLU Models from Unknown Biases

1 code implementation • EMNLP 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych

Recently proposed debiasing methods have been shown to be effective in mitigating the tendency of NLU models to exploit dataset biases.
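
One widely used debiasing objective in this line of work is product-of-experts, where the main model is trained through logits combined with a frozen bias model's logits, so it receives little gradient from examples the biased model already solves. A minimal sketch, not necessarily the paper's exact formulation:

```python
# Sketch of a product-of-experts (PoE) debiasing loss: combine the main
# model's and a frozen bias model's log-probabilities, then apply the
# usual cross-entropy on the combined logits.
import torch
import torch.nn.functional as F

def poe_loss(main_logits, bias_logits, labels):
    # Combine the two experts in log space; the bias model is not updated.
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(
        bias_logits.detach(), dim=-1
    )
    return F.cross_entropy(combined, labels)

main_logits = torch.randn(4, 3, requires_grad=True)  # batch of 4, 3 classes
bias_logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
print(poe_loss(main_logits, bias_logits, labels))
```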

Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance

1 code implementation • ACL 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych

Models for natural language understanding (NLU) tasks often rely on the idiosyncratic biases of the dataset, which make them brittle against test cases outside the training distribution.

Tasks: Natural Language Understanding

Neural Duplicate Question Detection without Labeled Training Data

1 code implementation • IJCNLP 2019 • Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych

We show that our proposed approaches are more effective in many cases because they can utilize larger amounts of unlabeled data from cQA forums.

Tasks: Answer Selection, Community Question Answering (+1 more)
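
A sketch of how unlabeled cQA forum data can yield weak supervision: treat a question's title and body as a positive pair, and a title paired with a random other body as a negative one. The construction below is illustrative; the paper's exact setup may differ.

```python
# Sketch of building weakly labeled duplicate-detection pairs from cQA
# posts without any human labels. Posts and pairing scheme are
# illustrative, not the paper's exact method.
import random

posts = [
    {"title": "How do I reset my router?", "body": "My router keeps dropping the connection..."},
    {"title": "Python list vs tuple?", "body": "When should I prefer a tuple over a list..."},
]

def weak_pairs(posts, rng):
    pairs = []
    for post in posts:
        pairs.append((post["title"], post["body"], 1))  # positive: same question
        other = rng.choice([p for p in posts if p is not post])
        pairs.append((post["title"], other["body"], 0))  # negative: mismatched
    return pairs

for title, body, label in weak_pairs(posts, random.Random(0)):
    print(label, "|", title, "=>", body[:30])
```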

Improving Generalization by Incorporating Coverage in Natural Language Inference

no code implementations • 19 Sep 2019 • Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych

Finally, we show that the benefits of using coverage information are not limited to improving performance across different datasets of the same task.

Tasks: Natural Language Inference
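
As a simplified illustration of a coverage signal, the function below measures what fraction of hypothesis tokens also occur in the premise; the paper derives coverage from a trained model rather than this bare lexical overlap.

```python
# Simplified lexical stand-in for a coverage feature in NLI: the
# fraction of hypothesis tokens that also appear in the premise.
# Illustration only; the paper uses a learned coverage model.
def lexical_coverage(premise, hypothesis):
    premise_tokens = set(premise.lower().split())
    hyp_tokens = hypothesis.lower().split()
    return sum(t in premise_tokens for t in hyp_tokens) / len(hyp_tokens)

print(lexical_coverage(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
))
```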
