Search Results for author: Nafise Sadat Moosavi

Found 35 papers, 22 papers with code

Evaluating Coreference Resolvers on Community-based Question Answering: From Rule-based to State of the Art

1 code implementation • COLING (CRAC) 2022 • Haixia Chai, Nafise Sadat Moosavi, Iryna Gurevych, Michael Strube

The results of our extrinsic evaluation show that while there is a significant performance gap between the rule-based system and the state-of-the-art neural model on coreference resolution datasets, we do not observe a considerable difference in their impact on downstream models.

Answer Selection • coreference-resolution • +1

The Universal Anaphora Scorer

no code implementations • LREC 2022 • Juntao Yu, Sopan Khosla, Nafise Sadat Moosavi, Silviu Paun, Sameer Pradhan, Massimo Poesio

It also supports the evaluation of split antecedent anaphora and discourse deixis, for which no tools existed.

How to Leverage Digit Embeddings to Represent Numbers?

no code implementations • 1 Jul 2024 • Jasivan Alex Sivakumar, Nafise Sadat Moosavi

Apart from performing arithmetic operations, understanding numbers themselves is still a challenge for existing language models.

Beyond Hate Speech: NLP's Challenges and Opportunities in Uncovering Dehumanizing Language

no code implementations • 21 Feb 2024 • Hezhao Zhang, Lasana Harris, Nafise Sadat Moosavi

Dehumanization, characterized as a subtle yet harmful manifestation of hate speech, involves denying individuals their human qualities and often results in violence against marginalized groups.

Decoding News Narratives: A Critical Analysis of Large Language Models in Framing Detection

no code implementations • 18 Feb 2024 • Valeria Pastorino, Jasivan A. Sivakumar, Nafise Sadat Moosavi

Previous studies on framing have relied on manual analysis or fine-tuning models with limited annotated datasets.

Bias Detection

LLMs as Narcissistic Evaluators: When Ego Inflates Evaluation Scores

no code implementations • 16 Nov 2023 • Yiqi Liu, Nafise Sadat Moosavi, Chenghua Lin

Automatic evaluation of generated textual content presents an ongoing challenge within the field of NLP.

Language Modelling

Learning From Free-Text Human Feedback -- Collect New Datasets Or Extend Existing Ones?

1 code implementation • 24 Oct 2023 • Dominic Petrak, Nafise Sadat Moosavi, Ye Tian, Nikolai Rozanov, Iryna Gurevych

Learning from free-text human feedback is essential for dialog systems, but annotated data is scarce and usually covers only a small fraction of error types known in conversational AI.

Chatbot • Response Generation • +1

FERMAT: An Alternative to Accuracy for Numerical Reasoning

1 code implementation • 27 May 2023 • Jasivan Alex Sivakumar, Nafise Sadat Moosavi

Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables the systematic and automated generation of arbitrarily large training or evaluation sets for each aspect. The datasets and code are publicly available for generating further multi-view data for additional tasks and languages.

Layer or Representation Space: What makes BERT-based Evaluation Metrics Robust?

1 code implementation • COLING 2022 • Doan Nam Long Vu, Nafise Sadat Moosavi, Steffen Eger

The evaluation of recent embedding-based evaluation metrics for text generation is primarily based on measuring their correlation with human evaluations on standard benchmarks.

Text Generation • Word Embeddings

Transformers with Learnable Activation Functions

2 code implementations • 30 Aug 2022 • Haishuo Fang, Ji-Ung Lee, Nafise Sadat Moosavi, Iryna Gurevych

In contrast to conventional, predefined activation functions, rational activation functions (RAFs) can adaptively learn optimal activation functions during training according to the input data.

Scoring Coreference Chains with Split-Antecedent Anaphors

1 code implementation • 24 May 2022 • Silviu Paun, Juntao Yu, Nafise Sadat Moosavi, Massimo Poesio

Anaphoric reference is an aspect of language interpretation that covers a variety of interpretation types beyond simple identity reference to entities introduced via nominal expressions, which is the focus of the traditional coreference task in its most recent incarnation in ONTONOTES and similar datasets.

Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models

2 code implementations • 13 May 2022 • Dominic Petrak, Nafise Sadat Moosavi, Iryna Gurevych

In this paper, we propose a new extended pretraining approach called Arithmetic-Based Pretraining that jointly addresses both in one extended pretraining step without requiring architectural changes or pretraining from scratch.

Contrastive Learning • Reading Comprehension • +1

Adaptable Adapters

1 code implementation • NAACL 2022 • Nafise Sadat Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych

The resulting adapters (a) contain about 50% of the learnable parameters of the standard adapter, and are therefore more efficient at training and inference and require less storage space, and (b) achieve considerably higher performance in low-data settings.

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.

Data Augmentation • Diversity

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning

1 code implementation • EMNLP 2021 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, Iryna Gurevych

Recent prompt-based approaches allow pretrained language models to achieve strong performance on few-shot finetuning by reformulating downstream tasks as a language modeling problem.

Language Modelling • Sentence • +1

Learning to Reason for Text Generation from Scientific Tables

1 code implementation • 16 Apr 2021 • Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych

In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions.

Arithmetic Reasoning • Data-to-Text Generation

Stay Together: A System for Single and Split-antecedent Anaphora Resolution

1 code implementation • NAACL 2021 • Juntao Yu, Nafise Sadat Moosavi, Silviu Paun, Massimo Poesio

Split-antecedent anaphora is rarer and more complex to resolve than single-antecedent anaphora; as a result, it is not annotated in many datasets designed to test coreference, and previous work on resolving this type of anaphora was carried out in unrealistic conditions that assume gold mentions and/or gold split-antecedent anaphors are available.

Coreference Reasoning in Machine Reading Comprehension

1 code implementation • ACL 2021 • Mingzhu Wu, Nafise Sadat Moosavi, Dan Roth, Iryna Gurevych

We propose a methodology for creating MRC datasets that better reflect the challenges of coreference reasoning and use it to create a sample evaluation set.

coreference-resolution • Machine Reading Comprehension • +2

Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures

no code implementations • 23 Oct 2020 • Nafise Sadat Moosavi, Marcel de Boer, Prasetya Ajie Utama, Iryna Gurevych

Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective so that models learn less from biased examples.

Data Augmentation • Sentence

Towards Debiasing NLU Models from Unknown Biases

1 code implementation • EMNLP 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych

Recently proposed debiasing methods are shown to be effective in mitigating this tendency.

Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance

1 code implementation • ACL 2020 • Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych

Models for natural language understanding (NLU) tasks often rely on the idiosyncratic biases of the dataset, which make them brittle against test cases outside the training distribution.

Diversity • Natural Language Understanding

Neural Duplicate Question Detection without Labeled Training Data

1 code implementation • IJCNLP 2019 • Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych

We show that our proposed approaches are more effective in many cases because they can utilize larger amounts of unlabeled data from cQA forums.

Answer Selection • Community Question Answering • +1

Improving Generalization by Incorporating Coverage in Natural Language Inference

no code implementations • 19 Sep 2019 • Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych

Finally, we show that using the coverage information is beneficial not only for improving performance across different datasets of the same task.

Natural Language Inference • Relation