Search Results for author: Sara Rajaee

Found 7 papers, 4 papers with code

Analyzing the Evaluation of Cross-Lingual Knowledge Transfer in Multilingual Language Models

no code implementations 3 Feb 2024 Sara Rajaee, Christof Monz

Recent advances in training multilingual language models on large datasets seem to have shown promising results in knowledge transfer across languages, achieving high performance on downstream tasks.

Transfer Learning

Looking at the Overlooked: An Analysis on the Word-Overlap Bias in Natural Language Inference

1 code implementation 7 Nov 2022 Sara Rajaee, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar

It has been shown that NLI models are usually biased with respect to the word-overlap between premise and hypothesis; they take this feature as a primary cue for predicting the entailment label.

Natural Language Inference
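
As an illustration of the feature this paper analyzes, here is a minimal sketch of a premise-hypothesis word-overlap score (a hypothetical helper for illustration, not the paper's own implementation):

    def word_overlap(premise: str, hypothesis: str) -> float:
        """Fraction of hypothesis tokens that also appear in the premise."""
        premise_tokens = set(premise.lower().split())
        hypothesis_tokens = hypothesis.lower().split()
        if not hypothesis_tokens:
            return 0.0
        shared = sum(1 for tok in hypothesis_tokens if tok in premise_tokens)
        return shared / len(hypothesis_tokens)

    # High overlap (here 1.0) is the cue that biased NLI models tend to read as "entailment".
    print(word_overlap("A man is playing a guitar on stage", "A man is playing a guitar"))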

On the Importance of Data Size in Probing Fine-tuned Models

1 code implementation Findings (ACL) 2022 Houman Mehrafarin, Sara Rajaee, Mohammad Taher Pilehvar

The analysis also reveals that larger training data mainly affects higher layers, and that the extent of this change depends on the number of iterations updating the model during fine-tuning rather than on the diversity of the training samples.
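
For context, probing in this line of work typically means training a lightweight classifier on frozen representations from a given layer and reading its accuracy as a measure of the task information those representations encode. A minimal sketch with scikit-learn, using randomly generated arrays as hypothetical stand-ins for extracted layer representations and task labels:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-ins: one 768-d vector per sentence from a chosen layer, plus binary labels.
    rng = np.random.default_rng(0)
    layer_features = rng.normal(size=(1000, 768))
    labels = rng.integers(0, 2, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(layer_features, labels, random_state=0)

    # A simple linear probe on the frozen features.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", probe.score(X_test, y_test))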

How Does Fine-tuning Affect the Geometry of Embedding Space: A Case Study on Isotropy

no code implementations Findings (EMNLP) 2021 Sara Rajaee, Mohammad Taher Pilehvar

It is widely accepted that fine-tuning pre-trained language models usually brings about performance improvements in downstream tasks.

A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space

1 code implementation ACL 2021 Sara Rajaee, Mohammad Taher Pilehvar

Based on this observation, we propose a local cluster-based method to address the degeneration issue in contextual embedding spaces.
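
A minimal sketch in that spirit (cluster the embeddings, then remove each cluster's mean and a few dominant principal directions; the cluster count and number of removed components below are illustrative choices, not the paper's settings):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def cluster_based_postprocess(embeddings, n_clusters=10, n_remove=3):
        """Zero-center each cluster and project out its dominant directions."""
        cluster_ids = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
        out = embeddings.copy()
        for c in range(n_clusters):
            idx = np.where(cluster_ids == c)[0]
            centered = embeddings[idx] - embeddings[idx].mean(axis=0)
            pca = PCA(n_components=n_remove).fit(centered)
            # Dominant directions carry most of the anisotropy; remove their projections.
            centered -= centered @ pca.components_.T @ pca.components_
            out[idx] = centered
        return out

    # Example on hypothetical contextual embeddings (2000 vectors, dimension 768).
    vectors = np.random.default_rng(0).normal(size=(2000, 768))
    isotropic_vectors = cluster_based_postprocess(vectors)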
