Search Results for author: Sara Rajaee

Found 9 papers, 4 papers with code

Local Look-Ahead Guidance via Verifier-in-the-Loop for Automated Theorem Proving

no code implementations • 12 Mar 2025 • Sara Rajaee, Kumar Pratik, Gabriele Cesa, Arash Behboodi

The most promising recent methods for AI reasoning require applying variants of reinforcement learning (RL) either on trajectories rolled out from the model, even for step-wise rewards, or on large quantities of human-annotated trajectory data.

Automated Theorem Proving • Reinforcement Learning (RL) • +1

On the Evaluation Practices in Multilingual NLP: Can Machine Translation Offer an Alternative to Human Translations?

no code implementations • 20 Jun 2024 • Rochelle Choenni, Sara Rajaee, Christof Monz, Ekaterina Shutova

While multilingual language models (MLMs) have been trained on 100+ languages, they are typically only evaluated across a handful of them due to a lack of available test data in most languages.

Machine Translation • Multilingual NLP • +1

Analyzing the Evaluation of Cross-Lingual Knowledge Transfer in Multilingual Language Models

no code implementations • 3 Feb 2024 • Sara Rajaee, Christof Monz

Recent advances in training multilingual language models on large datasets seem to have shown promising results in knowledge transfer across languages, achieving high performance on downstream tasks.

Transfer Learning

Looking at the Overlooked: An Analysis on the Word-Overlap Bias in Natural Language Inference

1 code implementation • 7 Nov 2022 • Sara Rajaee, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar

It has been shown that NLI models are usually biased with respect to the word-overlap between premise and hypothesis; they treat this feature as a primary cue for predicting the entailment label.

Natural Language Inference

On the Importance of Data Size in Probing Fine-tuned Models

1 code implementation • Findings (ACL) 2022 • Houman Mehrafarin, Sara Rajaee, Mohammad Taher Pilehvar

The analysis also reveals that larger training data mainly affects higher layers, and that the extent of this change is driven by the number of iterations updating the model during fine-tuning rather than by the diversity of the training samples.

Diversity

How Does Fine-tuning Affect the Geometry of Embedding Space: A Case Study on Isotropy

no code implementations • Findings (EMNLP) 2021 • Sara Rajaee, Mohammad Taher Pilehvar

It is widely accepted that fine-tuning pre-trained language models usually brings about performance improvements in downstream tasks.

A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space

1 code implementation • ACL 2021 • Sara Rajaee, Mohammad Taher Pilehvar

Based on this observation, we propose a local cluster-based method to address the degeneration issue in contextual embedding spaces.
