no code implementations • EMNLP (BlackboxNLP) 2021 • Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar
Most recent work on probing representations has focused on BERT, with the presumption that the findings might carry over to other models.
1 code implementation • 17 Jun 2025 • Zeinab Sadat Taghavi, Ali Modarressi, Yunpu Ma, Hinrich Schütze
But even with a short context of only ten documents, including the positive document, GPT-4.1 scores only 35.06%, showing that document-side reasoning remains a challenge.
no code implementations • 27 May 2025 • Raoyuan Zhao, Abdullatif Köksal, Ali Modarressi, Michael A. Hedderich, Hinrich Schütze
To evaluate these probing methods, in this paper we propose a new process based on input variations and quantitative metrics.
no code implementations • 6 Mar 2025 • Mohsen Fayyaz, Ali Modarressi, Hinrich Schuetze, Nanyun Peng
Dense retrieval models are commonly used in Information Retrieval (IR) applications, such as Retrieval-Augmented Generation (RAG).
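As a minimal sketch of the dense-retrieval step in a RAG pipeline: embed the query and the documents with one encoder and rank by cosine similarity. The encoder name and corpus below are illustrative, not the paper's setup.

```python
# Minimal dense-retrieval sketch: one encoder embeds both query and documents,
# and documents are ranked by cosine similarity. Model name is illustrative.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any dense encoder works

docs = [
    "The Eiffel Tower is in Paris.",
    "Transformers use self-attention.",
    "Dense retrievers embed text into vectors.",
]
query = "How do dense retrieval models represent text?"

doc_emb = encoder.encode(docs, convert_to_tensor=True, normalize_embeddings=True)
q_emb = encoder.encode(query, convert_to_tensor=True, normalize_embeddings=True)

scores = util.cos_sim(q_emb, doc_emb)[0]   # query-to-document similarities
best = scores.argmax().item()
print(docs[best], scores[best].item())     # top-ranked passage for the query
```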
1 code implementation • 7 Feb 2025 • Ali Modarressi, Hanieh Deilamsalehy, Franck Dernoncourt, Trung Bui, Ryan A. Rossi, Seunghyun Yoon, Hinrich Schütze
A popular method for evaluating these capabilities is the needle-in-a-haystack (NIAH) test, which involves retrieving a "needle" (relevant information) from a "haystack" (long irrelevant context).
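A minimal sketch of how one NIAH instance can be constructed; the filler text, needle, question, and model call are illustrative stand-ins, and real NIAH suites sweep needle depth and context length.

```python
import random

# Needle-in-a-haystack sketch: bury one relevant sentence (the needle) inside
# long irrelevant filler (the haystack), then check whether a model's answer
# to a retrieval question recovers the needle's fact.
filler = "The sky was clear and the market was busy that morning. " * 500
needle = "The secret passcode is 7319."
question = "What is the secret passcode?"

pos = random.randint(0, len(filler))            # random insertion depth
haystack = filler[:pos] + " " + needle + " " + filler[pos:]
prompt = f"{haystack}\n\nQuestion: {question}\nAnswer:"

# answer = my_llm(prompt)                       # hypothetical model call
# print("7319" in answer)                       # simple containment check
```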
1 code implementation • 8 Oct 2024 • Amir Hossein Kargaran, Ali Modarressi, Nafiseh Nikeghbal, Jana Diesner, François Yvon, Hinrich Schütze
This suggests that MEXA is a reliable method for estimating the multilingual capabilities of English-centric LLMs, providing a clearer picture of their multilingual potential and inner workings.
1 code implementation • 9 Jul 2024 • Ali Modarressi, Abdullatif Köksal, Hinrich Schütze
We first demonstrate that models trained on factual data exhibit inconsistent behavior: while they accurately extract triples from factual data, they fail to extract the same triples after counterfactual modification.
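A sketch of the kind of consistency check this describes, under stated assumptions: `extract_triples` is a hypothetical stand-in for the trained extractor, and the passages are illustrative.

```python
# Consistency check sketch: a reliable extractor should recover the edited
# triple from the counterfactual passage rather than fall back to the
# memorized fact. `extract_triples` is a hypothetical stand-in.
factual = "Marie Curie was born in Warsaw."
counterfactual = "Marie Curie was born in Oslo."  # object counterfactually swapped

def extract_triples(text: str) -> set:
    raise NotImplementedError("plug in a trained (subject, relation, object) extractor")

def consistent_on_counterfactual() -> bool:
    # Inconsistent behavior: the triple is recovered from `factual`, but
    # ("Marie Curie", "born_in", "Oslo") is missed on `counterfactual`.
    return ("Marie Curie", "born_in", "Oslo") in extract_triples(counterfactual)
```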
1 code implementation • 17 Apr 2024 • Ali Modarressi, Abdullatif Köksal, Ayyoob Imani, Mohsen Fayyaz, Hinrich Schütze
While current large language models (LLMs) perform well on many knowledge-related tasks, they are limited by relying on their parameters as an implicit storage mechanism.
1 code implementation • 5 Jun 2023 • Ali Modarressi, Mohsen Fayyaz, Ehsan Aghazadeh, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar
An emerging solution for explaining Transformer-based models is to use vector-based analysis of how the representations are formed.
1 code implementation • 23 May 2023 • Ali Modarressi, Ayyoob Imani, Mohsen Fayyaz, Hinrich Schütze
Large language models (LLMs) have significantly advanced the field of natural language processing (NLP) through their extensive parameters and comprehensive data utilization.
1 code implementation • 6 Feb 2023 • Ali Modarressi, Hossein Amirkhani, Mohammad Taher Pilehvar
A popular workaround is to train a robust model by re-weighting training examples based on a secondary biased model.
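One common instantiation of this idea down-weights examples that the biased model already solves. The sketch below uses the weighting scheme (1 − biased probability of the gold label), which is an assumption here, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Debiasing-by-re-weighting sketch: examples the secondary biased model gets
# right receive low weight, so the main model focuses on examples that the
# bias alone cannot solve.
def reweighted_loss(main_logits, biased_logits, labels):
    p_bias = F.softmax(biased_logits, dim=-1)                  # biased model's confidence
    gold_p = p_bias.gather(1, labels.unsqueeze(1)).squeeze(1)  # prob. of gold label
    weights = 1.0 - gold_p                                     # easy-for-bias => low weight
    ce = F.cross_entropy(main_logits, labels, reduction="none")
    return (weights * ce).mean()
```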
no code implementations • 10 Nov 2022 • Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Mohammad Taher Pilehvar, Yadollah Yaghoobzadeh, Samira Ebrahimi Kahou
In this work, we employ these two metrics for the first time in NLP.
1 code implementation • NAACL 2022 • Ali Modarressi, Mohsen Fayyaz, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar
There has been a growing interest in interpreting the underlying dynamics of Transformers.
1 code implementation • ACL 2022 • Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar
To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method.
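As a sketch of the gradient-based saliency such a predictor can be supervised with: score each token by gradient × input on the embeddings. The HuggingFace-style model interface (accepting `inputs_embeds`) is an assumption, and the paper's exact saliency formulation may differ.

```python
import torch

def saliency_scores(model, input_ids, attention_mask):
    """Gradient-x-input token saliency; model interface is an assumption."""
    embeds = model.get_input_embeddings()(input_ids)       # [B, T, H]
    embeds.retain_grad()                                   # keep grad on non-leaf tensor
    out = model(inputs_embeds=embeds, attention_mask=attention_mask)
    out.logits.max(dim=-1).values.sum().backward()         # top-class scores
    scores = (embeds.grad * embeds).sum(dim=-1).abs()      # [B, T] per-token saliency
    return scores / scores.sum(dim=-1, keepdim=True)       # normalize per example
```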
1 code implementation • EMNLP 2021 • Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar
Several studies have investigated the linguistic features captured by BERT.