no code implementations • 22 Oct 2023 • Andrew Drozdov, Honglei Zhuang, Zhuyun Dai, Zhen Qin, Razieh Rahimi, Xuanhui Wang, Dana Alon, Mohit Iyyer, Andrew McCallum, Donald Metzler, Kai Hui
Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which results from a first-stage retrieval method, such as BM25, are rated and reordered to improve relevance.
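The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's method: `score_passage` is a placeholder for the LLM relevance rater (a real system would prompt an LLM per query-passage pair); here simple term overlap stands in so the sketch runs end to end.

```python
# Second-stage re-ranking sketch: a first-stage retriever (e.g. BM25) returns
# candidate passages, a scorer rates each query-passage pair, and the list is
# reordered by score.

def score_passage(query: str, passage: str) -> float:
    """Placeholder relevance rater; a real system would prompt an LLM here."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / max(len(q_terms), 1)

def rerank(query: str, first_stage_results: list[str]) -> list[str]:
    """Reorder first-stage candidates by the second-stage relevance score."""
    return sorted(first_stage_results,
                  key=lambda p: score_passage(query, p),
                  reverse=True)

candidates = [
    "cooking pasta at home",
    "large language models for passage ranking",
    "zero-shot ranking with language models",
]
top = rerank("zero-shot language models ranking", candidates)[0]
```

Swapping the placeholder scorer for an LLM prompt leaves the surrounding pipeline unchanged, which is what makes this second stage easy to bolt onto any first-stage retriever.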
no code implementations • 24 Dec 2022 • Tanya Chowdhury, Razieh Rahimi, James Allan
In this work, we extend LIME to propose Rank-LIME, a model-agnostic, local, post-hoc linear feature attribution method for the task of learning to rank that generates explanations for ranked lists.
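The idea of a model-agnostic, local, post-hoc attribution can be illustrated with a toy occlusion-style sketch in the LIME spirit (this is not the Rank-LIME algorithm itself): perturb the features of a ranked item, query the black-box ranker for new scores, and attribute to each feature the average score change it causes. The `black_box_score` function is a stand-in assumption for an opaque ranking model.

```python
import random

def black_box_score(features: list[float]) -> float:
    """Stand-in for an opaque ranker's relevance score."""
    return 2.0 * features[0] - 1.0 * features[1] + 0.5 * features[2]

def occlusion_attributions(features: list[float],
                           n_samples: int = 400,
                           seed: int = 0) -> list[float]:
    """For each feature, the mean score with it present minus with it masked."""
    rng = random.Random(seed)
    n = len(features)
    on_sum, on_n = [0.0] * n, [0] * n
    off_sum, off_n = [0.0] * n, [0] * n
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n)]  # randomly drop features
        score = black_box_score([f if m else 0.0
                                 for f, m in zip(features, mask)])
        for i, m in enumerate(mask):
            if m:
                on_sum[i] += score; on_n[i] += 1
            else:
                off_sum[i] += score; off_n[i] += 1
    return [on_sum[i] / max(on_n[i], 1) - off_sum[i] / max(off_n[i], 1)
            for i in range(n)]

attrib = occlusion_attributions([1.0, 1.0, 1.0])
```

For this linear black box the recovered attributions approximate the true coefficients, showing how a local surrogate can explain an individual ranking decision without access to model internals.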
1 code implementation • 28 Oct 2022 • Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs.
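The conditioning described above is often realized, as in kNN-LM-style models, by interpolating the base LM's next-token distribution with a distribution built from neighbors retrieved from the external datastore. A minimal sketch with toy probabilities (the interpolation weight `lam` and all numbers are assumptions for illustration):

```python
def interpolate(p_lm: dict[str, float],
                p_knn: dict[str, float],
                lam: float) -> dict[str, float]:
    """Mix the base-LM and retrieval distributions: lam*p_knn + (1-lam)*p_lm."""
    vocab = set(p_lm) | set(p_knn)
    return {t: lam * p_knn.get(t, 0.0) + (1 - lam) * p_lm.get(t, 0.0)
            for t in vocab}

p_lm = {"cat": 0.6, "dog": 0.4}    # base LM next-token distribution
p_knn = {"cat": 0.9, "dog": 0.1}   # distribution over retrieved neighbors
mixed = interpolate(p_lm, p_knn, lam=0.25)
```

When the retrieved contexts agree with the base LM, the mixture sharpens the prediction; this is one route by which retrieval can lower perplexity relative to a standard LM.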
Ranked #9 on Language Modelling on WikiText-103
no code implementations • 2 Nov 2021 • Razieh Rahimi, Youngwoo Kim, Hamed Zamani, James Allan
GenEx explains a search result by providing a terse description for the query aspect covered by that result.
no code implementations • 10 Sep 2021 • Youngwoo Kim, Razieh Rahimi, Hamed Bonab, James Allan
Transformer-based rankers have shown state-of-the-art performance.
no code implementations • 7 Sep 2021 • Zhiqi Huang, Hamed Bonab, Sheikh Muhammad Sarwar, Razieh Rahimi, James Allan
In monolingual retrieval, because queries and documents share the same lexical space, it is easier for the model to identify query terms that occur in documents.