no code implementations • 15 Aug 2022 • Nima Sadri
Our framework (1) embeds the documents and queries; (2) for each query-document pair, computes a relevance score as the dot product of the document and query embeddings; (3) evaluates the models on the $\texttt{dev}$ set of the MSMARCO dataset; and (4) uses the $\texttt{trec_eval}$ script to compute MRR@100, the primary metric used to evaluate the models.
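The scoring and evaluation steps above can be sketched in a few lines. This is not the authors' code: the embeddings are illustrative toy vectors, and MRR is computed directly here rather than via $\texttt{trec_eval}$, purely to show the dot-product ranking and the reciprocal-rank metric.

```python
import numpy as np

def relevance_scores(query_emb, doc_embs):
    # Step (2): relevance of each document to the query is the
    # dot product of the document and query embeddings.
    return doc_embs @ query_emb

def mrr_at_k(ranked_doc_ids, relevant_ids, k=100):
    # Reciprocal rank of the first relevant document in the top-k
    # (averaged over queries in practice; one query shown here).
    for rank, doc_id in enumerate(ranked_doc_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# Toy example: one query, three documents (embeddings are made up).
query = np.array([0.1, 0.9, 0.0])
docs = np.array([
    [0.0, 1.0, 0.0],   # doc 0: closely aligned with the query
    [1.0, 0.0, 0.0],   # doc 1: orthogonal to the query
    [0.5, 0.5, 0.0],   # doc 2: partially aligned
])
scores = relevance_scores(query, docs)          # [0.9, 0.1, 0.5]
ranking = np.argsort(-scores).tolist()          # docs sorted by descending score
print(ranking)                                  # [0, 2, 1]
print(mrr_at_k(ranking, relevant_ids={0}))      # 1.0: relevant doc at rank 1
```

In practice the run would be written in TREC format and passed to $\texttt{trec_eval}$, which computes MRR@100 over all $\texttt{dev}$ queries.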
no code implementations • 15 Aug 2022 • Nima Sadri, Gordon V. Cormack
Pre-trained and fine-tuned transformer models such as BERT and T5 have improved the state of the art in ad-hoc retrieval and question answering, but not yet in high-recall information retrieval, where the objective is to retrieve substantially all relevant documents.
no code implementations • 13 Aug 2021 • Nima Sadri, Bohan Zhang, Bihan Liu
We evaluate our model on the test set of the AMI dataset and report the ROUGE-2 score of the generated summaries for comparison with previous literature.
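For reference, ROUGE-2 measures bigram overlap between a generated summary and a reference summary. The sketch below is not the authors' evaluation code (published results typically use the official ROUGE toolkit); it only illustrates the metric on made-up sentences, computing the F1 variant from clipped bigram counts.

```python
from collections import Counter

def bigrams(tokens):
    # Multiset of adjacent token pairs.
    return Counter(zip(tokens, tokens[1:]))

def rouge2_f1(candidate, reference):
    # ROUGE-2 F1: harmonic mean of bigram precision and recall,
    # where the overlap count is clipped per bigram (Counter &).
    cand = bigrams(candidate.split())
    ref = bigrams(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Illustrative sentences, not from the AMI dataset.
reference = "the meeting discussed the project timeline"
candidate = "the meeting discussed project deadlines"
print(round(rouge2_f1(candidate, reference), 3))  # 0.444
```

Two bigrams overlap ("the meeting", "meeting discussed"), giving precision 2/4 and recall 2/5, hence F1 = 4/9.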