Search Results for author: Nima Sadri

Found 3 papers, 0 papers with code

Evaluating Dense Passage Retrieval using Transformers

no code implementations · 15 Aug 2022 · Nima Sadri

Our framework (1) embeds the documents and queries; (2) for each query-document pair, computes a relevance score as the dot product of the document and query embeddings; (3) evaluates the models on the dev set of the MSMARCO dataset; (4) uses the trec_eval script to compute MRR@100, the primary evaluation metric.
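Steps (1) and (2) of the framework can be sketched as follows. The embedding function below is a toy stand-in for a real transformer encoder (the paper's actual models are not reproduced here); only the dot-product scoring and ranking logic mirror the described pipeline.

```python
def embed(text, dim=8):
    """Toy deterministic embedding; a real system would use a transformer encoder."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def rank(query, documents):
    """Return documents sorted by dot-product relevance to the query, best first."""
    q = embed(query)
    scored = [(dot(q, embed(d)), d) for d in documents]
    return [d for _, d in sorted(scored, reverse=True)]
```

In a real evaluation, the ranked list per query would be written in TREC run format and scored with trec_eval to obtain MRR@100.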

Passage Retrieval · Retrieval

Continuous Active Learning Using Pretrained Transformers

no code implementations · 15 Aug 2022 · Nima Sadri, Gordon V. Cormack

Pre-trained and fine-tuned transformer models like BERT and T5 have improved the state of the art in ad-hoc retrieval and question-answering, but not as yet in high-recall information retrieval, where the objective is to retrieve substantially all relevant documents.
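The continuous active learning (CAL) protocol for high-recall retrieval can be sketched as the loop below. The scoring model and reviewer oracle are placeholders, not the authors' transformer-based method; a real system would retrain the model on the accumulated judgments each round.

```python
def continuous_active_learning(documents, is_relevant, score, batch_size=2, budget=6):
    """Sketch of a CAL loop: repeatedly review the top-scored unreviewed
    documents and collect relevance judgments until the review budget is spent.

    is_relevant: oracle simulating a human reviewer (doc -> bool).
    score: current relevance model (doc -> float); a placeholder here.
    """
    reviewed, found = set(), []
    while len(reviewed) < min(budget, len(documents)):
        # Rank unreviewed documents by the current model's score.
        candidates = sorted(
            (d for d in documents if d not in reviewed),
            key=score, reverse=True,
        )
        for doc in candidates[:batch_size]:
            reviewed.add(doc)
            if is_relevant(doc):
                found.append(doc)
        # A real implementation would retrain `score` on the judgments here.
    return found
```

The high-recall objective is reflected in the stopping rule: the loop continues until the budget is exhausted rather than stopping after the first few relevant hits.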

Active Learning · Information Retrieval +2

MeetSum: Transforming Meeting Transcript Summarization using Transformers!

no code implementations · 13 Aug 2021 · Nima Sadri, Bohan Zhang, Bihan Liu

We test our model on a testing set from the AMI dataset and report the ROUGE-2 score of the generated summary to compare with previous literature.
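ROUGE-2 measures bigram overlap between a generated summary and a reference. A self-contained sketch of the F1 variant is below; published evaluations (including, presumably, this paper's) typically use the official ROUGE toolkit or the `rouge_score` package rather than a hand-rolled implementation.

```python
from collections import Counter

def bigrams(text):
    """Multiset of word bigrams in a whitespace-tokenized, lowercased string."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def rouge2_f1(reference, candidate):
    """ROUGE-2 F1: harmonic mean of bigram precision and recall."""
    ref, cand = bigrams(reference), bigrams(candidate)
    overlap = sum((ref & cand).values())  # clipped bigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

An identical candidate and reference score 1.0; summaries sharing no bigrams score 0.0.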

Zero-Shot Learning
