Search Results for author: Stan Peshterliev

Found 6 papers, 2 papers with code

Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?

2 code implementations • 13 Oct 2021 • Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, Wen-tau Yih

Despite their recent popularity and well-known advantages, dense retrievers still lag behind sparse methods such as BM25 in their ability to reliably match salient phrases and rare entities in the query and to generalize to out-of-domain data.

Open-Domain Question Answering • Passage Retrieval +1
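The gap the abstract describes comes from how the two retriever families score documents. A minimal, illustrative sketch (not the paper's method; the toy corpus and scoring functions are assumptions): BM25 rewards exact matches on rare query terms, which is exactly where salient phrases and rare entities pay off, while a dense retriever scores by a dot product between learned embeddings and has no built-in notion of exact term overlap.

```python
# Illustrative contrast between sparse (BM25) and dense retrieval scoring.
# The corpus and queries are toy data; this is not the paper's implementation.
import math
from collections import Counter

def bm25_score(query, doc, docs, k1=1.5, b=0.75):
    """Okapi BM25: exact term matching, weighted by inverse document frequency,
    so a rare entity that appears in the document dominates the score."""
    avgdl = sum(len(d) for d in docs) / len(docs)
    tf = Counter(doc)
    score = 0.0
    for term in query:
        n = sum(1 for d in docs if term in d)              # docs containing term
        idf = math.log((len(docs) - n + 0.5) / (n + 0.5) + 1)
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def dense_score(q_vec, d_vec):
    """Dense retrieval: similarity is a dot product between learned
    query/passage embeddings -- no exact-match signal at all."""
    return sum(a * b for a, b in zip(q_vec, d_vec))

docs = [["einstein", "developed", "relativity"],
        ["newton", "formulated", "gravity"]]
query = ["einstein", "relativity"]
scores = [bm25_score(query, d, docs) for d in docs]
assert scores[0] > scores[1]  # exact match on the rare entity wins under BM25
```

A dense model must *learn* that "einstein" matters; BM25 gets it for free from term statistics, which is the behavior the paper tries to transfer into a dense retriever.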

Decoupled Transformer for Scalable Inference in Open-domain Question Answering

no code implementations • 5 Aug 2021 • Haytham ElFadeel, Stan Peshterliev

To reduce computational cost and latency, we propose decoupling the transformer MRC model into an input component and a cross component.

Knowledge Distillation • Machine Reading Comprehension +1
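A minimal structural sketch of the decoupling idea, under stated assumptions (the layer logic is a stand-in, not the paper's architecture): the input component encodes each sequence independently, so passage representations can be precomputed and cached offline, and only a smaller cross component runs jointly at query time.

```python
# Sketch of a decoupled MRC pipeline (illustrative stand-ins, not real layers):
# input_component runs per sequence and is cacheable; cross_component is the
# only part that must run jointly for every (question, passage) pair.

def input_component(tokens):
    """Stand-in for the lower transformer layers, run separately per input."""
    return [hash(t) % 100 / 100.0 for t in tokens]  # fake per-token encoding

passage_cache = {}

def encode_passage(pid, tokens):
    """Passage encodings are computed once offline and reused at query time."""
    if pid not in passage_cache:
        passage_cache[pid] = input_component(tokens)
    return passage_cache[pid]

def cross_component(q_repr, p_repr):
    """Stand-in for the upper layers that attend across question and passage."""
    return sum(q * p for q in q_repr for p in p_repr)

q = input_component(["who", "wrote", "hamlet"])
p = encode_passage("doc1", ["shakespeare", "wrote", "hamlet"])
score = cross_component(q, p)
```

The latency win comes from the cache: per query, only the question encoding and the (cheaper) cross component are computed, instead of re-running the full transformer over every passage.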

Robustly Optimized and Distilled Training for Natural Language Understanding

no code implementations • 16 Mar 2021 • Haytham ElFadeel, Stan Peshterliev

In this paper, we explore multi-task learning (MTL) as a second pretraining step to learn an enhanced universal language representation for transformer language models.

Knowledge Distillation • Machine Reading Comprehension +3
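The MTL-as-second-pretraining idea can be sketched in a few lines, with the caveat that the task names, shapes, and loss are illustrative assumptions, not the paper's setup: one shared encoder is updated by the losses of several task heads, so its representation must serve every task at once.

```python
# Toy multi-task learning step (illustrative; not the paper's training recipe):
# a shared parameter vector plays the role of the encoder, and each task has
# its own small head. Every task's gradient flows into the shared weights.
import random

random.seed(0)
shared_weights = [random.random() for _ in range(4)]    # "shared encoder"
task_heads = {"qa": [0.1] * 4, "nli": [0.2] * 4}        # one head per task

def forward(x, task):
    enc = [w * xi for w, xi in zip(shared_weights, x)]  # encode input
    return sum(h * e for h, e in zip(task_heads[task], enc))

def mtl_step(batches, lr=0.01):
    """One MTL update: each task's squared-error loss updates its own head
    AND the shared encoder; returns the total loss before the update."""
    total = 0.0
    for task, (x, y) in batches.items():
        pred = forward(x, task)
        total += (pred - y) ** 2
        grad = 2 * (pred - y)                           # d(loss)/d pred
        for i in range(len(x)):
            task_heads[task][i] -= lr * grad * shared_weights[i] * x[i]
            shared_weights[i] -= lr * grad * task_heads[task][i] * x[i]
    return total

batches = {"qa": ([1, 0, 1, 0], 1.0), "nli": ([0, 1, 0, 1], 0.0)}
loss_before = mtl_step(batches)
loss_after = mtl_step(batches)
```

In this toy the two tasks touch disjoint input dimensions, so both losses shrink; real MTL pretraining additionally has to manage interference between tasks that share the same encoder capacity.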
