Simple and Effective Unsupervised Redundancy Elimination to Compress Dense Vectors for Passage Retrieval

EMNLP 2021 · Xueguang Ma, Minghan Li, Kai Sun, Ji Xin, Jimmy Lin

Recent work has shown that dense passage retrieval techniques achieve better ranking accuracy in open-domain question answering compared to sparse retrieval techniques such as BM25, but at the cost of large space and memory requirements. In this paper, we analyze the redundancy present in encoded dense vectors and show that the default dimension of 768 is unnecessarily large. To improve space efficiency, we propose a simple unsupervised compression pipeline that consists of principal component analysis (PCA), product quantization, and hybrid search. We further investigate other supervised baselines and find, surprisingly, that unsupervised PCA outperforms them in some settings. We perform extensive experiments on five question answering datasets and demonstrate that our best pipeline achieves good accuracy–space trade-offs, for example, 48× compression with less than a 3% drop in top-100 retrieval accuracy on average, or 96× compression with less than a 4% drop. Code and data are available at http://pyserini.io/.
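To make the PCA + product quantization pipeline concrete, here is a minimal sketch using the Faiss library (`PCAMatrix`, `IndexPQ`, and `IndexPreTransform` are standard Faiss components). The reduced dimension (256), subquantizer count (64), inner-product metric, and random toy vectors are illustrative assumptions, not the paper's exact configuration; the authors' actual implementation lives in Pyserini.

```python
import numpy as np
import faiss

d_in, d_out = 768, 256   # original DPR dimension; PCA target (illustrative assumption)
M, nbits = 64, 8         # PQ: 64 subquantizers x 8 bits = 64 bytes per vector

# Toy vectors standing in for encoded passages (xb) and questions (xq).
rng = np.random.default_rng(0)
xb = rng.standard_normal((10000, d_in)).astype("float32")
xq = rng.standard_normal((5, d_in)).astype("float32")

pca = faiss.PCAMatrix(d_in, d_out)                        # unsupervised dimensionality reduction
pq = faiss.IndexPQ(d_out, M, nbits,
                   faiss.METRIC_INNER_PRODUCT)            # quantize reduced vectors; dense
                                                          # retrievers typically score by inner product
index = faiss.IndexPreTransform(pca, pq)                  # apply PCA, then product quantization

index.train(xb)                 # fit the PCA rotation and PQ codebooks on the corpus
index.add(xb)                   # only compressed codes are stored
D, I = index.search(xq, 100)    # top-100 retrieval against the compressed index
```

Under these assumed settings, each vector is stored as 64 one-byte PQ codes instead of 768 float32 values (3,072 bytes), i.e., roughly 48× smaller, the same compression regime the abstract reports; the paper's chosen PCA dimension and PQ parameters may differ, and its pipeline additionally combines the compressed dense index with sparse retrieval via hybrid search.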
