Search Results for author: Jiaxin Mao

Found 9 papers, 7 papers with code

Learning Discrete Representations via Constrained Clustering for Effective and Efficient Dense Retrieval

3 code implementations • 12 Oct 2021 Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma

However, the efficiency of most existing DR models is limited by the large memory cost of storing dense vectors and the time-consuming nearest neighbor search (NNS) in vector space.

Quantization
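The memory argument above is easy to make concrete. Below is a back-of-envelope sketch of why storing raw dense vectors is costly and why discrete codes help; the corpus size, dimensionality, and code length are illustrative assumptions, not figures from the paper.

```python
# Illustrative memory cost of a flat dense index vs. a discrete-code index.
# All sizes here are assumptions chosen for illustration.

def index_size_bytes(num_docs: int, dim: int, bytes_per_value: int) -> int:
    """Size of an index storing one vector of `dim` values per document."""
    return num_docs * dim * bytes_per_value

corpus = 8_800_000   # roughly passage-collection scale (assumption)
dim = 768            # common BERT-base embedding width

dense = index_size_bytes(corpus, dim, 4)   # float32 vectors
codes = index_size_bytes(corpus, 48, 1)    # 48 one-byte discrete codes per doc

print(f"dense index : {dense / 2**30:.1f} GiB")
print(f"coded index : {codes / 2**30:.2f} GiB")
print(f"compression : {dense // codes}x")
```

With these assumed numbers the dense index needs tens of GiB while the coded index fits in well under 1 GiB, which is the gap that motivates learning discrete representations.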

POSSCORE: A Simple Yet Effective Evaluation of Conversational Search with Part of Speech Labelling

1 code implementation • 7 Sep 2021 Zeyang Liu, Ke Zhou, Jiaxin Mao, Max L. Wilson

Conversational search systems, such as Google Assistant and Microsoft Cortana, provide a new search paradigm in which users communicate with the search system through natural language dialogues.

Conversational Search • POS

Jointly Optimizing Query Encoder and Product Quantization to Improve Retrieval Performance

3 code implementations • 2 Aug 2021 Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma

Compared with previous DR models that use brute-force search, JPQ almost matches the best retrieval performance with 30x compression on index size.

Information Retrieval • Quantization
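The compression in JPQ comes from product quantization. The sketch below shows the encode/decode mechanics only (random codebooks stand in for learned ones, and JPQ's joint training of the query encoder and codebooks is not reproduced); the dimensions M and K are assumptions, and the exact compression ratio depends on them.

```python
# Minimal product-quantization sketch (assumed setup, not the JPQ training
# procedure): split each vector into M subvectors and replace each subvector
# with the index of its nearest codeword in a per-subspace codebook.
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 768, 24, 256                         # dim, subspaces, codewords (assumptions)
codebooks = rng.normal(size=(M, K, D // M))    # normally learned; random here

def pq_encode(x: np.ndarray) -> np.ndarray:
    """Map a (D,) float vector to M one-byte codes."""
    subs = x.reshape(M, D // M)                               # (M, D/M)
    dists = ((codebooks - subs[:, None, :]) ** 2).sum(-1)     # (M, K) squared L2
    return dists.argmin(axis=1).astype(np.uint8)

def pq_decode(codes: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate (D,) vector from its codes."""
    return np.concatenate([codebooks[m, c] for m, c in enumerate(codes)])

x = rng.normal(size=D)
codes = pq_encode(x)   # 24 bytes instead of 768 * 4 = 3072 bytes for float32
```

Storing M one-byte codes instead of D float32 values is what turns a brute-force dense index into a compact one; JPQ's contribution is training the query encoder and these codebooks jointly for ranking performance.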

Optimizing Dense Retrieval Model Training with Hard Negatives

2 code implementations • 16 Apr 2021 Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma

ADORE replaces the widely-adopted static hard negative sampling method with a dynamic one to directly optimize the ranking performance.

Information Retrieval • Representation Learning
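The static-vs-dynamic distinction in the snippet above can be sketched as follows. In this toy version (function names and the brute-force retrieval step are assumptions for illustration), "dynamic" means the negatives are re-mined with the current model at each step rather than fixed before training.

```python
# Sketch of dynamic hard-negative mining: the negatives are the documents the
# *current* model ranks highest among the non-relevant ones.
import numpy as np

def retrieve_topk(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int):
    """Brute-force dot-product retrieval with the current model's vectors."""
    scores = doc_vecs @ query_vec
    return np.argsort(-scores)[:k]

def dynamic_hard_negatives(query_vec, doc_vecs, positive_ids, k=10):
    """Top-ranked non-relevant docs under the current model: the negatives
    that most directly hurt ranking performance right now."""
    top = retrieve_topk(query_vec, doc_vecs, k + len(positive_ids))
    return [d for d in top if d not in positive_ids][:k]
```

A static scheme would call `dynamic_hard_negatives` once before training with a fixed model; the dynamic scheme re-encodes the query with the up-to-date encoder each step, so the negatives track the model as it improves.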

THUIR@COLIEE-2020: Leveraging Semantic Understanding and Exact Matching for Legal Case Retrieval and Entailment

no code implementations • 24 Dec 2020 Yunqiu Shao, Bulou Liu, Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma

We participated in the two case law tasks, i.e., the legal case retrieval task and the legal case entailment task.

Learning To Retrieve: How to Train a Dense Retrieval Model Effectively and Efficiently

2 code implementations • 20 Oct 2020 Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma

Through this process, it teaches the DR model how to retrieve relevant documents from the entire corpus instead of how to rerank a potentially biased sample of documents.

Information Retrieval • Passage Retrieval

Neural Logic Reasoning

1 code implementation • 20 Aug 2020 Shaoyun Shi, Hanxiong Chen, Weizhi Ma, Jiaxin Mao, Min Zhang, Yongfeng Zhang

Both reasoning and generalization ability are important for prediction tasks such as recommender systems, where reasoning provides strong connection between user history and target items for accurate prediction, and generalization helps the model to draw a robust user portrait over noisy inputs.

Recommendation Systems

RepBERT: Contextualized Text Embeddings for First-Stage Retrieval

1 code implementation • 28 Jun 2020 Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma

Although exact term match between queries and documents is the dominant method to perform first-stage retrieval, we propose a different approach, called RepBERT, to represent documents and queries with fixed-length contextualized embeddings.
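The fixed-length-embedding idea in the snippet above can be sketched with a toy encoder. RepBERT uses BERT to produce the embeddings; the mean-pooled lookup table below is a stand-in assumption, kept only to show that relevance reduces to an inner product between two fixed-length vectors.

```python
# Toy sketch of first-stage retrieval with fixed-length embeddings.
# The vocabulary and token-vector table are assumptions; RepBERT derives the
# embeddings from BERT rather than a lookup table.
import numpy as np

VOCAB = {"neural": 0, "ranking": 1, "retrieval": 2, "bert": 3}
TABLE = np.eye(len(VOCAB))   # toy token vectors

def embed(text: str) -> np.ndarray:
    """Fixed-length embedding: mean of token vectors, L2-normalised."""
    vecs = [TABLE[VOCAB[t]] for t in text.lower().split() if t in VOCAB]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def dense_score(query: str, doc: str) -> float:
    """Relevance = inner product of the two fixed-length embeddings."""
    return float(embed(query) @ embed(doc))
```

Because every document maps to one fixed-length vector ahead of time, first-stage retrieval becomes a nearest-neighbor search over precomputed embeddings instead of per-query exact term matching.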

Unbiased Learning to Rank: Online or Offline?

no code implementations • 28 Apr 2020 Qingyao Ai, Tao Yang, Huazheng Wang, Jiaxin Mao

While their definitions of unbiasedness differ, these two types of ULTR algorithms share the same goal: to find the best models that rank documents based on their intrinsic relevance or utility.

Learning-To-Rank
