Search Results for author: Yanzhao Zhang

Found 9 papers, 7 papers with code

Chinese Sequence Labeling with Semi-Supervised Boundary-Aware Language Model Pre-training

2 code implementations • 8 Apr 2024 • Longhui Zhang, Dingkun Long, Meishan Zhang, Yanzhao Zhang, Pengjun Xie, Min Zhang

Experimental results on Chinese sequence labeling datasets demonstrate that the improved BABERT variant outperforms the vanilla version, not only on these tasks but also more broadly across a range of Chinese natural language understanding tasks.

Language Modelling · Natural Language Understanding

TSRankLLM: A Two-Stage Adaptation of LLMs for Text Ranking

1 code implementation • 28 Nov 2023 • Longhui Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, Min Zhang

Text ranking is a critical task in various information retrieval applications, and the recent success of pre-trained language models (PLMs), especially large language models (LLMs), has sparked interest in their application to text ranking.

Information Retrieval · Retrieval

Language Models are Universal Embedders

1 code implementation • 12 Oct 2023 • Xin Zhang, Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, Min Zhang

As such cases span from English to other natural or programming languages, from retrieval to classification and beyond, it is desirable to build a unified embedding model rather than dedicated ones for each scenario.

Code Search · Language Modelling · +2

Challenging Decoder helps in Masked Auto-Encoder Pre-training for Dense Passage Retrieval

no code implementations • 22 May 2023 • Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie

Recently, various studies have been directed towards exploring dense passage retrieval techniques employing pre-trained language models, among which the masked auto-encoder (MAE) pre-training architecture has emerged as the most promising.

Passage Retrieval · Retrieval

Retrieval Oriented Masking Pre-training Language Model for Dense Passage Retrieval

1 code implementation • 27 Oct 2022 • Dingkun Long, Yanzhao Zhang, Guangwei Xu, Pengjun Xie

Pre-trained language models (PTMs) have been shown to yield powerful text representations for the dense passage retrieval task.

Language Modelling · Masked Language Modeling · +2
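The snippet above refers to dense passage retrieval, where a pre-trained encoder maps queries and passages into vectors that are compared by similarity. A minimal sketch of the scoring step, using toy hand-written vectors in place of real PTM embeddings (all names and numbers here are illustrative, not taken from the paper):

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def dense_retrieve(query_vec, passage_vecs, top_k=2):
    # Rank passages by embedding similarity to the query.
    scored = sorted(passage_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [pid for pid, _ in scored[:top_k]]

# Toy embeddings standing in for PTM encoder outputs.
query = [0.9, 0.1, 0.0]
passages = {
    "p1": [0.8, 0.2, 0.1],   # close to the query
    "p2": [0.0, 0.1, 0.9],   # unrelated
    "p3": [0.7, 0.3, 0.0],
}
print(dense_retrieve(query, passages))  # ['p1', 'p3']
```

In a real system the vectors would come from an encoder such as the MAE- or ROM-pre-trained models discussed in these papers, and the ranking would run over an approximate-nearest-neighbour index rather than a Python loop.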

HLATR: Enhance Multi-stage Text Retrieval with Hybrid List Aware Transformer Reranking

1 code implementation • 21 May 2022 • Yanzhao Zhang, Dingkun Long, Guangwei Xu, Pengjun Xie

Existing text retrieval systems with state-of-the-art performance usually adopt a retrieve-then-reranking architecture due to the high computational cost of pre-trained language models and the large corpus size.

Passage Ranking · Passage Re-Ranking · +2
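The retrieve-then-rerank architecture described above amounts to a cheap first-stage scan over the whole corpus followed by an expensive reranker over the few survivors. A toy sketch of the control flow, with invented stand-in scorers (a real system would use a dense retriever and a cross-encoder PLM, not these word-overlap proxies):

```python
def first_stage(query, corpus, top_k=3):
    # Stage 1: cheap lexical-overlap score over the full corpus.
    q = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

def rerank(query, corpus, candidates):
    # Stage 2: a costlier score, applied only to the shortlist.
    # Stand-in for a cross-encoder: overlap normalised by doc length.
    q = set(query.lower().split())
    def score(doc_id):
        words = corpus[doc_id].lower().split()
        return len(q & set(words)) / (1 + len(words))
    return sorted(candidates, key=score, reverse=True)

corpus = {
    "d1": "hybrid list aware transformer reranking",
    "d2": "text retrieval with transformer models",
    "d3": "cooking pasta at home",
    "d4": "multi stage text retrieval systems",
}
query = "multi stage text retrieval"
shortlist = first_stage(query, corpus)   # cheap pass over everything
ranking = rerank(query, corpus, shortlist)  # careful pass over the shortlist
print(ranking)  # ['d4', 'd2', 'd1']
```

The point of the split is cost: the expensive model only ever sees `top_k` documents, which is what makes PLM-based reranking tractable over a large corpus.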
