Search Results for author: Longhui Zhang

Found 8 papers, 7 papers with code

Chinese Sequence Labeling with Semi-Supervised Boundary-Aware Language Model Pre-training

2 code implementations • 8 Apr 2024 • Longhui Zhang, Dingkun Long, Meishan Zhang, Yanzhao Zhang, Pengjun Xie, Min Zhang

Experimental results on Chinese sequence labeling datasets demonstrate that the improved BABERT variant outperforms the vanilla version, not only on these tasks but also more broadly across a range of Chinese natural language understanding tasks.

Language Modelling · Natural Language Understanding

TSRankLLM: A Two-Stage Adaptation of LLMs for Text Ranking

1 code implementation • 28 Nov 2023 • Longhui Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, Min Zhang

Text ranking is a critical task in various information retrieval applications, and the recent success of pre-trained language models (PLMs), especially large language models (LLMs), has sparked interest in their application to text ranking.

Information Retrieval · Retrieval

A Simple but Effective Bidirectional Framework for Relational Triple Extraction

1 code implementation • 9 Dec 2021 • Feiliang Ren, Longhui Zhang, Xiaofeng Zhao, Shujuan Yin, Shilei Liu, Bochao Li

Moreover, experiments show that both the proposed bidirectional extraction framework and the share-aware learning mechanism have good adaptability and can be used to improve the performance of other tagging-based methods.

A Three-Stage Learning Framework for Low-Resource Knowledge-Grounded Dialogue Generation

1 code implementation • EMNLP 2021 • Shilei Liu, Xiaofeng Zhao, Bochao Li, Feiliang Ren, Longhui Zhang, Shujuan Yin

Neural conversation models have shown great potential for generating fluent and informative responses by introducing external background knowledge.

Dialogue Generation · Response Generation · +1

A Conditional Cascade Model for Relational Triple Extraction

1 code implementation • 20 Aug 2021 • Feiliang Ren, Longhui Zhang, Shujuan Yin, Xiaofeng Zhao, Shilei Liu, Bochao Li

Tagging-based methods are among the mainstream approaches to relational triple extraction.

An Effective System for Multi-format Information Extraction

1 code implementation • 16 Aug 2021 • Yaduo Liu, Longhui Zhang, Shujuan Yin, Xiaofeng Zhao, Feiliang Ren

Finally, our system ranks No. 4 on the test-set leaderboard of this multi-format information extraction task, and its F1 scores on the relation extraction, sentence-level event extraction, and document-level event extraction subtasks are 79.887%, 85.179%, and 70.828%, respectively.

Document-level Event Extraction · Multi-Task Learning · +4
