no code implementations • 27 Feb 2024 • Pengjie Ren, Chengshun Shi, Shiguang Wu, Mengqi Zhang, Zhaochun Ren, Maarten de Rijke, Zhumin Chen, Jiahuan Pei
Parameter-efficient fine-tuning (PEFT) is a popular method for tailoring pre-trained large language models (LLMs), especially as the models' scale and the diversity of tasks increase.
no code implementations • 13 Jul 2023 • Wentao Deng, Jiahuan Pei, Zhaochun Ren, Zhumin Chen, Pengjie Ren
Specifically, it improves the F1 score by 2.06% and 1.00% on the two datasets, compared with the strongest baseline, using only 5% labeled data.
1 code implementation • 27 Dec 2021 • Jiahuan Pei, Cheng Wang, György Szarvas
In this work, we propose a novel way to enable transformers to have the capability of uncertainty estimation and, meanwhile, retain the original predictive performance.
1 code implementation • 11 Oct 2021 • Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, Jie Fu
In this paper, we summarize the recent progress of pre-trained language models in the biomedical domain and their applications in biomedical downstream tasks.
1 code implementation • 1 Sep 2021 • Guojun Yan, Jiahuan Pei, Pengjie Ren, Zhaochun Ren, Xin Xin, Huasheng Liang, Maarten de Rijke, Zhumin Chen
(1) there is no dataset with large-scale medical dialogues that covers multiple medical services and contains fine-grained medical labels (i.e., intents, actions, slots, values), and (2) there is no set of established benchmarks for MDSs for multi-domain, multi-service medical dialogues.
1 code implementation • 16 Feb 2021 • Jiahuan Pei, Pengjie Ren, Maarten de Rijke
We find that CoMemNN is able to enrich user profiles effectively, which results in an improvement of 3.06% in terms of response selection accuracy compared to state-of-the-art methods.
2 code implementations • 19 Nov 2019 • Jiahuan Pei, Pengjie Ren, Christof Monz, Maarten de Rijke
We propose a novel mixture-of-generators network (MoGNet) for DRG, where we assume that each token of a response is drawn from a mixture of distributions.
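The core idea, that each response token is drawn from a mixture of per-generator distributions, can be sketched as follows. This is a minimal illustration, not the MoGNet implementation: the expert distributions and gate weights are toy values, and in the paper they would come from learned generator and chair modules.

```python
import numpy as np

def mixture_of_generators(expert_probs, gate_weights):
    """Combine per-expert next-token distributions with gate weights.

    expert_probs: (k, vocab) array; each row is one expert's distribution
    over the vocabulary for the next token.
    gate_weights: (k,) array summing to 1, e.g. from a gating module.
    Returns the mixed (vocab,) distribution.
    """
    expert_probs = np.asarray(expert_probs, dtype=float)
    gate_weights = np.asarray(gate_weights, dtype=float)
    # Weighted sum over experts gives the final token distribution.
    return gate_weights @ expert_probs

# Toy example: two experts over a 3-token vocabulary, equal gate weights.
p = mixture_of_generators(
    [[0.7, 0.2, 0.1],
     [0.1, 0.3, 0.6]],
    [0.5, 0.5],
)
# p is itself a valid probability distribution (non-negative, sums to 1).
```

Because each row is a distribution and the gate weights sum to one, the mixture is again a valid distribution, which is what lets the network sample or score response tokens from it directly.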
1 code implementation • 10 Jul 2019 • Jiahuan Pei, Pengjie Ren, Maarten de Rijke
We propose a neural Modular Task-oriented Dialogue System (MTDS) framework, in which a few expert bots are combined to generate the response for a given dialogue context.
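The modular setup of combining expert bots can be illustrated with a small sketch. Everything here is hypothetical scaffolding, assuming only the framework's general shape from the abstract: each expert proposes a scored response for the dialogue context, and a simple chair picks the most confident candidate (the actual MTDS experts and chair are neural modules).

```python
from typing import Callable, List, Tuple

# An expert bot maps a dialogue context to a (response, confidence) pair.
ExpertBot = Callable[[str], Tuple[str, float]]

def restaurant_bot(context: str) -> Tuple[str, float]:
    # Toy expert: confident only when the context is about restaurants.
    score = 0.9 if "restaurant" in context else 0.1
    return ("I can book a table for you.", score)

def weather_bot(context: str) -> Tuple[str, float]:
    # Toy expert: confident only when the context is about weather.
    score = 0.9 if "weather" in context else 0.1
    return ("It will be sunny tomorrow.", score)

def chair(context: str, experts: List[ExpertBot]) -> str:
    # Ask every expert, then return the most confident candidate response.
    candidates = [bot(context) for bot in experts]
    return max(candidates, key=lambda rs: rs[1])[0]

reply = chair("Find me a restaurant nearby", [restaurant_bot, weather_bot])
```

A hard argmax chair like this is the simplest combination strategy; a learned chair could instead weight or fuse the experts' outputs, as in the mixture-of-generators work above.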
no code implementations • 16 Jun 2019 • Jiahuan Pei, Arent Stienstra, Julia Kiseleva, Maarten de Rijke
Obtaining key information from a complex, long dialogue context is challenging, especially when different sources of information are available, e.g., the user's utterances, the system's responses, and results retrieved from a knowledge base (KB).