Search Results for author: Jiahuan Pei

Found 9 papers, 6 papers with code

Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning

no code implementations · 27 Feb 2024 · Pengjie Ren, Chengshun Shi, Shiguang Wu, Mengqi Zhang, Zhaochun Ren, Maarten de Rijke, Zhumin Chen, Jiahuan Pei

Parameter-efficient fine-tuning (PEFT) is a popular method for tailoring pre-trained large language models (LLMs), especially as the models' scale and the diversity of tasks increase.

Instruction Following · Natural Language Understanding
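As background for this entry, the general idea behind low-rank adapters (the family MELoRA builds on) can be sketched in a few lines: the pre-trained weight matrix stays frozen and only a small low-rank delta is trained. This is a generic illustrative sketch of LoRA-style adaptation, not the paper's mini-ensemble method; all names and sizes below are assumptions.

```python
# Illustrative LoRA-style adapter sketch (not the MELoRA method itself):
# instead of fine-tuning the full d x d weight W, train a rank-r delta B @ A.
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                          # hidden size and (small) adapter rank
W = rng.normal(size=(d, d))          # frozen pre-trained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection (zero-initialised)
alpha = 4.0                          # scaling factor

def lora_forward(x):
    # base (frozen) path plus scaled low-rank trainable path
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# with B zero-initialised, the adapter starts as an exact no-op
assert np.allclose(lora_forward(x), x @ W.T)
# trainable parameters: 2*d*r = 32, versus d*d = 64 for full fine-tuning
```

The parameter saving grows with model size: for a realistic d in the thousands, 2·d·r is orders of magnitude smaller than d².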

Intent-calibrated Self-training for Answer Selection in Open-domain Dialogues

no code implementations · 13 Jul 2023 · Wentao Deng, Jiahuan Pei, Zhaochun Ren, Zhumin Chen, Pengjie Ren

Specifically, it improves F1 score by 2.06% and 1.00% on the two datasets, compared with the strongest baseline, using only 5% labeled data.

Answer Selection

Transformer Uncertainty Estimation with Hierarchical Stochastic Attention

1 code implementation · 27 Dec 2021 · Jiahuan Pei, Cheng Wang, György Szarvas

In this work, we propose a novel way to enable transformers to have the capability of uncertainty estimation and, meanwhile, retain the original predictive performance.

Medical Diagnosis · Text Classification +1
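The core mechanism described above, replacing deterministic attention with a sampled (stochastic) one so that repeated forward passes yield a predictive variance, can be sketched as follows. This uses a plain Gumbel-softmax sample over attention scores as a generic stand-in; it is not the authors' hierarchical formulation, and all function names are assumptions.

```python
# Monte Carlo uncertainty via stochastic attention (illustrative sketch):
# sample attention weights instead of taking a deterministic softmax,
# then read uncertainty from the variance across repeated forward passes.
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    # add Gumbel noise, then apply a temperature-scaled softmax
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    z = (logits - np.log(-np.log(u))) / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_attention(q, k, v, tau=0.5):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = gumbel_softmax(scores, tau)   # sampled, not deterministic
    return weights @ v

q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))

samples = np.stack([stochastic_attention(q, k, v) for _ in range(50)])
mean, var = samples.mean(axis=0), samples.var(axis=0)  # var ~ model uncertainty
assert var.shape == q.shape and (var >= 0).all()
```

At inference time the mean serves as the prediction and the variance as an uncertainty score, while a single deterministic pass (tau large, no noise) would recover standard attention.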

Pre-trained Language Models in Biomedical Domain: A Systematic Survey

1 code implementation · 11 Oct 2021 · Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, Jie Fu

In this paper, we summarize the recent progress of pre-trained language models in the biomedical domain and their applications in biomedical downstream tasks.

ReMeDi: Resources for Multi-domain, Multi-service, Medical Dialogues

1 code implementation · 1 Sep 2021 · Guojun Yan, Jiahuan Pei, Pengjie Ren, Zhaochun Ren, Xin Xin, Huasheng Liang, Maarten de Rijke, Zhumin Chen

(1) there is no dataset with large-scale medical dialogues that covers multiple medical services and contains fine-grained medical labels (i.e., intents, actions, slots, values), and (2) there is no set of established benchmarks for MDSs for multi-domain, multi-service medical dialogues.

Benchmarking · Contrastive Learning +2

A Cooperative Memory Network for Personalized Task-oriented Dialogue Systems with Incomplete User Profiles

1 code implementation · 16 Feb 2021 · Jiahuan Pei, Pengjie Ren, Maarten de Rijke

We find that CoMemNN is able to enrich user profiles effectively, which results in an improvement of 3.06% in response selection accuracy compared to state-of-the-art methods.

Attribute · Task-Oriented Dialogue Systems

Retrospective and Prospective Mixture-of-Generators for Task-oriented Dialogue Response Generation

2 code implementations · 19 Nov 2019 · Jiahuan Pei, Pengjie Ren, Christof Monz, Maarten de Rijke

We propose a novel mixture-of-generators network (MoGNet) for DRG, where we assume that each token of a response is drawn from a mixture of distributions.

Response Generation · Task-Oriented Dialogue Systems
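The mixture assumption stated above, that each response token is drawn from a mixture of per-generator distributions, amounts to p(w) = Σ_k π_k · p_k(w), with gate weights π over the generators. A minimal numerical sketch under assumed names and sizes (this is the mixture formula only, not the MoGNet architecture):

```python
# Token-level mixture-of-generators sketch: the next-token distribution is
# a gate-weighted sum of per-generator distributions, p(w) = sum_k pi_k * p_k(w).
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

vocab, n_gen = 10, 3
expert_logits = rng.normal(size=(n_gen, vocab))  # one logit vector per generator
gate_logits = rng.normal(size=n_gen)             # gate scores over generators

pi = softmax(gate_logits)       # mixture weights, sum to 1
p_k = softmax(expert_logits)    # per-generator token distributions
p = pi @ p_k                    # mixed next-token distribution over the vocab

assert np.isclose(p.sum(), 1.0)  # a convex mixture of distributions is a distribution
```

Because the mixture is convex, the combined output is always a valid probability distribution regardless of how the gate weights the generators.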

A Modular Task-oriented Dialogue System Using a Neural Mixture-of-Experts

1 code implementation · 10 Jul 2019 · Jiahuan Pei, Pengjie Ren, Maarten de Rijke

We propose a neural Modular Task-oriented Dialogue System (MTDS) framework, in which a few expert bots are combined to generate the response for a given dialogue context.

Task-Oriented Dialogue Systems

SEntNet: Source-aware Recurrent Entity Network for Dialogue Response Selection

no code implementations · 16 Jun 2019 · Jiahuan Pei, Arent Stienstra, Julia Kiseleva, Maarten de Rijke

Obtaining key information from a complex, long dialogue context is challenging, especially when different sources of information are available, e.g., the user's utterances, the system's responses, and results retrieved from a knowledge base (KB).

Task-Oriented Dialogue Systems
