Search Results for author: Dayuan Fu

Found 8 papers, 3 papers with code

DivTOD: Unleashing the Power of LLMs for Diversifying Task-Oriented Dialogue Representations

no code implementations • 31 Mar 2024 • Weihao Zeng, Dayuan Fu, Keqing He, Yejie Wang, Yukai Xu, Weiran Xu

Language models pre-trained on general text have achieved impressive results in diverse fields.

On Large Language Models' Hallucination with Regard to Known Facts

no code implementations • 29 Mar 2024 • Che Jiang, Biqing Qi, Xiangyu Hong, Dayuan Fu, Yang Cheng, Fandong Meng, Mo Yu, BoWen Zhou, Jie Zhou

In hallucinated cases, the output token's information rarely demonstrates abrupt increases and consistent superiority in the later stages of the model.

Hallucination

PreAct: Predicting Future in ReAct Enhances Agent's Planning Ability

1 code implementation • 18 Feb 2024 • Dayuan Fu, Jianzhao Huang, Siyuan Lu, Guanting Dong, Yejie Wang, Keqing He, Weiran Xu

Addressing the discrepancies between predictions and actual outcomes often aids individuals in expanding their thought processes and engaging in reflection, thereby facilitating reasoning in the correct direction.

Language Modelling • Large Language Model

A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Named Entity Recognition

no code implementations • 27 Feb 2023 • Guanting Dong, Zechen Wang, LiWen Wang, Daichi Guo, Dayuan Fu, Yuxiang Wu, Chen Zeng, Xuefeng Li, Tingfeng Hui, Keqing He, Xinyue Cui, QiXiang Gao, Weiran Xu

Specifically, we decouple class-specific prototypes and contextual semantic prototypes via two masking strategies, leading the model to focus on two different kinds of semantic information for inference.

Contrastive Learning • few-shot-ner • +4

Revisit Out-Of-Vocabulary Problem for Slot Filling: A Unified Contrastive Framework with Multi-level Data Augmentations

no code implementations • 27 Feb 2023 • Daichi Guo, Guanting Dong, Dayuan Fu, Yuxiang Wu, Chen Zeng, Tingfeng Hui, LiWen Wang, Xuefeng Li, Zechen Wang, Keqing He, Xinyue Cui, Weiran Xu

In real dialogue scenarios, existing slot filling models, which tend to memorize entity patterns, show significantly reduced generalization when facing Out-of-Vocabulary (OOV) problems.

Contrastive Learning • slot-filling • +1

Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems

1 code implementation • 17 Oct 2022 • Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, Weiran Xu

Recent advances in neural approaches have greatly improved task-oriented dialogue (TOD) systems, which assist users in accomplishing their goals.
