Search Results for author: Junqing He

Found 10 papers, 3 papers with code

MADial-Bench: Towards Real-world Evaluation of Memory-Augmented Dialogue Generation

no code implementations23 Sep 2024 Junqing He, Liang Zhu, Rui Wang, Xi Wang, Reza Haffari, Jiaxing Zhang

Long-term memory is important for chatbots and dialogue systems (DS) to create consistent and human-like conversations, as evidenced by the numerous memory-augmented DS (MADS) that have been developed.

Dialogue Generation, Retrieval

Fostering Natural Conversation in Large Language Models with NICO: a Natural Interactive COnversation dataset

no code implementations18 Aug 2024 Renliang Sun, Mengyuan Liu, Shiping Yang, Rui Wang, Junqing He, Jiaxing Zhang

Benefiting from diverse instruction datasets, contemporary Large Language Models (LLMs) perform effectively as AI assistants in collaborating with humans.

Sentence

FSM: A Finite State Machine Based Zero-Shot Prompting Paradigm for Multi-Hop Question Answering

no code implementations3 Jul 2024 Xiaochen Wang, Junqing He, Zhe Yang, Yiru Wang, Xiangdi Meng, Kunhao Pan, Zhifang Sui

Large Language Models (LLMs) with chain-of-thought (CoT) prompting have demonstrated impressive abilities on simple natural language inference tasks.

Hallucination, Multi-hop Question Answering, +1

Never Lost in the Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training

2 code implementations15 Nov 2023 Junqing He, Kunhao Pan, Xiaoqun Dong, Zhuoyang Song, Yibo Liu, Qianguo Sun, Yuxin Liang, Hao Wang, Enming Zhang, Jiaxing Zhang

While large language models (LLMs) are equipped with longer text input capabilities than before, they struggle to locate correct information in long contexts.

Passage Retrieval, Position, +2

Ziya2: Data-centric Learning is All LLMs Need

no code implementations6 Nov 2023 Ruyi Gan, Ziwei Wu, Renliang Sun, Junyu Lu, XiaoJun Wu, Dixiang Zhang, Kunhao Pan, Junqing He, Yuanhe Tian, Ping Yang, Qi Yang, Hao Wang, Jiaxing Zhang, Yan Song

Although many such issues have been addressed in the line of research on LLMs, an important yet practical limitation remains: many studies overly pursue larger model sizes without comprehensively analyzing and optimizing the use of pre-training data in the learning process, or appropriately organizing and leveraging such data to train LLMs in cost-effective settings.

