no code implementations • 24 May 2024 • Hongjin Qian, Zheng Liu, Peitian Zhang, Kelong Mao, Yujia Zhou, Xu Chen, Zhicheng Dou
The learning and deployment of long-LLMs remains a challenging problem despite recent progress.
1 code implementation • 30 Apr 2024 • Peitian Zhang, Ninglu Shao, Zheng Liu, Shitao Xiao, Hongjin Qian, Qiwei Ye, Zhicheng Dou
We extend the context length of Llama-3-8B-Instruct from 8K to 80K via QLoRA fine-tuning.
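A minimal sketch of what QLoRA-style fine-tuning for context extension can look like, assuming the Hugging Face transformers/peft/bitsandbytes stack; the RoPE base value and LoRA hyperparameters below are illustrative placeholders, not the paper's reported configuration.

```python
# Sketch: QLoRA setup for long-context fine-tuning (assumed stack:
# transformers + peft + bitsandbytes; hyperparameters are illustrative).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    rope_theta=2_000_000.0,  # enlarged RoPE base for longer contexts (illustrative value)
)
model = prepare_model_for_kbit_training(model)

# Train only low-rank adapters on the attention projections.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# The adapted model is then fine-tuned on long-sequence training data
# (e.g., up to 80K tokens) with a standard causal-LM objective.
```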
no code implementations • 15 Feb 2024 • Hongjin Qian, Zheng Liu, Kelong Mao, Yujia Zhou, Zhicheng Dou
These strategies not only improve the efficiency of the retrieval process but also preserve the fidelity of the generated grounding text evidence.
no code implementations • 30 Aug 2023 • Hongjin Qian, Zhicheng Dou, Jiejun Tan, Haonan Chen, Haoqi Gu, Ruofei Lai, Xinyu Zhang, Zhao Cao, Ji-Rong Wen
Previous methods use external knowledge as references for text generation to enhance factuality, but they often struggle with knowledge mix-up (e.g., entity mismatch) caused by irrelevant references.
2 code implementations • 12 Mar 2023 • Kelong Mao, Zhicheng Dou, Fengran Mo, Jiewen Hou, Haonan Chen, Hongjin Qian
Precisely understanding users' contextual search intent has been an important challenge for conversational search.
no code implementations • NAACL 2022 • Hanxun Zhong, Zhicheng Dou, Yutao Zhu, Hongjin Qian, Ji-Rong Wen
Existing personalized dialogue systems have tried to extract user profiles from dialogue history to guide personalized response generation.
1 code implementation • 18 Aug 2021 • Hongjin Qian, Zhicheng Dou, Yutao Zhu, Yueyuan Ma, Ji-Rong Wen
To learn a user's personalized language style, we build language models from shallow to deep using the user's historical responses; to model a user's personalized preferences, we explore the conditional relations underlying each of the user's post-response pairs.
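A toy sketch of the "shallow" end of this idea: a smoothed bigram language model estimated from a user's historical responses, used to score how stylistically typical a candidate response is. The deeper neural style models and the post-response conditional modeling are not shown; all function names and the example history here are hypothetical.

```python
# Sketch: score a candidate response against a user's "shallow" style model
# (a bigram LM over their historical responses). Illustrative only.
from collections import Counter, defaultdict
import math

def train_bigram_lm(responses):
    """Count unigrams and bigrams over a user's historical responses."""
    unigrams, bigrams = Counter(), defaultdict(Counter)
    for text in responses:
        tokens = ["<s>"] + text.lower().split() + ["</s>"]
        unigrams.update(tokens)
        for prev, cur in zip(tokens, tokens[1:]):
            bigrams[prev][cur] += 1
    return unigrams, bigrams

def style_score(candidate, unigrams, bigrams, alpha=1.0):
    """Length-normalized, add-alpha smoothed log-probability of a candidate."""
    vocab = len(unigrams)
    tokens = ["<s>"] + candidate.lower().split() + ["</s>"]
    logp = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        num = bigrams[prev][cur] + alpha
        den = unigrams[prev] + alpha * vocab
        logp += math.log(num / den)
    return logp / (len(tokens) - 1)

# Hypothetical usage: rank candidates by closeness to the user's style.
history = ["haha that movie was awesome", "awesome, see you there"]
uni, bi = train_bigram_lm(history)
print(style_score("that was awesome haha", uni, bi))
```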
no code implementations • Findings of the Association for Computational Linguistics 2020 • Yafei Liu, Hongjin Qian, Hengpeng Xu, JinMao Wei
However, the proactive manner introduced into a dialogue agent raises an issue: with too many knowledge facts to express, the agent starts to talk endlessly and sometimes completely ignores what its partner expresses, which greatly harms the partner's interest in continuing the conversation.
2 code implementations • 28 Sep 2020 • Hongjin Qian, Xiaohe Li, Hanxun Zhong, Yu Guo, Yueyuan Ma, Yutao Zhu, Zhanliang Liu, Zhicheng Dou, Ji-Rong Wen
This enables the development of personalized dialogue models that directly learn implicit user personality from the user's dialogue history.