1 code implementation • EMNLP 2021 • Zichu Fei, Qi Zhang, Yaqian Zhou
However, they ignore the rich structural information hidden in the previously generated text.
no code implementations • 11 Nov 2024 • Mianqiu Huang, Xiaoran Liu, Shaojun Zhou, Mozhi Zhang, Chenkun Tan, Pengyu Wang, Qipeng Guo, Zhe Xu, Linyang Li, Zhikai Lei, Linlin Li, Qun Liu, Yaqian Zhou, Xipeng Qiu, Xuanjing Huang
With the development of large language models (LLMs), the sequence length of these models continues to increase, drawing significant attention to long-context language models.
1 code implementation • 18 Oct 2024 • Mozhi Zhang, Pengyu Wang, Chenkun Tan, Mianqiu Huang, Dong Zhang, Yaqian Zhou, Xipeng Qiu
Large Language Models (LLMs) acquire extensive knowledge and remarkable abilities from vast text corpora, making them powerful tools for various applications.
no code implementations • 9 Oct 2024 • Xin Zhang, Xiang Lyu, Zhihao Du, Qian Chen, Dong Zhang, Hangrui Hu, Chaohong Tan, Tianyu Zhao, Yuxuan Wang, Bin Zhang, Heng Lu, Yaqian Zhou, Xipeng Qiu
Current methods of building LLMs with voice-interaction capabilities rely heavily on explicit autoregressive text generation before or during speech response generation to maintain content quality, which adds computational overhead and increases latency in multi-turn interactions.
1 code implementation • 8 Apr 2024 • Dong Zhang, Zhaowei Li, ShiMin Li, Xin Zhang, Pengyu Wang, Yaqian Zhou, Xipeng Qiu
However, the integration of human feedback to align speech outputs to human preferences is often neglected.
no code implementations • 3 Apr 2024 • Mozhi Zhang, Mianqiu Huang, Rundong Shi, Linsen Guo, Chong Peng, Peng Yan, Yaqian Zhou, Xipeng Qiu
Large language models optimized with techniques such as RLHF have achieved good alignment toward being helpful and harmless.
1 code implementation • 24 Jan 2024 • Dong Zhang, Xin Zhang, Jun Zhan, ShiMin Li, Yaqian Zhou, Xipeng Qiu
It comprises an autoregressive model based on LLM for semantic information modeling and a non-autoregressive model employing flow matching for perceptual information modeling.
1 code implementation • 8 Jan 2024 • Dong Zhang, Zhaowei Li, Pengyu Wang, Xin Zhang, Yaqian Zhou, Xipeng Qiu
In this paper, we propose SpeechAgents, a multi-modal LLM-based multi-agent system designed for simulating human communication.
3 code implementations • 31 Aug 2023 • Xin Zhang, Dong Zhang, ShiMin Li, Yaqian Zhou, Xipeng Qiu
Therefore, we propose SpeechTokenizer, a unified speech tokenizer for speech large language models.
1 code implementation • 20 May 2023 • Mozhi Zhang, Hang Yan, Yaqian Zhou, Xipeng Qiu
We use prompts that contain entity category information to construct label prototypes, which enables our model to fine-tune with only the support set.
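The prototype idea above can be sketched as follows: average the encoder representations obtained from category-informed prompts into one prototype per label, then assign each token to the nearest prototype. This is a minimal illustration under assumed details (the embedding source and cosine-similarity scoring are not specified in the snippet):

```python
import numpy as np

def build_prototypes(prompt_embeddings):
    """Average prompt representations into one prototype vector per label.

    prompt_embeddings: dict mapping label -> list of vectors, where each
    vector is assumed to come from encoding a prompt that mentions the
    entity category (e.g. "X is a person entity").
    """
    return {label: np.mean(vecs, axis=0) for label, vecs in prompt_embeddings.items()}

def classify(token_vec, prototypes):
    """Assign the label whose prototype is most similar to the token vector."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(prototypes, key=lambda label: cos(token_vec, prototypes[label]))
```

In a few-shot setting, only the small support set would then be needed to adapt the encoder, since the label space is already anchored by the prompt-derived prototypes.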
1 code implementation • 19 May 2023 • Dong Zhang, Rong Ye, Tom Ko, Mingxuan Wang, Yaqian Zhou
The key point is to bridge the modality gap between speech and text so that useful MT techniques can be applied to ST.
1 code implementation • 18 May 2023 • Dong Zhang, ShiMin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, Xipeng Qiu
Multi-modal large language models are regarded as a crucial step towards Artificial General Intelligence (AGI) and have garnered significant interest with the emergence of ChatGPT.
1 code implementation • EMNLP 2021 • Jun Zhao, Tao Gui, Qi Zhang, Yaqian Zhou
The clustering-based unsupervised relation discovery method has gradually become one of the major approaches to open relation extraction (OpenRE).
Ranked #1 on Relation Extraction on FewRel
1 code implementation • ACL 2021 • Ruotian Ma, Tao Gui, Linyang Li, Qi Zhang, Yaqian Zhou, Xuanjing Huang
In this work, we propose the use of negative training (NT), in which a model is trained with complementary labels indicating that "the instance does not belong to these complementary labels".
1 code implementation • ACL 2021 • Tao Gui, Xiao Wang, Qi Zhang, Qin Liu, Yicheng Zou, Xin Zhou, Rui Zheng, Chong Zhang, Qinzhuo Wu, Jiacheng Ye, Zexiong Pang, Yongxin Zhang, Zhengyan Li, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Bolin Zhu, Shan Qin, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, Xuanjing Huang
To guarantee user acceptability, all the text transformations are linguistically based, and we provide a human evaluation for each one.
no code implementations • EMNLP 2018 • Yucheng Wang, Zhongyu Wei, Yaqian Zhou, Xuanjing Huang
Automatic essay scoring (AES) is the task of assigning grades to essays without human intervention.