no code implementations • 8 Jul 2024 • Jinliang Lu, Ziliang Pang, Min Xiao, Yaochen Zhu, Rui Xia, Jiajun Zhang
The remarkable success of Large Language Models (LLMs) has ushered natural language processing (NLP) research into a new era.
no code implementations • 4 Jun 2024 • Jinliang Lu, Chen Wang, Jiajun Zhang
Large language models (LLMs) have shown impressive capabilities in adapting to various tasks when provided with task-specific instructions.
1 code implementation • 30 May 2024 • Chong Li, Wen Yang, Jiajun Zhang, Jinliang Lu, Shaonan Wang, Chengqing Zong
In addition, we find that models tuned on cross-lingual instruction-following samples can follow instructions in the output language without further tuning.
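As a concrete illustration of such a sample (a hypothetical example; the paper's actual data format may differ), the instruction can be written in one language while the expected output is in another:

# Hypothetical cross-lingual instruction-following sample: the instruction
# is in English, while the response is expected in Chinese.
sample = {
    "instruction": "Summarize the following paragraph in Chinese.",
    "input": "Large language models have advanced rapidly in recent years.",
    "output": "近年来，大型语言模型发展迅速。",
}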
1 code implementation • 15 Apr 2024 • Yangyifan Xu, Jinliang Lu, Jiajun Zhang
Ensembling different large language models (LLMs) to unleash their complementary potential and harness their individual strengths is highly valuable.
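To make the idea concrete, a naive output-level ensemble averages the models' next-token distributions; a minimal sketch follows (it assumes all models share one vocabulary, which heterogeneous LLMs generally do not, and it is not the paper's method):

import torch

def ensemble_next_token(logits_list):
    # Average next-token probability distributions from several models,
    # then pick the most likely token. Assumes a shared vocabulary across
    # models, a simplification that real LLM ensembles must work around.
    probs = [torch.softmax(logits, dim=-1) for logits in logits_list]
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))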
1 code implementation • 2 Sep 2023 • Chen Wang, Minpeng Liao, Zhongqiang Huang, Jinliang Lu, Junhong Wu, Yuchen Liu, Chengqing Zong, Jiajun Zhang
One is the cascaded approach, in which the outputs (tokens or states) of a separately trained speech recognition system are fed as inputs to the LLM; this limits the model's potential for capturing the alignment between speech and text.
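A minimal sketch of such a cascade, assuming Hugging Face transformers pipelines and placeholder model names (illustrative only, not the paper's setup):

from transformers import pipeline

# Stage 1: a separately trained ASR model; Stage 2: a text-only LLM.
# Model names are illustrative placeholders.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
llm = pipeline("text-generation", model="gpt2")

def cascaded_respond(audio_path):
    transcript = asr(audio_path)["text"]  # tokens only; acoustic detail is lost
    # The LLM sees just the transcript, so speech-text alignment cannot be
    # modeled jointly, which is the limitation noted above.
    prompt = "Translate into German: " + transcript
    return llm(prompt, max_new_tokens=64)[0]["generated_text"]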
1 code implementation • 18 Jan 2022 • Feihu Jin, Jinliang Lu, Jiajun Zhang, Chengqing Zong
Specifically, we assume that each learnable prompt token contributes differently to different instances, and we learn this contribution by computing a relevance score between each instance and each prompt token.
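A minimal PyTorch sketch of that weighting scheme (the dot-product scoring and the names here are assumptions for illustration, not necessarily the paper's exact formulation):

import torch
import torch.nn.functional as F

def reweight_prompts(instance_repr, prompt_tokens):
    # instance_repr: (batch, hidden) pooled representation of the input instance
    # prompt_tokens: (num_prompts, hidden) learnable prompt embeddings
    scores = instance_repr @ prompt_tokens.T   # relevance of each prompt token
    weights = F.softmax(scores, dim=-1)        # instance-specific contributions
    # Scale each prompt token by its contribution to this instance.
    return weights.unsqueeze(-1) * prompt_tokens.unsqueeze(0)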
no code implementations • 27 Dec 2021 • Yuan Yao, Qingxiu Dong, Jian Guan, Boxi Cao, Zhengyan Zhang, Chaojun Xiao, Xiaozhi Wang, Fanchao Qi, Junwei Bao, Jinran Nie, Zheni Zeng, Yuxian Gu, Kun Zhou, Xuancheng Huang, Wenhao Li, Shuhuai Ren, Jinliang Lu, Chengqiang Xu, Huadong Wang, Guoyang Zeng, Zile Zhou, Jiajun Zhang, Juanzi Li, Minlie Huang, Rui Yan, Xiaodong He, Xiaojun Wan, Xin Zhao, Xu Sun, Yang Liu, Zhiyuan Liu, Xianpei Han, Erhong Yang, Zhifang Sui, Maosong Sun
We argue that for general-purpose language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic.
1 code implementation • Findings (EMNLP) 2021 • Jinliang Lu, Jiajun Zhang
Back-translation (BT) has become one of the de facto components of unsupervised neural machine translation (UNMT), as it is what explicitly endows UNMT with translation ability.
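For context, back-translation creates synthetic parallel data by translating target-side monolingual text back into the source language; a minimal sketch with placeholder functions standing in for the two directional models:

def back_translation_round(target_monolingual, translate_t2s, train_step):
    # translate_t2s and train_step are hypothetical stand-ins for the
    # target-to-source model and the source-to-target training routine.
    for tgt in target_monolingual:
        synthetic_src = translate_t2s(tgt)  # generate a noisy source sentence
        # Train the forward model on the synthetic pair: synthetic source
        # as input, the genuine target sentence as the reference.
        train_step(src=synthetic_src, tgt=tgt)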