no code implementations • EMNLP 2021 • Xinglin Lyu, Junhui Li, ZhengXian Gong, Min Zhang
In this paper we apply the "one translation per discourse" assumption to NMT, aiming to encourage lexical translation consistency in document-level NMT.
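The paper's actual model is not described here; as a hedged illustration of the "one translation per discourse" idea only, the sketch below post-processes a document's translation choices so every occurrence of a repeated source term uses a single (majority-vote) target translation. The function name and the toy data are invented for illustration.

```python
from collections import Counter

def enforce_consistency(term_choices):
    """Pick one target translation per source term by majority vote.

    term_choices maps each source term to the list of translations a
    sentence-level system produced for it across the document.
    """
    return {src: Counter(tgts).most_common(1)[0][0]
            for src, tgts in term_choices.items()}

# Toy document: "bank" was translated inconsistently across sentences.
choices = {"bank": ["Ufer", "Bank", "Ufer"], "river": ["Fluss", "Fluss"]}
consistent = enforce_consistency(choices)
# → {"bank": "Ufer", "river": "Fluss"}
```

A real system would enforce this preference during decoding rather than by post-editing, but the majority-vote step conveys the consistency constraint itself.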
1 code implementation • 22 May 2024 • Xiang Geng, Ming Zhu, Jiahuan Li, Zhejian Lai, Wei Zou, Shuaijie She, Jiaxin Guo, Xiaofeng Zhao, Yinglu Li, Yuang Li, Chang Su, Yanqing Zhao, Xinglin Lyu, Min Zhang, Jiajun Chen, Hao Yang, ShuJian Huang
For the second issue, we propose a method comprising two synergistic components: low-rank adaptation, which keeps the original LLM parameters frozen during training, and recovery KD, which uses data generated by the chat LLM itself to recover the original knowledge from the frozen parameters.
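The abstract names low-rank adaptation (LoRA) as one component. A minimal numpy sketch of the LoRA parameterization follows — the dimensions, scaling, and initialization are assumed standard LoRA conventions, not details taken from this paper: the pretrained weight W stays frozen, and only the low-rank factors A and B are trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                           # model dim and low rank (assumed toy values)

W = rng.normal(size=(d, d))           # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                  # trainable up-projection, zero-initialized

def lora_forward(x, alpha=4.0):
    # Output = frozen path + scaled low-rank update (alpha/r) * B A x
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# Because B starts at zero, the adapted model initially matches the frozen one,
# which is exactly the "maintain the original LLM parameters" property.
assert np.allclose(lora_forward(x), x @ W.T)
```

Zero-initializing B is the standard trick that makes the adapter a no-op at the start of training, so adaptation departs smoothly from the original model.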
no code implementations • 23 Feb 2024 • Xinglin Lyu, Junhui Li, Yanqing Zhao, Daimeng Wei, Shimin Tao, Hao Yang, Min Zhang
In this paper, we propose an alternative adaptation approach, named Decoding-enhanced Multi-phase Prompt Tuning (DeMPT), which makes LLMs model and utilize inter- and intra-sentence context separately, and adapts LLMs more effectively to context-aware NMT.
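DeMPT's phases and tuning details are not given in this snippet; purely as a hedged illustration of the underlying idea — presenting inter-sentence context (surrounding sentences) and intra-sentence context (the current source sentence) as distinct inputs so the model can treat them differently — here is a toy prompt-construction sketch. The function name and field labels are invented.

```python
def build_context_prompt(prev_sents, src_sent, sep="\n"):
    """Format inter-sentence context (previous source sentences) and
    intra-sentence context (the current sentence) as separate prompt
    fields, rather than concatenating everything into one string."""
    inter = sep.join(prev_sents) if prev_sents else "(none)"
    return (f"Document context:{sep}{inter}{sep}"
            f"Sentence to translate:{sep}{src_sent}{sep}"
            f"Translation:")

prompt = build_context_prompt(
    ["He walked to the bank.", "It had already closed."],
    "He left quickly.")
```

Keeping the two context types in separate fields is what lets a tuned model weight them differently, which is the distinction the abstract draws.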