no code implementations • WAT 2022 • Yilun Liu, Zhen Zhang, Shimin Tao, Junhui Li, Hao Yang
In this paper, we describe our submission to the NICT–SAP shared tasks of the 9th Workshop on Asian Translation (WAT 2022) under the team name "HwTscSU".
no code implementations • 21 Mar 2024 • Haofei Zhao, Yilun Liu, Shimin Tao, Weibin Meng, Yimeng Chen, Xiang Geng, Chang Su, Min Zhang, Hao Yang
Machine Translation Quality Estimation (MTQE) is the task of estimating the quality of machine-translated text in real time without the need for reference translations, which is of great importance for the development of MT.
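As an illustration of reference-free scoring (not the system described in this paper), a crude quality proxy can be computed from multilingual sentence embeddings: semantically adequate translations tend to lie close to their source in embedding space. The sketch below assumes the sentence-transformers library and the LaBSE model; the scoring scheme is a stand-in, not MTQE as proposed here.

```python
# Minimal reference-free QE proxy (illustrative only): score a source/
# translation pair by multilingual embedding similarity with LaBSE.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

def qe_score(source: str, translation: str) -> float:
    """Return a rough quality proxy in [-1, 1]; higher = more adequate."""
    src_emb, mt_emb = model.encode([source, translation], convert_to_tensor=True)
    return float(util.cos_sim(src_emb, mt_emb))

print(qe_score("Das Haus ist groß.", "The house is big."))
```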
1 code implementation • 28 Feb 2024 • Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Hongxia Ma, Li Zhang, Hao Yang, Tong Xiao
The second step involves preserving dataset diversity through a clustering process. In our experiment, CaR selected a subset containing only 1.96% of Alpaca's IT data, yet the underlying AlpaCaR model trained on this subset outperforms Alpaca by an average of 32.1% in GPT-4 evaluations.
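A minimal sketch of the clustering step described above: embed the instructions, cluster the embeddings for diversity, and keep the highest-scoring example per cluster. The quality scores and embeddings are assumed inputs, and the function names are illustrative; this is not the released CaR implementation.

```python
# Hedged sketch: select a diverse, high-quality subset via clustering.
import numpy as np
from sklearn.cluster import KMeans

def diverse_subset(embeddings: np.ndarray, scores: np.ndarray, k: int) -> list[int]:
    """Return indices of the top-scoring example from each of k clusters."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    selected = []
    for c in range(k):
        members = np.where(labels == c)[0]
        selected.append(int(members[np.argmax(scores[members])]))
    return selected

rng = np.random.default_rng(0)
idx = diverse_subset(rng.normal(size=(1000, 64)), rng.random(1000), k=20)
print(len(idx))  # 20 cluster representatives
```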
1 code implementation • 22 Dec 2023 • Yilun Liu, Ruihong Qiu, Yanran Tang, Hongzhi Yin, Zi Huang
Our prior work, CaT, is a replay-based framework with a balanced continual learning procedure, which designs a small yet effective memory bank that replays condensed versions of incoming graphs.
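For intuition only, the sketch below mimics the shape of such a condensed memory bank, using per-class feature centroids as stand-ins for condensed graphs. The actual CaT condensation is based on distribution matching over graphs and is not reproduced here; the class and its behavior are assumptions for illustration.

```python
# Illustrative replay memory bank in the spirit of graph condensation.
import torch

class CondensedMemoryBank:
    def __init__(self, per_class: int = 10):
        self.per_class = per_class
        self.features, self.labels = [], []

    def add_task(self, x: torch.Tensor, y: torch.Tensor) -> None:
        """'Condense' each class into a few noisy centroids (toy stand-in)."""
        for c in y.unique():
            centroid = x[y == c].mean(dim=0, keepdim=True)
            synth = centroid + 0.01 * torch.randn(self.per_class, x.size(1))
            self.features.append(synth)
            self.labels.append(torch.full((self.per_class,), int(c)))

    def replay(self):
        """Return all condensed examples for mixing into the next task."""
        return torch.cat(self.features), torch.cat(self.labels)
```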
1 code implementation • 18 Dec 2023 • Yanran Tang, Ruihong Qiu, Yilun Liu, Xue Li, Zi Huang
Previous neural legal case retrieval models mostly encode the unstructured raw text of a case into a case representation, which discards important legal structural information and leads to poor case representations; (2) Lengthy legal text limitation.
1 code implementation • 27 Nov 2023 • Yilun Liu, Difan Jiao, Ashton Anderson
Among the many tasks that Large Language Models (LLMs) have revolutionized is text classification.
2 code implementations • 22 Nov 2023 • Yilun Liu, Shimin Tao, Xiaofeng Zhao, Ming Zhu, Wenbing Ma, Junhao Zhu, Chang Su, Yutai Hou, Miao Zhang, Min Zhang, Hongxia Ma, Li Zhang, Hao Yang, Yanfei Jiang
Instruction tuning is crucial for enabling Large Language Models (LLMs) to respond to human instructions.
3 code implementations • 18 Sep 2023 • Yilun Liu, Ruihong Qiu, Zi Huang
Recent replay-based methods aim to solve this problem by updating the model using both (1) the entire incoming data and (2) a sampling-based memory bank that stores replayed graphs to approximate the distribution of historical data.
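The generic replay-based update described above can be sketched as follows. This is an illustrative training loop under stated assumptions: `train_step` is a user-supplied update function and the memory bank is a plain sampled buffer, not any specific method from the paper.

```python
# Generic replay-based continual-learning step (illustrative only):
# train on incoming data mixed with samples replayed from the memory bank.
import random

def replay_update(model, train_step, incoming: list, memory_bank: list,
                  sample_size: int = 100) -> None:
    replayed = random.sample(memory_bank, min(sample_size, len(memory_bank)))
    for example in incoming + replayed:
        train_step(model, example)  # assumed user-supplied update function
    # Retain a sample of the new task's data for future replay.
    memory_bank.extend(random.sample(incoming, min(sample_size, len(incoming))))
```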
1 code implementation • 15 Aug 2023 • Yilun Liu, Shimin Tao, Weibin Meng, Jingyu Wang, Wenbing Ma, Yanqing Zhao, Yuhang Chen, Hao Yang, Yanfei Jiang, Xun Chen
LogPrompt employs large language models (LLMs) to perform online log analysis tasks via a suite of advanced prompt strategies tailored for log tasks, which enhances LLMs' performance by up to 380.7% compared with simple prompts.
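A hedged sketch of what such a prompt strategy can look like for online log parsing is given below. The prompt wording is illustrative and differs from the exact prompts in the paper, and `llm` is an assumed text-completion callable rather than a specific API.

```python
# Illustrative prompt strategy for log parsing (not the paper's exact prompts).
PARSE_PROMPT = """You are a log analysis expert. Convert the raw log line into
a template by replacing variable parts (IDs, IPs, paths, numbers) with <*>.
Explain your reasoning briefly, then output only the template on the last line.

Log line: {log}"""

def parse_log_line(llm, log: str) -> str:
    """Send the prompt to an assumed LLM callable; keep the final line."""
    response = llm(PARSE_PROMPT.format(log=log))
    return response.strip().splitlines()[-1]

# Example with a stub LLM standing in for a real model:
template = parse_log_line(
    lambda p: "Reasoning...\nConnection from <*> port <*>",
    "Connection from 10.0.0.5 port 5514",
)
print(template)  # Connection from <*> port <*>
```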