Search Results for author: Yilun Liu

Found 9 papers, 7 papers with code

HwTscSU’s Submissions on WAT 2022 Shared Task

no code implementations • WAT 2022 • Yilun Liu, Zhen Zhang, Shimin Tao, Junhui Li, Hao Yang

In this paper we describe our submission to the NICT-SAP shared task of the 9th Workshop on Asian Translation (WAT 2022) under the team name "HwTscSU".

Domain Adaptation • NMT +1

From Handcrafted Features to LLMs: A Brief Survey for Machine Translation Quality Estimation

no code implementations • 21 Mar 2024 • Haofei Zhao, Yilun Liu, Shimin Tao, Weibin Meng, Yimeng Chen, Xiang Geng, Chang Su, Min Zhang, Hao Yang

Machine Translation Quality Estimation (MTQE) is the task of estimating the quality of machine-translated text in real time without reference translations, which is of great importance for the development of MT.

Machine Translation • Sentence

Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation

1 code implementation • 28 Feb 2024 • Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Hongxia Ma, Li Zhang, Hao Yang, Tong Xiao

The second step involves preserving dataset diversity through a clustering process. In our experiments, CaR selected a subset containing only 1.96% of Alpaca's IT data, yet the AlpaCaR model trained on this subset outperforms Alpaca by an average of 32.1% in GPT-4 evaluations.
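The two steps of CaR — quality ranking plus diversity-preserving clustering — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the tiny hand-rolled k-means, the 2-D embeddings, and the `per_cluster` parameter are all our simplifications.

```python
# Hedged sketch of clustering-and-ranking instruction selection in the
# spirit of CaR; names and the selection heuristic are illustrative.
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over 2-D points; returns a cluster id per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2
                + (p[1] - centers[c][1]) ** 2,
            )
        # Move each center to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = (
                    sum(m[0] for m in members) / len(members),
                    sum(m[1] for m in members) / len(members),
                )
    return assign

def select_instructions(embeddings, quality_scores, k, per_cluster=1):
    """Rank instructions by quality, then keep the top-scored items from
    every cluster so the selected subset stays diverse."""
    assign = kmeans(embeddings, k)
    selected = []
    for c in range(k):
        idx = [i for i in range(len(embeddings)) if assign[i] == c]
        idx.sort(key=lambda i: quality_scores[i], reverse=True)
        selected.extend(idx[:per_cluster])
    return sorted(selected)
```

Taking the best item per cluster, rather than the globally best items, is what keeps a near-duplicate-heavy pool from collapsing onto one topic.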

Clustering

PUMA: Efficient Continual Graph Learning with Graph Condensation

1 code implementation • 22 Dec 2023 • Yilun Liu, Ruihong Qiu, Yanran Tang, Hongzhi Yin, Zi Huang

Our prior work, CaT, is a replay-based framework with a balanced continual learning procedure; it condenses incoming graphs into a small yet effective memory bank for replaying data.

Continual Learning • Graph Learning +1

CaseGNN: Graph Neural Networks for Legal Case Retrieval with Text-Attributed Graphs

1 code implementation • 18 Dec 2023 • Yanran Tang, Ruihong Qiu, Yilun Liu, Xue Li, Zi Huang

Previous neural legal case retrieval models mostly encode the unstructured raw text of a case into a case representation, which causes the loss of important legal structural information in the case and leads to poor case representations; (2) lengthy legal text limitation.

Graph Attention • Information Retrieval +1

CaT: Balanced Continual Graph Learning with Graph Condensation

3 code implementations • 18 Sep 2023 • Yilun Liu, Ruihong Qiu, Zi Huang

Recent replay-based methods intend to solve this problem by updating the model using both (1) the entire new-coming data and (2) a sampling-based memory bank that stores replayed graphs to approximate the distribution of historical data.
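The memory-bank idea above can be sketched as a minimal replay loop. This is our simplification, not CaT's code: `condense_graph` here merely subsamples nodes as a stand-in for actual graph condensation, and the class and budget names are invented for illustration.

```python
# Hedged sketch of a balanced replay memory bank, loosely following the
# CaT description: condense each incoming graph into a small memory, then
# replay an equal amount from every stored task.
import random

def condense_graph(nodes, budget, seed=0):
    """Stand-in condensation: keep a small random subset of nodes.
    (CaT uses graph condensation here, not subsampling.)"""
    rng = random.Random(seed)
    return rng.sample(nodes, min(budget, len(nodes)))

class MemoryBank:
    def __init__(self, budget_per_task):
        self.budget = budget_per_task
        self.memories = []  # one condensed graph per seen task

    def add_task(self, nodes):
        """Condense the incoming graph before storing it, so the bank
        stays small no matter how large the new graph is."""
        self.memories.append(condense_graph(nodes, self.budget))

    def balanced_batch(self):
        """Replay the same number of nodes from every stored task, so no
        single task (old or new) dominates the model update."""
        per_task = min(len(m) for m in self.memories)
        return [m[:per_task] for m in self.memories]
```

The balance constraint is the point of contrast with the sampling-based methods described above, which train on the entire incoming graph plus a memory and can therefore be skewed toward the newest task.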

Continual Learning • Graph Learning

Interpretable Online Log Analysis Using Large Language Models with Prompt Strategies

1 code implementation • 15 Aug 2023 • Yilun Liu, Shimin Tao, Weibin Meng, Jingyu Wang, Wenbing Ma, Yanqing Zhao, Yuhang Chen, Hao Yang, Yanfei Jiang, Xun Chen

LogPrompt employs large language models (LLMs) to perform online log analysis tasks via a suite of advanced prompt strategies tailored for log tasks, which enhances LLMs' performance by up to 380.7% compared with simple prompts.
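The "tailored prompt versus simple prompt" idea can be illustrated with a small prompt-construction helper. The template wording below is ours, not LogPrompt's, and any `query_llm` client that would consume the string is assumed rather than specified by the paper.

```python
# Illustrative sketch of wrapping raw log lines in a task-tailored prompt
# for LLM-based anomaly detection, in the spirit of LogPrompt's strategies.
def build_anomaly_prompt(logs):
    """Build one prompt that asks for a per-line verdict plus the reason,
    instead of a bare 'is this log anomalous?' question."""
    header = (
        "You are a log analysis expert. For each log line below, decide "
        "whether it is anomalous and name the key token that drove your "
        "decision.\n"
    )
    body = "\n".join(f"{i + 1}. {line}" for i, line in enumerate(logs))
    footer = "\nAnswer as '<index>: normal|anomalous - <reason>'."
    return header + body + footer
```

Requiring a reason alongside each verdict is one way to make the output interpretable for operators, which matches the paper's framing of online log analysis.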

Anomaly Detection • Log Parsing +1
