Search Results for author: Yafu Li

Found 16 papers, 14 papers with code

Prompt-Driven Neural Machine Translation

1 code implementation • Findings (ACL) 2022 • Yafu Li, Yongjing Yin, Jing Li, Yue Zhang

Neural machine translation (NMT) has achieved significant performance improvements in recent years.

Machine Translation • NMT • +1

LexMatcher: Dictionary-centric Data Collection for LLM-based Machine Translation

1 code implementation • 3 Jun 2024 • Yongjing Yin, Jiali Zeng, Yafu Li, Fandong Meng, Yue Zhang

The fine-tuning of open-source large language models (LLMs) for machine translation has recently received considerable attention, marking a shift from traditional neural machine translation towards data-centric research.

Data Augmentation • Machine Translation • +2

Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text

1 code implementation • 21 May 2024 • Yafu Li, Zhilin Wang, Leyang Cui, Wei Bi, Shuming Shi, Yue Zhang

To this end, we propose a novel detection framework, paraphrased text span detection (PTD), aiming to identify paraphrased text spans within a text.

Diversity • Text Detection

What Have We Achieved on Non-autoregressive Translation?

1 code implementation • 21 May 2024 • Yafu Li, Huajian Zhang, Jianhao Yan, Yongjing Yin, Yue Zhang

Recent advances have made non-autoregressive translation (NAT) comparable to autoregressive translation (AT) methods.

Translation

Potential and Challenges of Model Editing for Social Debiasing

no code implementations • 21 Feb 2024 • Jianhao Yan, Futing Wang, Yafu Li, Yue Zhang

Large language models (LLMs) trained on vast corpora suffer from inevitable stereotype biases.

Model Editing

Understanding In-Context Learning from Repetitions

1 code implementation • 30 Sep 2023 • Jianhao Yan, Jin Xu, Chiyu Song, Chenming Wu, Yafu Li, Yue Zhang

This paper explores the elusive mechanism underpinning in-context learning in Large Language Models (LLMs).

In-Context Learning • Text Generation

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

1 code implementation • 3 Sep 2023 • Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi

While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.

Hallucination • World Knowledge

An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

1 code implementation • 17 Aug 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang

Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a model forgets previously learned information while acquiring new knowledge.

Decoder • Reading Comprehension

Revisiting Cross-Lingual Summarization: A Corpus-based Study and A New Benchmark with Improved Annotation

1 code implementation • 8 Jul 2023 • Yulong Chen, Huajian Zhang, Yijie Zhou, Xuefeng Bai, Yueguan Wang, Ming Zhong, Jianhao Yan, Yafu Li, Judy Li, Michael Zhu, Yue Zhang

Additionally, based on the same intuition, we propose a 2-Step method, which takes both the conversation and the summary as input to simulate the human annotation process.

MAGE: Machine-generated Text Detection in the Wild

2 code implementations • 22 May 2023 • Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Zhilin Wang, Longyue Wang, Linyi Yang, Shuming Shi, Yue Zhang

In practical scenarios, however, the detector faces texts from various domains or LLMs without knowing their sources.

Face Swapping • Story Generation • +1

GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective

1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang

Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.

Natural Language Understanding • Out-of-Distribution Generalization

Multi-Granularity Optimization for Non-Autoregressive Translation

1 code implementation • 20 Oct 2022 • Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang

Despite its low latency, non-autoregressive machine translation (NAT) suffers from severe performance deterioration due to its naive independence assumption.

Machine Translation • Translation
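As a minimal sketch of that independence assumption (notation here is illustrative, not taken from the paper): an autoregressive model factorizes the target distribution as p(y|x) = ∏_t p(y_t | y_<t, x), whereas NAT assumes the target tokens are conditionally independent given the source, p(y|x) = ∏_t p(y_t | x). This lets all tokens be decoded in parallel, which yields the low latency, but it discards inter-token dependencies, which is the source of the performance deterioration.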

On Compositional Generalization of Neural Machine Translation

1 code implementation • ACL 2021 • Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang

Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks such as WMT.

Domain Generalization • Machine Translation • +3
