Search Results for author: Wanli Yang

Found 3 papers, 1 paper with code

The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse

no code implementations • 15 Feb 2024 • Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, Xueqi Cheng

In this work, we reveal a critical phenomenon: even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.

Benchmarking Model Editing
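
To make the setting concrete, here is a minimal, hypothetical sketch of the kind of single parameter edit studied in this line of work: a rank-one update that rewrites one key-to-value association in a weight matrix. The function name, shapes, and update rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a single rank-one model edit (ROME-style);
# names and shapes are illustrative, not the paper's implementation.
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_new: torch.Tensor) -> torch.Tensor:
    """Return W' such that W' @ k == v_new, via a rank-one update along k."""
    v_old = W @ k
    # Spread the residual (v_new - v_old) over the direction of k.
    delta = torch.outer(v_new - v_old, k) / (k @ k)
    return W + delta

# Toy usage: rewrite one "fact" stored in a weight matrix.
W = torch.randn(8, 4)
k = torch.randn(4)        # key vector representing the edited subject
v_new = torch.randn(8)    # desired new value for that key
W_edited = rank_one_edit(W, k, v_new)
assert torch.allclose(W_edited @ k, v_new, atol=1e-5)
```

The paper's point is about the downstream effect of such edits: even one update of this kind can degrade the model broadly, not just on the edited fact.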

Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts for Open-Domain QA?

no code implementations • 22 Jan 2024 • Hexiang Tan, Fei Sun, Wanli Yang, Yuanzhuo Wang, Qi Cao, Xueqi Cheng

While auxiliary information has become a key to enhancing Large Language Models (LLMs), relatively little is known about how LLMs merge these contexts, specifically contexts generated by LLMs and those retrieved from external sources.
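
As a rough illustration of the setup this paper probes, the sketch below builds an open-domain QA prompt that places a model-generated context alongside a retrieved one. The prompt format and helper name are assumptions for illustration, not the paper's protocol.

```python
# Hypothetical sketch of the studied setting: a QA prompt mixing an
# LLM-generated context with a retrieved context. Format is illustrative.
def build_qa_prompt(question: str, generated_ctx: str, retrieved_ctx: str) -> str:
    return (
        "Context 1 (model-generated):\n" + generated_ctx + "\n\n"
        "Context 2 (retrieved):\n" + retrieved_ctx + "\n\n"
        "Question: " + question + "\nAnswer:"
    )

prompt = build_qa_prompt(
    question="Who wrote 'On the Origin of Species'?",
    generated_ctx="Charles Darwin published 'On the Origin of Species' in 1859.",
    retrieved_ctx="'On the Origin of Species' is an 1859 work by Charles Darwin.",
)
print(prompt)
```

The interesting cases are ones where the two contexts disagree, since that is where the model's preference for generated versus retrieved information becomes visible.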

Limited Data Rolling Bearing Fault Diagnosis with Few-shot Learning

1 code implementation • IEEE Access 2019 • Ansi Zhang, Shaobo Li, Yuxin Cui, Wanli Yang, Rongzhi Dong, Jianjun Hu

In this study, we propose a deep neural network based few-shot learning approach for rolling bearing fault diagnosis with limited data.

Few-Shot Learning
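
Since this entry has a code implementation, a minimal sketch of the general approach may help: a small 1-D convolutional encoder that embeds vibration signals and classifies a query by its nearest labeled support example, in the spirit of siamese/metric-based few-shot learning. Layer sizes, names, and the nearest-neighbor rule here are illustrative assumptions, not the released code.

```python
# Hypothetical sketch of metric-based few-shot fault diagnosis;
# architecture and sizes are illustrative, not the authors' released code.
import torch
import torch.nn as nn

class SignalEncoder(nn.Module):
    """Embed a 1-D vibration signal into a fixed-size feature vector."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):  # x: (batch, 1, signal_length)
        return self.net(x)

def predict_fault(encoder, query, support_signals, support_labels):
    """Classify a query signal by its nearest support embedding."""
    with torch.no_grad():
        q = encoder(query)                # (1, embed_dim)
        s = encoder(support_signals)      # (n_support, embed_dim)
        dists = torch.cdist(q, s)         # (1, n_support)
        return support_labels[dists.argmin()]

# Toy usage: 3 labeled support signals, one unlabeled query.
encoder = SignalEncoder()
support = torch.randn(3, 1, 2048)
labels = ["normal", "inner-race fault", "outer-race fault"]
query = torch.randn(1, 1, 2048)
print(predict_fault(encoder, query, support, labels))
```

Comparing embeddings rather than training a per-class classifier is what makes the approach usable when only a handful of labeled fault samples are available.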
