1 code implementation • ACL 2022 • Xunjian Yin, Xiaojun Wan
With the rapid development of deep learning, the Seq2Seq paradigm has become prevalent for end-to-end data-to-text generation, and BLEU scores have been increasing in recent years.
1 code implementation • 10 Oct 2024 • Sitao Cheng, Liangming Pan, Xunjian Yin, Xinyi Wang, William Yang Wang
To support this investigation, we introduce ECHOQA, a benchmark spanning scientific, factual, and commonsense knowledge.
2 code implementations • 6 Oct 2024 • Xunjian Yin, Xinyi Wang, Liangming Pan, Xiaojun Wan, William Yang Wang
The rapid advancement of large language models (LLMs) has significantly enhanced the capabilities of AI-driven agents across various tasks.
1 code implementation • 26 Jun 2024 • Xinyu Hu, Li Lin, Mingqi Gao, Xunjian Yin, Xiaojun Wan
The evaluation of natural language generation (NLG) tasks is a significant and longstanding research area.
no code implementations • 19 Jun 2024 • Junzhe Zhang, Huixuan Zhang, Xunjian Yin, Baizhou Huang, Xu Zhang, Xinyu Hu, Xiaojun Wan
Our benchmark facilitates independent correction of misreading and misrecognition errors by editing the corresponding knowledge component.
no code implementations • 13 Jun 2024 • Xu Zhang, Xunjian Yin, Xiaojun Wan
While substantial advancements have been made in developing large language models (LLMs), achieving control over their behavior can be difficult.
no code implementations • 29 Feb 2024 • Junzhe Zhang, Huixuan Zhang, Xunjian Yin, Xiaojun Wan
News image captioning requires a model to generate an informative, entity-rich caption given a news image and its associated news article.
1 code implementation • 18 Feb 2024 • Xunjian Yin, Xu Zhang, Jie Ruan, Xiaojun Wan
In recent years, substantial advancements have been made in the development of large language models, achieving remarkable performance across diverse tasks.
1 code implementation • 9 Dec 2023 • Xunjian Yin, Jin Jiang, Liming Yang, Xiaojun Wan
The imperative task of revising or updating the knowledge stored within large language models arises from two distinct sources: intrinsic errors inherent in the model, which should be corrected, and outdated knowledge due to external shifts in the real world, which should be updated.
1 code implementation • 23 Oct 2023 • Xunjian Yin, Baizhou Huang, Xiaojun Wan
With the rapid development of NLP, large language models (LLMs) now excel at various tasks across multiple domains.
no code implementations • 25 Jul 2023 • Xunjian Yin, Xiaojun Wan
With the development of pre-trained models and the incorporation of phonetic and graphic information, neural models have achieved high scores in Chinese Spelling Check (CSC).
1 code implementation • 5 Apr 2023 • Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, Xiaojun Wan
Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory.
1 code implementation • 15 Nov 2022 • Xunjian Yin, Xinyu Hu, Jin Jiang, Xiaojun Wan
Chinese Spelling Check (CSC) aims to detect and correct error tokens in Chinese contexts, which has a wide range of applications.