1 code implementation • 16 Dec 2024 • Guangsheng Bao, Yanbin Zhao, Juncai He, Yue Zhang
Advanced large language models (LLMs) can generate text almost indistinguishable from human-written text, highlighting the importance of LLM-generated text detection.
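As a rough illustration of the detection task, a common zero-shot baseline (not necessarily the method of the paper above) scores a text's average per-token log-probability under a language model and flags unusually high-likelihood text as machine-generated; the threshold below is purely illustrative and would be tuned on held-out data in practice.

```python
# Minimal likelihood-based detector sketch: LLM-generated text tends to
# receive higher average log-probability under a scoring model than
# human-written text. Illustrative baseline only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss = mean cross-entropy per token
    return -out.loss.item()          # mean log-probability per token

def looks_machine_generated(text: str, threshold: float = -3.0) -> bool:
    # Threshold is illustrative; calibrate on labeled examples.
    return avg_log_likelihood(text) > threshold
```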
no code implementations • 28 Oct 2024 • Yixuan Weng, Minjun Zhu, Guangsheng Bao, Hongbo Zhang, Jindong Wang, Yue Zhang, Linyi Yang
In research, the papers generated by the CycleResearcher model achieved a score of 5.36 in simulated peer reviews, surpassing the preprint level of 5.24 from human experts and approaching the accepted-paper level of 5.69.
1 code implementation • 18 Mar 2024 • Cunxiang Wang, Ruoxi Ning, Boqi Pan, Tonghui Wu, Qipeng Guo, Cheng Deng, Guangsheng Bao, Xiangkun Hu, Zheng Zhang, Qian Wang, Yue Zhang
The rapid advancement of Large Language Models (LLMs) has introduced a new frontier in natural language processing, particularly in understanding and processing long-context information.
1 code implementation • 25 Feb 2024 • Guangsheng Bao, Hongbo Zhang, Cunxiang Wang, Linyi Yang, Yue Zhang
Chain-of-thought emerges as a promising technique for eliciting reasoning capabilities from Large Language Models (LLMs).
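For readers unfamiliar with the technique, a minimal chain-of-thought prompt simply includes a worked example whose answer is reasoned out step by step, encouraging the model to produce intermediate reasoning before its final answer. In the sketch below, `ask_llm` is a hypothetical stand-in for any text-completion API.

```python
# Minimal chain-of-thought prompting sketch (illustrative only).
COT_PROMPT = """\
Q: A farmer has 15 apples, gives away 6, then buys 4 more. How many
apples does the farmer have?
A: Start with 15. Giving away 6 leaves 15 - 6 = 9. Buying 4 more gives
9 + 4 = 13. The answer is 13.

Q: {question}
A:"""

def solve_with_cot(question: str, ask_llm) -> str:
    # ask_llm: hypothetical function mapping a prompt string to a completion.
    return ask_llm(COT_PROMPT.format(question=question))
```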
1 code implementation • 26 Dec 2023 • Linyi Yang, Shuibai Zhang, Zhuohao Yu, Guangsheng Bao, Yidong Wang, Jindong Wang, Ruochen Xu, Wei Ye, Xing Xie, Weizhu Chen, Yue Zhang
Large Language Models (LLMs) exhibit emergent in-context learning abilities through prompt engineering.
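In-context learning works in a similar spirit but without explicit reasoning steps: the task behaviour is induced purely by demonstrations placed in the prompt, with no weight updates. A minimal sketch, again using a hypothetical `ask_llm` completion function:

```python
# Minimal few-shot in-context learning sketch (illustrative only).
FEW_SHOT = """\
Review: The plot was dull and the acting wooden. Sentiment: negative
Review: A moving, beautifully shot film. Sentiment: positive
Review: {review} Sentiment:"""

def classify_sentiment(review: str, ask_llm) -> str:
    # The model infers the labeling task from the two demonstrations.
    return ask_llm(FEW_SHOT.format(review=review)).strip()
```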
2 code implementations • 8 Oct 2023 • Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, Yue Zhang
Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks.
1 code implementation • 22 May 2023 • Guangsheng Bao, Zhiyang Teng, Hao Zhou, Jianhao Yan, Yue Zhang
However, current non-autoregressive translation (NAT) models still show a significant performance gap compared with their autoregressive (AT) counterparts.
1 code implementation • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns.
no code implementations • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Sequence-to-sequence (seq2seq) models have been widely used for natural language processing, computer vision, and other deep learning tasks.
1 code implementation • 7 Apr 2023 • Guangsheng Bao, Zebin Ou, Yue Zhang
To address this issue, we propose an adaptive model, GEMINI, that integrates a rewriter and a generator to mimic the sentence rewriting and abstracting techniques, respectively.
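A minimal sketch of the adaptive rewrite-or-generate idea (illustrative only, not GEMINI's actual architecture); `router`, `rewriter`, and `generator` are hypothetical components standing in for learned models:

```python
# For each summary sentence, a router decides whether to rewrite a
# specific source sentence or to generate the sentence abstractively
# from the whole document. Sketch under assumed components.
def adaptive_summarize(doc_sentences, router, rewriter, generator, n=3):
    summary = []
    for _ in range(n):
        src_idx = router(doc_sentences, summary)  # source index, or None
        if src_idx is not None:
            summary.append(rewriter(doc_sentences[src_idx]))   # rewrite mode
        else:
            summary.append(generator(doc_sentences, summary))  # abstract mode
    return " ".join(summary)
```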
1 code implementation • 13 Jul 2022 • Guangsheng Bao, Yue Zhang
The rewriting method for text summarization combines extractive and abstractive approaches, improving the conciseness and readability of extractive summaries using an abstractive model.
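A minimal sketch of the extract-then-rewrite pipeline described above (not the paper's exact model); `score` and `rewrite` stand in for an extractive salience model and an abstractive rewriter:

```python
# First select salient sentences extractively, then paraphrase each one
# abstractively to improve conciseness and readability. Sketch only;
# assumes sentences in the document are unique.
def summarize(document_sentences, score, rewrite, k=3):
    salient = sorted(document_sentences, key=score, reverse=True)[:k]
    salient.sort(key=document_sentences.index)  # restore document order
    return " ".join(rewrite(s) for s in salient)
```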
1 code implementation • ACL 2021 • Guangsheng Bao, Yue Zhang, Zhiyang Teng, Boxing Chen, Weihua Luo
However, studies show that when we further enlarge the translation unit to a whole document, supervised training of the Transformer can fail.
2 code implementations • 31 Jan 2021 • Guangsheng Bao, Yue Zhang
Extractive summarization suffers from irrelevance, redundancy and incoherence.
1 code implementation • EMNLP 2020 • Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, Yue Zhang
Deep learning has led to significant improvements in text summarization, with various methods investigated and improved ROUGE scores reported over the years.
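ROUGE, mentioned above, measures n-gram overlap between a system summary and a reference; it can be computed with the open-source `rouge-score` package (`pip install rouge-score`):

```python
# Computing ROUGE-1, ROUGE-2, and ROUGE-L for a toy example.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score("the cat sat on the mat",       # reference
                      "a cat was sitting on the mat") # system output
print(scores["rouge1"].fmeasure)  # unigram-overlap F1
```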