1 code implementation • COLING 2022 • Guobiao Zhang, Wenpeng Lu, Xueping Peng, Shoujin Wang, Baoshuo Kan, Rui Yu
Word sense disambiguation (WSD), which identifies the most suitable meaning of an ambiguous word in a given context according to a predefined sense inventory, is one of the most classical and challenging tasks in natural language processing.
no code implementations • 1 Apr 2025 • Hongru Ma, Yanjie Liang, Jiasheng Si, Weiyu Zhang, Hongjiao Guan, Chaoqun Zheng, Bing Xu, Wenpeng Lu
Large language models (LLMs) have revolutionized code generation, significantly enhancing developer productivity.
1 code implementation • 23 Mar 2025 • Youhui Zuo, Sibo Wei, Chen Zhang, Zhuorui Liu, Wenpeng Lu, Dawei Song
With the advancements in long-context inference capabilities of large language models (LLMs), the KV cache has become one of the foundational components.
1 code implementation • 17 Nov 2024 • Sibo Wei, Xueping Peng, Yi-Fei Wang, Jiasheng Si, Weiyu Zhang, Wenpeng Lu, Xiaoming Wu, Yinglong Wang
The rise of large language models (LLMs) has driven significant progress in medical applications, including traditional Chinese medicine (TCM).
no code implementations • 2 Nov 2024 • Dongxu Liu, Bing Xu, Yinzhuo Chen, Bufan Xu, Wenpeng Lu, Muyun Yang, Tiejun Zhao
Reinforcement Learning from Human Feedback (RLHF) has been proven to be an effective method for preference alignment of large language models (LLMs) and is widely used in the post-training process of LLMs.
1 code implementation • 6 Oct 2024 • Yongheng Zhang, Qiguang Chen, Jingxuan Zhou, Peng Wang, Jiasheng Si, Jin Wang, Wenpeng Lu, Libo Qin
To address these challenges, we propose Wrong-of-Thought (WoT), which includes two core modules: (1) Multi-Perspective Verification: a multi-perspective verification method for accurately refining the reasoning process and result, and (2) Wrong Information Utilization: utilizing wrong information to alert LLMs and reduce the probability of LLMs making the same mistakes.
1 code implementation • 20 Aug 2024 • Jiasheng Si, Yibo Zhao, Yingjie Zhu, Haiyang Zhu, Wenpeng Lu, Deyu Zhou
In this paper, we introduce CheckWhy, a challenging dataset tailored to a novel causal fact verification task: checking the truthfulness of the causal relation within claims through rigorous reasoning steps.
no code implementations • 15 Jun 2024 • Libo Qin, Fuxuan Wei, Qiguang Chen, Jingxuan Zhou, Shijue Huang, Jiasheng Si, Wenpeng Lu, Wanxiang Che
To solve this problem, we present the pioneering work of Cross-task Interactive Prompting (CroPrompt) for SLU, which enables the model to interactively exchange information across the correlated tasks in SLU.
1 code implementation • 7 Mar 2024 • Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Bing Xu, Tiejun Zhao, Wenpeng Lu
The proliferation of open-source Large Language Models (LLMs) underscores the pressing need for evaluation methods.
no code implementations • 5 Dec 2023 • Zhufeng Shao, Shoujin Wang, Qian Zhang, Wenpeng Lu, Zhao Li, Xueping Peng
This methodological rigor establishes a cohesive framework for the impartial evaluation of diverse NBR approaches.
no code implementations • ICCV 2023 • Baoshuo Kan, Teng Wang, Wenpeng Lu, XianTong Zhen, Weili Guan, Feng Zheng
Pre-trained vision-language models, e.g., CLIP, working with manually designed prompts have demonstrated great capacity for transfer learning.
1 code implementation • 15 Apr 2023 • Sibo Wei, Wenpeng Lu, Xueping Peng, Shoujin Wang, Yi-Fei Wang, Weiyu Zhang
Although existing works have attempted to utilize Seq2Seq, reinforcement learning, or contrastive learning to solve the problem, two challenges remain: how to correctly capture question focus to model its semantic intention, and how to obtain reliable datasets to fairly evaluate performance.
1 code implementation • 5 Nov 2022 • Rui Yu, Yifeng Li, Wenpeng Lu, Longbing Cao
In natural language processing (NLP), the context of a word or sentence plays an essential role.
no code implementations • 7 Sep 2022 • Zhufeng Shao, Shoujin Wang, Qian Zhang, Wenpeng Lu, Zhao Li, Xueping Peng
Different studies often evaluate NBR approaches on different datasets, under different experimental settings, making it hard to fairly and effectively compare the performance of different NBR approaches.
no code implementations • 15 Jul 2022 • Rongyao Wang, Wenpeng Lu
In MINS, a news encoder based on self-attention is devised to learn an informative embedding for each piece of news, and a novel parallel interest network is devised to extract the multiple potential interests embedded in the news sequence in preparation for the subsequent next-news recommendations.
no code implementations • 29 Jan 2022 • Qian Zhang, Wenpeng Lu
Most GNN-based SBRs rely on a strong assumption of adjacent dependency, under which any two adjacent items in a session are necessarily dependent.
no code implementations • 12 Oct 2021 • Rongyao Wang, Wenpeng Lu, Shoujin Wang, Xueping Peng, Hao Wu, Qian Zhang
News recommender systems are essential for helping users efficiently and effectively find interesting news among a large volume of news.
1 code implementation • COLING 2020 • Xu Zhang, Yifeng Li, Wenpeng Lu, Ping Jian, Guoqiang Zhang
Sentence intention matching is vital for natural language understanding.
no code implementations • 30 May 2020 • Shoujin Wang, Longbing Cao, Liang Hu, Shlomo Berkovsky, Xiaoshui Huang, Lin Xiao, Wenpeng Lu
Most existing TBRSs recommend the next item by only modeling the intra-transaction dependency within the current transaction, while ignoring the inter-transaction dependency with recent transactions that may also affect the next item.
no code implementations • SEMEVAL 2017 • Fanqing Meng, Wenpeng Lu, Yuteng Zhang, Jinyong Cheng, Yuehan Du, Shuwang Han
This paper reports the details of our submissions to Task 1 of SemEval-2017.
no code implementations • SEMEVAL 2017 • Fanqing Meng, Wenpeng Lu, Yuteng Zhang, Ping Jian, Shumin Shi, He-Yan Huang
Our runs mainly make use of word embeddings and a knowledge-based method.