no code implementations • 20 Apr 2025 • Yiting Ran, Xintao Wang, Tian Qiu, Jiaqing Liang, Yanghua Xiao, Deqing Yang
Recent advances in large language models (LLMs) have enabled social simulation through multi-agent systems.
no code implementations • 20 Mar 2025 • Ruihan Yang, Fanghua Ye, Jian Li, Siyu Yuan, Yikai Zhang, Zhaopeng Tu, Xiaolong Li, Deqing Yang
In this work, we introduce Critique-Guided Improvement (CGI), a novel two-player framework comprising an actor model that explores an environment and a critic model that generates detailed natural language feedback.
1 code implementation • 10 Mar 2025 • Tianhe Lin, Jian Xie, Siyu Yuan, Deqing Yang
Test-time compute is emerging as a new paradigm for enhancing language models' complex multi-step reasoning capabilities, as demonstrated by the success of OpenAI's o1 and o3, as well as DeepSeek's R1.
1 code implementation • 20 Feb 2025 • Yuchen Shi, Siqi Cai, Zihan Xu, Yuei Qin, Gang Li, Hang Shao, Jiawei Chen, Deqing Yang, Ke Li, Xing Sun
Experiments on three datasets demonstrate that FlowAgent not only adheres to workflows but also effectively manages OOW queries, highlighting its dual strengths in compliance and flexibility.
no code implementations • 11 Dec 2024 • Guochao Jiang, Ziqin Luo, Chengwei Hu, Zepeng Ding, Deqing Yang
Many previous models of named entity recognition (NER) suffer from the Out-of-Entity (OOE) problem, i.e., the tokens in the entity mentions of the test samples have never appeared in the training samples, which hinders the achievement of satisfactory performance.
1 code implementation • 6 Nov 2024 • Jin Xiao, Bowei Zhang, Qianyu He, Jiaqing Liang, Feng Wei, Jinglei Chen, Zujie Liang, Deqing Yang, Yanghua Xiao
To improve the LLMs' quotation generation abilities, we construct a bilingual knowledge base that is broad in scope and rich in dimensions, containing up to 32,022 quotes.
1 code implementation • 29 Oct 2024 • Jiahe Bai, Baojian Zhou, Deqing Yang, Yanghua Xiao
Standard iterative methods require accessing the whole graph per iteration, making them time-consuming for large-scale graphs.
no code implementations • 25 Oct 2024 • Xuetian Chen, Hangcheng Li, Jiaqing Liang, Sihang Jiang, Deqing Yang
Given the lack of high-quality training data for GUI-related tasks in existing work, this paper aims to enhance the GUI understanding and interacting capabilities of LVLMs through a data-driven approach.
1 code implementation • 19 Oct 2024 • Baojian Zhou, Yifan Sun, Reza Babanezhad Harikandeh, Xingzhi Guo, Deqing Yang, Yanghua Xiao
We propose to use the locally evolving set process, a novel framework to characterize algorithm locality, and demonstrate that many standard solvers can be effectively localized.
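The localization idea behind such solvers can be illustrated with a minimal push-style approximation of personalized PageRank (a standard local method, not the paper's exact algorithm; all names here are illustrative). Only nodes carrying enough residual mass are ever touched, so cost scales with the output size rather than the whole graph:

```python
from collections import deque

def ppr_push(adj, source, alpha=0.15, eps=1e-6):
    """Approximate personalized PageRank via local 'push' updates.

    adj: dict mapping node -> list of neighbors (undirected graph).
    Returns a sparse dict of approximate PPR values; nodes far from
    the source are never visited.
    """
    p = {}                      # settled (approximate) PPR mass
    r = {source: 1.0}           # residual mass, initially all at the source
    queue = deque([source])
    while queue:
        u = queue.popleft()
        ru = r.get(u, 0.0)
        deg = len(adj[u])
        if ru < eps * deg:      # residual too small: skip this node
            continue
        p[u] = p.get(u, 0.0) + alpha * ru   # settle an alpha-fraction
        r[u] = 0.0
        push = (1 - alpha) * ru / deg       # spread the rest to neighbors
        for v in adj[u]:
            old = r.get(v, 0.0)
            r[v] = old + push
            # enqueue v only when it newly crosses the work threshold
            if old < eps * len(adj[v]) <= r[v]:
                queue.append(v)
    return p
```

Because mass is conserved (settled plus residual always sums to one), the total error is bounded by the leftover residual, which the threshold keeps tiny.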
1 code implementation • 18 Oct 2024 • Ruihan Yang, Caiqi Zhang, Zhisong Zhang, Xinting Huang, Sen yang, Nigel Collier, Dong Yu, Deqing Yang
To tackle these challenges, we propose a refinement-based data collection framework and a two-stage training pipeline.
1 code implementation • 23 Sep 2024 • Nianqi Li, Siyu Yuan, Jiangjie Chen, Jiaqing Liang, Feng Wei, Zujie Liang, Deqing Yang, Yanghua Xiao
Historical analogy, which compares known past events with contemporary but unfamiliar events, is an important ability that helps people make decisions and understand the world.
1 code implementation • 3 Sep 2024 • Yuchen Shi, Guochao Jiang, Tian Qiu, Deqing Yang
Relation extraction (RE) in complex scenarios faces challenges such as diverse relation types and ambiguous relations between entities within a single sentence, leading to the poor performance of pure "text-in, text-out" language models (LMs).
1 code implementation • 27 Jun 2024 • Yiting Ran, Xintao Wang, Rui Xu, Xinfeng Yuan, Jiaqing Liang, Deqing Yang, Yanghua Xiao
Role-playing agents (RPAs) have been a popular application area for large language models (LLMs), attracting significant interest from both industry and academia. While existing RPAs portray characters' knowledge and tones well, they face challenges in capturing their minds, especially for small role-playing language models (RPLMs).
1 code implementation • 20 Jun 2024 • Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Dongsheng Li, Deqing Yang
In this paper, we introduce EvoAgent, a generic method to automatically extend expert agents to multi-agent systems via the evolutionary algorithm, thereby improving the effectiveness of LLM-based agents in solving tasks.
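The evolutionary idea of extending one expert agent into a population can be sketched with a toy loop (a generic evolutionary-algorithm skeleton, not EvoAgent's actual implementation; `mutate` and `fitness` are user-supplied placeholders):

```python
import random

def evolve_agents(seed_agent, mutate, fitness, population=6, generations=5, rng=None):
    """Generic evolutionary loop: expand a single seed agent into a
    population, then repeatedly keep the fittest half and refill the
    pool with mutated offspring of the survivors."""
    rng = rng or random.Random(0)
    pool = [seed_agent] + [mutate(seed_agent, rng) for _ in range(population - 1)]
    for _ in range(generations):
        survivors = sorted(pool, key=fitness, reverse=True)[: population // 2]
        offspring = [mutate(rng.choice(survivors), rng)
                     for _ in range(population - len(survivors))]
        pool = survivors + offspring
    return max(pool, key=fitness)
```

In an LLM setting, an "agent" here would be a configuration (prompt, role, tool set), `mutate` would perturb that configuration, and `fitness` would score task performance; selection then keeps the best-performing variants.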
no code implementations • 17 Jun 2024 • Zepeng Ding, Ruiyang Ke, Wenhao Huang, Guochao Jiang, Yanda Li, Deqing Yang, Jiaqing Liang
Existing research on large language models (LLMs) shows that they can solve information extraction tasks through multi-step planning.
1 code implementation • 17 Jun 2024 • Siyu Yuan, Cheng Jiayang, Lin Qiu, Deqing Yang
Analogical reasoning plays a critical role in human cognition, enabling us to understand new concepts by associating them with familiar ones.
no code implementations • 7 Jun 2024 • Ruihan Yang, Jiangjie Chen, Yikai Zhang, Siyu Yuan, Aili Chen, Kyle Richardson, Yanghua Xiao, Deqing Yang
Language agents powered by large language models (LLMs) are increasingly valuable as decision-making tools in domains such as gaming and programming.
no code implementations • 27 May 2024 • Dixuan Wang, Yanda Li, Junyuan Jiang, Zepeng Ding, Ziqin Luo, Guochao Jiang, Jiaqing Liang, Deqing Yang
Our empirical results reveal that our ADT is highly effective at challenging the tokenization of leading LLMs, including GPT-4o, Llama-3, Deepseek-R1, and others, thereby degrading these LLMs' capabilities.
no code implementations • 26 May 2024 • Ziqin Luo, Haixia Han, Haokun Zhao, Guochao Jiang, Chengyu Du, Tingyun Li, Jiaqing Liang, Deqing Yang, Yanghua Xiao
Existing Large Language Models (LLMs) generate text through unidirectional autoregressive decoding methods to respond to various user queries.
1 code implementation • 8 May 2024 • Guochao Jiang, Zepeng Ding, Yuchen Shi, Deqing Yang
To obtain optimal point entities for prompting LLMs, we also propose a point entity selection method based on K-Means clustering.
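A K-Means-based selection of representative "point" entities can be sketched as follows (a minimal pure-NumPy version under the assumption that each entity has a vector embedding; the function name and interface are illustrative, not the paper's):

```python
import numpy as np

def select_point_entities(embeddings, k, iters=20, seed=0):
    """Cluster entity embeddings with Lloyd's K-Means and return, for
    each cluster, the index of the entity closest to the centroid."""
    rng = np.random.default_rng(seed)
    X = np.asarray(embeddings, dtype=float)
    # initialize centroids from k distinct entities
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each entity to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its members
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    # the 'point entity' of each cluster is the member nearest its centroid
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return [int(dists[:, c].argmin()) for c in range(k)]
```

The selected indices could then be used to pull one representative entity per cluster into the prompt, covering the embedding space with few examples.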
1 code implementation • 21 Apr 2024 • Zhijun Xu, Siyu Yuan, Lingjie Chen, Deqing Yang
Puns, with their distinct structure and clear definition, play a vital role in academic research, as they aid the comprehensive analysis of linguistic humor.
1 code implementation • 19 Apr 2024 • Xinfeng Yuan, Siyu Yuan, Yuhan Cui, Tianhe Lin, Xintao Wang, Rui Xu, Jiangjie Chen, Deqing Yang
The prerequisite for these RPAs lies in the capability of LLMs to understand characters from fictional works.
no code implementations • 15 Apr 2024 • Zepeng Ding, Wenhao Huang, Jiaqing Liang, Deqing Yang, Yanghua Xiao
The framework includes an evaluation model that can extract related entity pairs with high precision.
1 code implementation • 15 Apr 2024 • Yuchen Shi, Deqing Yang, Jingping Liu, Yanghua Xiao, ZongYu Wang, Huimin Xu
To achieve NTE, we devise a novel Syntax&Semantic-Enhanced Negation Extraction model, namely SSENE, which is built on a generative pre-trained language model (PLM) of encoder-decoder architecture with a multi-task learning framework.
1 code implementation • 14 Apr 2024 • Guochao Jiang, Ziqin Luo, Yuchen Shi, Dixuan Wang, Jiaqing Liang, Deqing Yang
In recent years, fine-tuned generative models have proven more powerful than previous tagging-based or span-based models on the named entity recognition (NER) task.
no code implementations • 4 Apr 2024 • Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, Deqing Yang
Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks including logical reasoning.
1 code implementation • 20 Jan 2024 • Zhen Chen, Jingping Liu, Deqing Yang, Yanghua Xiao, Huimin Xu, ZongYu Wang, Rui Xie, Yunsen Xian
Open information extraction (OpenIE) aims to extract schema-free triplets in the form of (subject, predicate, object) from a given sentence.
1 code implementation • 11 Jan 2024 • Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Yongliang Shen, Ren Kan, Dongsheng Li, Deqing Yang
EasyTool distills essential information from the extensive tool documentation of different sources, and elaborates a unified interface (i.e., tool instruction) to offer standardized tool descriptions and functionalities for LLM-based agents.
no code implementations • 16 Jun 2023 • Jingsong Yang, Guanzhou Han, Deqing Yang, Jingping Liu, Yanghua Xiao, Xiang Xu, Baohua Wu, Shenghua Ni
In this paper, we propose a novel Multi-Modal Model for POI Tagging, namely M3PT, which achieves enhanced POI tagging by fusing the target POI's textual and visual features and precisely matching the multi-modal representations.
1 code implementation • 22 May 2023 • Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang
The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures.
1 code implementation • 10 May 2023 • Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, Deqing Yang
Analogical reasoning is a fundamental cognitive ability of humans.
1 code implementation • 9 May 2023 • Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, Deqing Yang
In everyday life, humans often plan their actions by following step-by-step instructions in the form of goal-oriented scripts.
1 code implementation • 3 May 2023 • Siyu Yuan, Deqing Yang, Jinxi Liu, Shuyu Tian, Jiaqing Liang, Yanghua Xiao, Rui Xie
The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts.
1 code implementation • 25 Nov 2022 • Shuoyao Zhai, Baichuan Liu, Deqing Yang, Yanghua Xiao
Furthermore, we propose two auxiliary losses corresponding to the two sub-tasks, to refine the representation learning in our model.
no code implementations • COLING 2022 • Chengwei Hu, Deqing Yang, Haoliang Jin, Zhen Chen, Yanghua Xiao
Continual relation extraction (CRE) aims to extract relations towards the continuous and iterative arrival of new data, of which the major challenge is the catastrophic forgetting of old tasks.
1 code implementation • 6 Oct 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Zhixu Li, Jinxi Liu, Jingyue Huang, Yanghua Xiao
To overcome these drawbacks, we propose a novel generative entity typing (GET) paradigm: given a text with an entity mention, the multiple types for the role that the entity plays in the text are generated with a pre-trained language model (PLM).
1 code implementation • 30 Aug 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Jilun Sun, Jingyue Huang, Kaiyan Cao, Yanghua Xiao, Rui Xie
In order to supply existing KGs with more fine-grained and new concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities.
1 code implementation • 27 Jul 2022 • Lyuxin Xue, Deqing Yang, Yanghua Xiao
Most sequential recommendation (SR) systems employing graph neural networks (GNNs) only model a user's interaction sequence as a flat graph without hierarchy, overlooking diverse factors in the user's preference.
1 code implementation • 27 Jul 2022 • Jingjie Yi, Deqing Yang, Siyu Yuan, Caiyan Cao, Zhiyao Zhang, Yanghua Xiao
The newly proposed ERC models have leveraged pre-trained language models (PLMs) with the paradigm of pre-training and fine-tuning to obtain good performance.
1 code implementation • NAACL 2022 • Jiaxin Yu, Deqing Yang, Shuyu Tian
Compared with traditional sentence-level relation extraction, document-level relation extraction is a more challenging task where an entity in a document may be mentioned multiple times and associated with multiple relations.
Ranked #37 on Relation Extraction on DocRED
1 code implementation • 7 Apr 2022 • Baichuan Liu, Deqing Yang, Yueyi Wang, Yuchen Shi
However, the user dependencies in a cascade sequence captured by sequential models are generally unidirectional and inconsistent with diffusion trees.
1 code implementation • ACL 2021 • Li Cui, Deqing Yang, Jiaxin Yu, Chengwei Hu, Jiayang Cheng, Jingjie Yi, Yanghua Xiao
As a typical task of continual learning, continual relation extraction (CRE) aims to extract relations between entities from texts, where the samples of different relations are delivered into the model continuously.
no code implementations • 7 Apr 2021 • Jiayang Cheng, Haiyun Jiang, Deqing Yang, Yanghua Xiao
However, few works have focused on how to validate and correct the results generated by the existing relation extraction models.
no code implementations • 9 Dec 2020 • Haiyun Jiang, Qiaoben Bao, Qiao Cheng, Deqing Yang, Li Wang, Yanghua Xiao
In recent years, many complex relation extraction tasks, i.e., variants of simple binary relation extraction, have been proposed to meet the complex applications in practice.
1 code implementation • 19 Jun 2020 • Junyang Jiang, Deqing Yang, Yanghua Xiao, Chenlu Shen
Most existing embedding-based recommendation models use embeddings (vectors) corresponding to a single fixed point in a low-dimensional space to represent users and items.
no code implementations • 18 Jun 2020 • Deqing Yang, Zengcun Song, Lvxin Xue, Yanghua Xiao
Deep neural networks (DNNs) have been widely employed in recommender systems, including the incorporation of attention mechanisms for performance improvement.
1 code implementation • 12 Jun 2020 • Wenjing Meng, Deqing Yang, Yanghua Xiao
These insights motivate us to propose a novel SR model MKM-SR in this paper, which incorporates user Micro-behaviors and item Knowledge into Multi-task learning for Session-based Recommendation.
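A multi-task objective of this kind is typically a weighted sum of the recommendation loss and an auxiliary knowledge-embedding loss. The following is a minimal sketch of that pattern (illustrative only, using a TransE-style margin loss for the knowledge side; it is not MKM-SR's actual objective):

```python
import numpy as np

def multi_task_loss(rec_probs, target_idx, kg_pos_dist, kg_neg_dist,
                    lam=0.1, margin=1.0):
    """Joint objective sketch: cross-entropy loss for next-item
    recommendation plus a weighted margin loss on KG triples,
    where smaller triple distance means a more plausible triple."""
    rec_loss = -np.log(rec_probs[target_idx])
    kg_loss = np.maximum(0.0, margin + kg_pos_dist - kg_neg_dist).mean()
    return rec_loss + lam * kg_loss
```

The weight `lam` trades off how strongly the knowledge-embedding task regularizes the item representations shared with the recommendation task.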