1 code implementation • COLING 2022 • Bo Xu, Shizhou Huang, Ming Du, Hongya Wang, Hui Song, Chaofeng Sha, Yanghua Xiao
In this paper, we argue that different social media posts should consider different modalities for multimodal information extraction.
no code implementations • COLING 2022 • Xuantao Lu, Jingping Liu, Zhouhong Gu, Hanwen Tong, Chenhao Xie, Junyang Huang, Yanghua Xiao, Wenguang Wang
In this paper, we propose a scoring model to automatically learn a model-based reward, and an effective training strategy based on curriculum learning is further proposed to stabilize the training process.
no code implementations • 5 Mar 2025 • Lida Chen, Dong Xu, Chenxin An, Xintao Wang, Yikai Zhang, Jiangjie Chen, Zujie Liang, Feng Wei, Jiaqing Liang, Yanghua Xiao, Wei Wang
Large Language Models (LLMs) face efficiency bottlenecks due to the quadratic complexity of the attention mechanism when processing long contexts.
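A worked sketch of where that bottleneck comes from: standard attention materializes an n × n score matrix, so cost grows quadratically with context length (illustrative NumPy only, not this paper's method):

```python
import numpy as np

def attention(Q, K, V):
    """Standard scaled dot-product attention.

    For n tokens with head dimension d, the score matrix S is n x n,
    so time and memory both grow quadratically with context length n.
    """
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                      # (n, n) scores: the O(n^2) bottleneck
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)            # row-wise softmax
    return P @ V                                  # (n, d) outputs

n, d = 4096, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = attention(Q, K, V)                          # allocates a 4096 x 4096 score matrix
```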
1 code implementation • 4 Mar 2025 • Caiyu Hu, Yikai Zhang, Tinghui Zhu, Yiwei Ye, Yanghua Xiao
To address this gap, we introduce MCiteBench, the first benchmark designed to evaluate and analyze the multimodal citation text generation ability of MLLMs.
no code implementations • 27 Feb 2025 • Qianxi He, Qianyu He, Jiaqing Liang, Yanghua Xiao, Weikang Zhou, Zeye Sun, Fei Yu
To address this issue, we introduce an order-centric data augmentation framework based on commutativity in logical reasoning.
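The underlying idea can be illustrated with a minimal sketch: because conjunction of premises is commutative, reordering independent premises preserves the conclusion, so each example can be expanded into several order-variants (hypothetical field names; a simplification of the proposed framework):

```python
import itertools
import random

def augment_by_premise_order(example, k=3):
    """Expand one example into up to k premise-order variants.

    Relies on commutativity: reordering independent premises does not
    change the entailed conclusion, so every permutation is a valid
    new training instance.
    """
    perms = list(itertools.permutations(example["premises"]))
    random.shuffle(perms)
    return [{"premises": list(p), "conclusion": example["conclusion"]}
            for p in perms[:k]]

example = {
    "premises": ["All birds can fly.", "Tweety is a bird."],
    "conclusion": "Tweety can fly.",
}
for variant in augment_by_premise_order(example):
    print(variant)
```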
1 code implementation • 26 Feb 2025 • Jiayi Fu, Xuandong Zhao, Chengyuan Yao, Heng Wang, Qi Han, Yanghua Xiao
Reinforcement Learning from Human Feedback (RLHF) is essential for aligning large language models (LLMs) with human values.
1 code implementation • 24 Feb 2025 • Jie Zeng, Qianyu He, Qingyu Ren, Jiaqing Liang, Yanghua Xiao, Weikang Zhou, Zeye Sun, Fei Yu
Real-world instructions with multiple constraints pose a significant challenge to existing large language models (LLMs).
1 code implementation • 16 Feb 2025 • Aili Chen, Chengyu Du, Jiangjie Chen, Jinghan Xu, Yikai Zhang, Siyu Yuan, Zulong Chen, Liangyue Li, Yanghua Xiao
To advance personalized applications such as recommendation systems and user behavior prediction, recent research increasingly adopts large language models (LLMs) for human-readable persona modeling.
1 code implementation • 13 Feb 2025 • Xintao Wang, Heng Wang, Yifei Zhang, Xinfeng Yuan, Rui Xu, Jen-tse Huang, Siyu Yuan, Haoran Guo, Jiangjie Chen, Wei Wang, Yanghua Xiao, Shuchang Zhou
It provides authentic dialogues with real-world intricacies, as well as diverse data types such as conversation setups, character experiences and internal thoughts.
1 code implementation • 19 Jan 2025 • Lipeng Ma, Weidong Yang, Yixuan Li, Ben Fei, Mingjie Zhou, Shuhao Li, Sihang Jiang, Bo Xu, Yanghua Xiao
Specifically, to efficiently query the LLM, we propose an adaptive selection strategy based on the uncertainty estimation of the SLM, where the LLM is invoked only when the SLM is uncertain.
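A minimal sketch of such uncertainty-gated routing, assuming the SLM exposes per-token probability distributions; the entropy threshold and model interfaces here are illustrative, not the paper's exact design:

```python
import math

def token_entropy(dist):
    """Shannon entropy of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def route(query, slm, llm, threshold=1.0):
    """Answer with the small model unless it looks uncertain.

    slm(query) -> (answer, list of per-token distributions);
    llm(query) -> answer. The LLM is invoked only when the SLM's
    mean token entropy exceeds the threshold.
    """
    answer, dists = slm(query)
    uncertainty = sum(token_entropy(d) for d in dists) / len(dists)
    return llm(query) if uncertainty > threshold else answer

# Hypothetical stand-ins for the two models.
def tiny_slm(query):
    return "disk full on /dev/sda1", [[0.9, 0.05, 0.05], [0.6, 0.3, 0.1]]

def big_llm(query):
    return "a longer expert analysis of the log entry"

print(route("Interpret: 'No space left on device'", tiny_slm, big_llm))
```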
no code implementations • 13 Jan 2025 • Haokun Zhao, Jinyi Han, Jiaqing Liang, Yanghua Xiao
Large Language Models (LLMs) have demonstrated outstanding capabilities across various domains, but the increasing complexity of new challenges demands enhanced performance and adaptability.
1 code implementation • 9 Jan 2025 • Qingyu Ren, Jie Zeng, Qianyu He, Jiaqing Liang, Yanghua Xiao, Weikang Zhou, Zeye Sun, Fei Yu
It is crucial for large language models (LLMs) to follow instructions that involve multiple constraints.
1 code implementation • 17 Dec 2024 • Nianqi Li, Zujie Liang, Siyu Yuan, Jiaqing Liang, Feng Wei, Yanghua Xiao
Since different programming languages excel in different areas, it is natural to use the most suitable language for solving specific problems.
no code implementations • 11 Nov 2024 • Xingzhi Guo, Silong Wang, Baojian Zhou, Yanghua Xiao, Steven Skiena
However, most PPR-based GNNs are designed for static graphs, and efficient PPR maintenance remains an open problem.
1 code implementation • 6 Nov 2024 • Jin Xiao, Bowei Zhang, Qianyu He, Jiaqing Liang, Feng Wei, Jinglei Chen, Zujie Liang, Deqing Yang, Yanghua Xiao
To improve the LLMs' quotation generation abilities, we construct a bilingual knowledge base that is broad in scope and rich in dimensions, containing up to 32,022 quotes.
1 code implementation • 29 Oct 2024 • Jiahe Bai, Baojian Zhou, Deqing Yang, Yanghua Xiao
Standard iterative methods require accessing the whole graph per iteration, making them time-consuming for large-scale graphs.
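To make that cost concrete, the standard power iteration for personalized PageRank multiplies by the full transition matrix every step, i.e., touches every edge per iteration (a generic NumPy sketch, not this paper's algorithm):

```python
import numpy as np

def ppr_power_iteration(A, s, alpha=0.15, iters=50):
    """Personalized PageRank via power iteration.

    A: adjacency matrix (n x n); s: seed distribution. Every iteration
    multiplies by the full transition matrix, i.e., accesses the whole
    graph -- the per-iteration cost local methods are designed to avoid.
    """
    P = A / A.sum(axis=0, keepdims=True)       # column-stochastic transitions
    x = s.copy()
    for _ in range(iters):
        x = alpha * s + (1 - alpha) * (P @ x)  # whole-graph access per step
    return x

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
s = np.array([1.0, 0.0, 0.0, 0.0])
print(ppr_power_iteration(A, s).round(3))
```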
1 code implementation • 19 Oct 2024 • Baojian Zhou, Yifan Sun, Reza Babanezhad Harikandeh, Xingzhi Guo, Deqing Yang, Yanghua Xiao
We propose to use the locally evolving set process, a novel framework to characterize algorithm locality, and demonstrate that many standard solvers can be effectively localized.
no code implementations • 17 Oct 2024 • Chengyu Du, Jinyi Han, Yizhou Ying, Aili Chen, Qianyu He, Haokun Zhao, Sirui Xia, Haoran Guo, Jiaqing Liang, Zulong Chen, Liangyue Li, Yanghua Xiao
To address these limitations, we propose Progressive Thought Refinement (PTR), a framework that enables LLMs to refine their responses progressively.
1 code implementation • 16 Oct 2024 • Jian Xie, Kexun Zhang, Jiangjie Chen, Siyu Yuan, Kai Zhang, Yikai Zhang, Lei LI, Yanghua Xiao
Although existing studies have highlighted weak performance in agent planning, the deeper underlying issues and the mechanisms and limitations of the strategies proposed to address them remain insufficiently understood.
1 code implementation • 14 Oct 2024 • Xiangru Zhu, Penglei Sun, Yaoxian Song, Yanghua Xiao, Zhixu Li, Chengyu Wang, Jun Huang, Bei Yang, Xiaoxiao Xu
To address these deficiencies, we propose a novel metric called SemVarEffect and a benchmark named SemVarBench, designed to evaluate the causality between semantic variations in inputs and outputs in T2I synthesis.
no code implementations • 9 Oct 2024 • Wei Shi, Shuang Li, Kerun Yu, Jinglei Chen, Zujie Liang, Xinhui Wu, Yuxi Qian, Feng Wei, Bo Zheng, Jiaqing Liang, Jiangjie Chen, Yanghua Xiao
There is a growing interest in expanding the input capacity of language models (LMs) across various domains.
no code implementations • 23 Sep 2024 • Yuyan Chen, Tianhao Yu, Yueze Li, Songzhou Yan, Sijia Liu, Jiaqing Liang, Yanghua Xiao
Therefore, in this paper, we introduce a novel game named BrainKing, based on "Who is undercover" and "Twenty Questions", for evaluating LLM capabilities under incomplete information scenarios.
1 code implementation • 23 Sep 2024 • Nianqi Li, Siyu Yuan, Jiangjie Chen, Jiaqing Liang, Feng Wei, Zujie Liang, Deqing Yang, Yanghua Xiao
Historical analogy, which compares known past events with contemporary but unfamiliar events, is an important ability that helps people make decisions and understand the world.
no code implementations • 23 Sep 2024 • Yuyan Chen, Yiwen Qian, Songzhou Yan, Jiyuan Jia, Zhixu Li, Yanghua Xiao, Xiaobo Li, Ming Yang, Qingpei Guo
In the era of social media video platforms, popular "hot-comments" play a crucial role in attracting user impressions of short-form videos, making them vital for marketing and branding purposes.
no code implementations • 20 Sep 2024 • Yuyan Chen, Yanghua Xiao
Emotion cognition in large language models (LLMs) is crucial for enhancing performance across various applications, such as social media, human-computer interaction, and mental health assessment.
no code implementations • 20 Sep 2024 • Yuyan Chen, Hao Wang, Songzhou Yan, Sijia Liu, Yueze Li, Yi Zhao, Yanghua Xiao
The framework includes four distinctive tasks: Key Event Recognition, Mixed Event Recognition, Implicit Emotional Recognition, and Intention Recognition.
no code implementations • 12 Sep 2024 • Aili Chen, Xuyang Ge, Ziquan Fu, Yanghua Xiao, Jiangjie Chen
As global tourism expands and artificial intelligence technology advances, intelligent travel planning services have emerged as a significant research focus.
1 code implementation • 3 Sep 2024 • Lipeng Ma, Weidong Yang, Sihang Jiang, Ben Fei, Mingjie Zhou, Shuhao Li, Mingyu Zhao, Bo Xu, Yanghua Xiao
To address the lack of expert knowledge and enhance log understanding for smaller PLMs, this paper introduces a novel and practical knowledge enhancement framework, called LUK, which acquires expert knowledge from LLMs automatically and then enhances the smaller PLM for log analysis with this expert knowledge.
no code implementations • 3 Sep 2024 • Yifeng Wang, Zhouhong Gu, Siwei Zhang, SuHang Zheng, Tao Wang, Tianyu Li, Hongwei Feng, Yanghua Xiao
Explainable fake news detection predicts the authenticity of news items with annotated explanations.
no code implementations • 20 Aug 2024 • Yuyan Chen, Chenwei Wu, Songzhou Yan, Panjun Liu, Haoyu Zhou, Yanghua Xiao
Therefore, our research introduces a benchmark to evaluate the questioning capability of LLMs as teachers in education, by assessing their generated educational questions using Anderson and Krathwohl's taxonomy across general, monodisciplinary, and interdisciplinary domains.
no code implementations • 24 Jul 2024 • Yuyan Chen, Songzhou Yan, Zhihong Zhu, Zhixu Li, Yanghua Xiao
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
no code implementations • 4 Jul 2024 • Yuyan Chen, Zhixu Li, Jiaqing Liang, Yanghua Xiao, Bang Liu, Yunwen Chen
Humor understanding is an important and challenging research topic in natural language processing.
no code implementations • 4 Jul 2024 • Yuyan Chen, Zhihao Wen, Ge Fan, Zhengyu Chen, Wei Wu, Dayiheng Liu, Zhixu Li, Bang Liu, Yanghua Xiao
Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLMs), has drawn a lot of attention from the research community.
no code implementations • 4 Jul 2024 • Yuyan Chen, Qiang Fu, Yichen Yuan, Zhihao Wen, Ge Fan, Dayiheng Liu, Dongmei Zhang, Zhixu Li, Yanghua Xiao
Large Language Models (LLMs) have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems.
no code implementations • 1 Jul 2024 • Sirui Xia, Xintao Wang, Jiaqing Liang, Yifei Zhang, Weikang Zhou, Jiaji Deng, Fei Yu, Yanghua Xiao
Retrieval-Augmented Generation (RAG) has been widely adopted to enhance Large Language Models (LLMs) in knowledge-intensive tasks.
no code implementations • 30 Jun 2024 • Yifei Zhang, Xintao Wang, Jiaqing Liang, Sirui Xia, Lida Chen, Yanghua Xiao
For dataset construction, we create KnowReason via rule mining on KGs.
1 code implementation • 27 Jun 2024 • Yiting Ran, Xintao Wang, Rui Xu, Xinfeng Yuan, Jiaqing Liang, Deqing Yang, Yanghua Xiao
Role-playing agents (RPAs) have been a popular application area for large language models (LLMs), attracting significant interest from both industry and academia. While existing RPAs portray characters' knowledge and tones well, they face challenges in capturing their minds, especially for small role-playing language models (RPLMs).
3 code implementations • 21 Jun 2024 • Haiquan Zhao, Lingyu Li, Shisong Chen, Shuqi Kong, Jiaan Wang, Kexin Huang, Tianle Gu, Yixu Wang, Wang Jian, Dandan Liang, Zhixu Li, Yan Teng, Yanghua Xiao, Yingchun Wang
Inspired by the rapid development of role-playing agents, we propose an ESC Evaluation framework (ESC-Eval), which uses a role-playing agent to interact with ESC models, followed by a manual evaluation of the interactive dialogues.
1 code implementation • 18 Jun 2024 • Zhouhong Gu, Lin Zhang, Xiaoxuan Zhu, Jiangjie Chen, Wenhao Huang, Yikai Zhang, Shusen Wang, Zheyu Ye, Yan Gao, Hongwei Feng, Yanghua Xiao
This paper proposes a benchmark called DetectBench for verifying the ability to detect and piece together implicit evidence within a long context.
no code implementations • 16 Jun 2024 • Lida Chen, Zujie Liang, Xintao Wang, Jiaqing Liang, Yanghua Xiao, Feng Wei, Jinglei Chen, Zhenghong Hao, Bing Han, Wei Wang
Large language models (LLMs) have achieved great success, but their occasional content fabrication, or hallucination, limits their practical application.
1 code implementation • 16 Jun 2024 • Yikai Zhang, Qianyu He, Xintao Wang, Siyu Yuan, Jiaqing Liang, Yanghua Xiao
Specifically, we introduce COG, a two-stage framework with COncept-Guided vision-language models.
no code implementations • 15 Jun 2024 • Xiaoxuan Zhu, Zhouhong Gu, Sihang Jiang, Zhixu Li, Hongwei Feng, Yanghua Xiao
Online courses have significantly lowered the barrier to accessing education, yet the varying content quality of these videos poses challenges.
1 code implementation • 15 Jun 2024 • Zhouhong Gu, Haoning Ye, Xingzhou Chen, Zeyang Zhou, Hongwei Feng, Yanghua Xiao
The effective utilization of structured data, integral to corporate data strategies, has been challenged by the rise of large language models (LLMs) capable of processing unstructured information.
no code implementations • 7 Jun 2024 • Ruihan Yang, Jiangjie Chen, Yikai Zhang, Siyu Yuan, Aili Chen, Kyle Richardson, Yanghua Xiao, Deqing Yang
Language agents powered by large language models (LLMs) are increasingly valuable as decision-making tools in domains such as gaming and programming.
no code implementations • 26 May 2024 • Ziqin Luo, Haixia Han, Haokun Zhao, Guochao Jiang, Chengyu Du, Tingyun Li, Jiaqing Liang, Deqing Yang, Yanghua Xiao
Existing Large Language Models (LLMs) generate text through unidirectional autoregressive decoding methods to respond to various user queries.
no code implementations • 28 Apr 2024 • Jiangjie Chen, Xintao Wang, Rui Xu, Siyu Yuan, Yikai Zhang, Wei Shi, Jian Xie, Shuang Li, Ruihan Yang, Tinghui Zhu, Aili Chen, Nianqi Li, Lida Chen, Caiyu Hu, Siye Wu, Scott Ren, Ziquan Fu, Yanghua Xiao
Through this work, we aim to establish a clear taxonomy of RPLA research and applications, facilitate future research in this critical and ever-evolving field, and pave the way for a future where humans and RPLAs coexist in harmony.
1 code implementation • 24 Apr 2024 • Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, Yanghua Xiao
It is imperative for large language models (LLMs) to follow instructions with elaborate requirements (i.e., complex instruction following).
2 code implementations • 19 Apr 2024 • Wenhao Huang, Zhouhong Gu, Chenghao Peng, Zhixu Li, Jiaqing Liang, Yanghua Xiao, Liqian Wen, Zulong Chen
In this work, we introduce the paradigm of generating web scrapers with LLMs and propose AutoScraper, a two-stage framework that can handle diverse and changing web environments more efficiently.
no code implementations • 18 Apr 2024 • Rui Xu, Xintao Wang, Jiangjie Chen, Siyu Yuan, Xinfeng Yuan, Jiaqing Liang, Zulong Chen, Xiaoqing Dong, Yanghua Xiao
Can Large Language Models (LLMs) simulate humans in making important decisions?
no code implementations • 16 Apr 2024 • Haixia Han, Tingyun Li, Shisong Chen, Jie Shi, Chengyu Du, Yanghua Xiao, Jiaqing Liang, Xin Lin
Specifically, we first identify three key problems: (1) How to capture the inherent confidence of the LLM?
1 code implementation • 15 Apr 2024 • Yuchen Shi, Deqing Yang, Jingping Liu, Yanghua Xiao, ZongYu Wang, Huimin Xu
To achieve NTE, we devise a novel Syntax&Semantic-Enhanced Negation Extraction model, namely SSENE, which is built on a generative pre-trained language model (PLM) of encoder-decoder architecture with a multi-task learning framework.
no code implementations • 15 Apr 2024 • Zepeng Ding, Wenhao Huang, Jiaqing Liang, Deqing Yang, Yanghua Xiao
The framework includes an evaluation model that can extract related entity pairs with high precision.
no code implementations • 11 Apr 2024 • Haokun Zhao, Haixia Han, Jie Shi, Chengyu Du, Jiaqing Liang, Yanghua Xiao
As world knowledge advances and new task schemas emerge, Continual Learning (CL) becomes essential for keeping Large Language Models (LLMs) current and addressing their shortcomings.
no code implementations • 9 Apr 2024 • Xintao Wang, Jiangjie Chen, Nianqi Li, Lida Chen, Xinfeng Yuan, Wei Shi, Xuyang Ge, Rui Xu, Yanghua Xiao
In rapidly advancing research fields such as AI, managing and staying abreast of the latest scientific literature has become a significant challenge for researchers.
1 code implementation • 4 Apr 2024 • Siye Wu, Jian Xie, Jiangjie Chen, Tinghui Zhu, Kai Zhang, Yanghua Xiao
By leveraging the retrieval of information from external knowledge databases, Large Language Models (LLMs) exhibit enhanced capabilities for accomplishing many knowledge-intensive tasks.
no code implementations • 4 Apr 2024 • Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, Deqing Yang
Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks including logical reasoning.
1 code implementation • 25 Mar 2024 • Wenhao Huang, Qianyu He, Zhixu Li, Jiaqing Liang, Yanghua Xiao
Definition bias is a negative phenomenon that can mislead models.
1 code implementation • 20 Mar 2024 • Zhouhong Gu, Xiaoxuan Zhu, Haoran Guo, Lin Zhang, Yin Cai, Hao Shen, Jiangjie Chen, Zheyu Ye, Yifei Dai, Yan Gao, Yao Hu, Hongwei Feng, Yanghua Xiao
Language significantly influences the formation and evolution of human emergent behavior, which is crucial in understanding collective intelligence within human societies.
no code implementations • 14 Mar 2024 • Yuncheng Huang, Qianyu He, Yipei Xu, Jiaqing Liang, Yanghua Xiao
In our experiments, we find that atomic skills can not spontaneously generalize to compositional tasks.
no code implementations • 12 Mar 2024 • Jianchen Wang, Zhouhong Gu, Xiaoxuan Zhu, Lin Zhang, Haoning Ye, Zhuozhi Xiong, Hongwei Feng, Yanghua Xiao
Large Language Models have revolutionized numerous tasks with their remarkable efficacy.
no code implementations • 3 Mar 2024 • Haiquan Zhao, Xuwu Wang, Shisong Chen, Zhixu Li, Xin Zheng, Yanghua Xiao
In this paper, we propose a task called Online Video Entity Linking (OVEL), which aims to establish connections between mentions in online videos and a knowledge base with high accuracy and timeliness.
1 code implementation • 20 Feb 2024 • Jiayi Fu, Xuandong Zhao, Ruihan Yang, Yuansen Zhang, Jiangjie Chen, Yanghua Xiao
Large language models (LLMs) excel at generating human-like text, but they also raise concerns about misuse in fake news and academic dishonesty.
no code implementations • 8 Feb 2024 • Yikai Zhang, Siyu Yuan, Caiyu Hu, Kyle Richardson, Yanghua Xiao, Jiangjie Chen
Despite remarkable advancements in emulating human-like behavior through Large Language Models (LLMs), current textual simulations do not adequately address the notion of time.
2 code implementations • 2 Feb 2024 • Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, Yu Su
Are these language agents capable of planning in more complex settings that are out of the reach of prior AI agents?
1 code implementation • 20 Jan 2024 • Zhen Chen, Jingping Liu, Deqing Yang, Yanghua Xiao, Huimin Xu, ZongYu Wang, Rui Xie, Yunsen Xian
Open information extraction (OpenIE) aims to extract schema-free triplets in the form of (subject, predicate, object) from a given sentence.
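A small worked example of the schema-free format, with illustrative triplets (not drawn from the paper's data):

```python
sentence = "Marie Curie won the Nobel Prize in Chemistry in 1911."

# Schema-free OpenIE output: the predicate is lifted directly from the
# sentence rather than chosen from a predefined relation inventory.
triplets = [
    ("Marie Curie", "won", "the Nobel Prize in Chemistry"),
    ("Marie Curie", "won the Nobel Prize in Chemistry in", "1911"),
]
for subj, pred, obj in triplets:
    print(f"({subj}; {pred}; {obj})")
```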
no code implementations • 14 Jan 2024 • Haixia Han, Jiaqing Liang, Jie Shi, Qianyu He, Yanghua Xiao
In this paper, we introduce Intrinsic Self-Correction (ISC) in generative language models, aiming to correct the initial output of LMs in a self-triggered manner, even for those small LMs with 6 billion parameters.
no code implementations • 11 Jan 2024 • Xintao Wang, Zhouhong Gu, Jiaqing Liang, Dakuan Lu, Yanghua Xiao, Wei Wang
In this paper, we propose ConcEPT, which stands for Concept-Enhanced Pre-Training for language models, to infuse conceptual knowledge into PLMs.
no code implementations • 29 Dec 2023 • Yuncheng Huang, Qianyu He, Jiaqing Liang, Sihang Jiang, Yanghua Xiao, Yunwen Chen
Hence, we present a framework to enhance the quantitative reasoning ability of language models based on dimension perception.
1 code implementation • 16 Dec 2023 • Zhiwei Zha, Jiaan Wang, Zhixu Li, Xiangru Zhu, Wei Song, Yanghua Xiao
Comprising 951K images and 152K concepts, M^2ConceptBase links each concept to an average of 6.27 images and a single description, ensuring comprehensive visual and textual semantics.
1 code implementation • 4 Dec 2023 • Xiangru Zhu, Penglei Sun, Chengyu Wang, Jingping Liu, Zhixu Li, Yanghua Xiao, Jun Huang
We use Winoground-T2I with a dual objective: to evaluate the performance of T2I models and the metrics used for their evaluation.
no code implementations • 16 Nov 2023 • Yipei Xu, Dakuan Lu, Jiaqing Liang, Xintao Wang, Yipeng Geng, Yingsi Xin, Hengkui Wu, Ken Chen, Ruiji Zhang, Yanghua Xiao
Pre-trained language models (PLMs) have established the new paradigm in the field of NLP.
2 code implementations • 27 Oct 2023 • Xintao Wang, Yunze Xiao, Jen-tse Huang, Siyu Yuan, Rui Xu, Haoran Guo, Quan Tu, Yaying Fei, Ziang Leng, Wei Wang, Jiangjie Chen, Cheng Li, Yanghua Xiao
Then, with InCharacter, we show that state-of-the-art RPAs exhibit personalities highly aligned with the human-perceived personalities of the characters, achieving an accuracy of up to 80.7%.
2 code implementations • 17 Sep 2023 • Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao
To bridge this gap, we propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically.
1 code implementation • 12 Sep 2023 • Tinghui Zhu, Jingping Liu, Jiaqing Liang, Haiyun Jiang, Yanghua Xiao, ZongYu Wang, Rui Xie, Yunsen Xian
Specifically, on the Chinese taxonomy dataset, our method significantly improves accuracy by 8.75%.
1 code implementation • 26 Aug 2023 • Shuang Li, Jiangjie Chen, Siyu Yuan, Xinyi Wu, Hao Yang, Shimin Tao, Yanghua Xiao
To translate well, machine translation (MT) systems and general-purpose language models (LMs) need a deep understanding of both source and target languages and cultures.
no code implementations • 17 Aug 2023 • Xintao Wang, Qianwen Yang, Yongting Qiu, Jiaqing Liang, Qianyu He, Zhouhong Gu, Yanghua Xiao, Wei Wang
Large language models (LLMs) have demonstrated impressive impact in the field of natural language processing, but they still struggle with several issues, such as completeness, timeliness, faithfulness, and adaptability.
1 code implementation • 9 Aug 2023 • Jingdan Zhang, Jiaan Wang, Xiaodan Wang, Zhixu Li, Yanghua Xiao
Multi-modal knowledge graphs (MMKGs) combine different modal data (e.g., text and image) for a comprehensive understanding of entities.
no code implementations • 11 Jul 2023 • Zhouhong Gu, Lin Zhang, Jiangjie Chen, Haoning Ye, Xiaoxuan Zhu, Zihan Li, Zheyu Ye, Yan Gao, Yao Hu, Yanghua Xiao, Hongwei Feng
We introduce DetectBench, a reading comprehension dataset designed to assess a model's ability to jointly perform key information detection and multi-hop reasoning when facing complex and implicit information.
1 code implementation • 19 Jun 2023 • Wenhao Huang, Jiaqing Liang, Zhixu Li, Yanghua Xiao, Chuanjun Ji
Information extraction (IE) has been studied extensively.
no code implementations • 16 Jun 2023 • Jingsong Yang, Guanzhou Han, Deqing Yang, Jingping Liu, Yanghua Xiao, Xiang Xu, Baohua Wu, Shenghua Ni
In this paper, we propose a novel Multi-Modal Model for POI Tagging, namely M3PT, which achieves enhanced POI tagging by fusing the target POI's textual and visual features and precisely matching the multi-modal representations.
1 code implementation • 13 Jun 2023 • Qianyu He, Yikai Zhang, Jiaqing Liang, Yuncheng Huang, Yanghua Xiao, Yunwen Chen
Similes play an imperative role in creative writing such as story and dialogue generation.
1 code implementation • 11 Jun 2023 • Jian Xie, Yidan Liang, Jingping Liu, Yanghua Xiao, Baohua Wu, Shenghua Ni
In this paper, we propose QUERT, a Continual Pre-trained Language Model for QUERy Understanding in Travel Domain Search.
2 code implementations • 9 Jun 2023 • Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Yixin Zhu, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Weijie Wu, Qianyu He, Rui Xu, Wenhao Huang, Jingping Liu, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
New Natural Language Processing (NLP) benchmarks are urgently needed to keep pace with the rapid development of large language models (LLMs).
1 code implementation • 22 May 2023 • Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang
The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures.
1 code implementation • 10 May 2023 • Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei LI, Yanghua Xiao
Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge.
1 code implementation • 10 May 2023 • Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, Deqing Yang
Analogical reasoning is a fundamental cognitive ability of humans.
1 code implementation • 9 May 2023 • Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, Deqing Yang
In everyday life, humans often plan their actions by following step-by-step instructions in the form of goal-oriented scripts.
1 code implementation • 3 May 2023 • Siyu Yuan, Deqing Yang, Jinxi Liu, Shuyu Tian, Jiaqing Liang, Yanghua Xiao, Rui Xie
The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts.
no code implementations • 23 Apr 2023 • Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Zhuozhi Xiong, Zihan Li, Qianyu He, Sihang Jiang, Hongwei Feng, Yanghua Xiao
Domain knowledge refers to the in-depth understanding, expertise, and familiarity with a specific subject, industry, field, or area of special interest.
no code implementations • 25 Mar 2023 • Zhouhong Gu, Sihang Jiang, Wenhao Huang, Jiaqing Liang, Hongwei Feng, Yanghua Xiao
The model's ability to understand synonymous expressions is crucial in many kinds of downstream tasks.
no code implementations • 25 Mar 2023 • Zhouhong Gu, Sihang Jiang, Jingping Liu, Yanghua Xiao, Hongwei Feng, Zhixu Li, Jiaqing Liang, Jian Zhong
The previous methods suffer from low efficiency, since they waste much time when most of the newly arriving concepts are in fact noisy.
2 code implementations • 18 Feb 2023 • Dakuan Lu, Hengkui Wu, Jiaqing Liang, Yipei Xu, Qianyu He, Yipeng Geng, Mengkun Han, Yingsi Xin, Yanghua Xiao
Our aim is to facilitate research in the development of NLP within the Chinese financial domain.
2 code implementations • 10 Dec 2022 • Qianyu He, Xintao Wang, Jiaqing Liang, Yanghua Xiao
The ability to understand and generate similes is an imperative step to realize human-level AI.
no code implementations • 7 Dec 2022 • Jiangjie Chen, Yanghua Xiao
The rapid development and application of natural language generation (NLG) techniques has revolutionized the field of automatic text production.
1 code implementation • 25 Nov 2022 • Shuoyao Zhai, Baichuan Liu, Deqing Yang, Yanghua Xiao
Furthermore, we propose two auxiliary losses corresponding to the two sub-tasks, to refine the representation learning in our model.
1 code implementation • 22 Nov 2022 • Jiangjie Chen, Rui Xu, Wenxuan Zeng, Changzhi Sun, Lei LI, Yanghua Xiao
Given a possibly false claim sentence, how can we automatically correct it with minimal editing?
no code implementations • COLING 2022 • Chengwei Hu, Deqing Yang, Haoliang Jin, Zhen Chen, Yanghua Xiao
Continual relation extraction (CRE) aims to extract relations from continuously and iteratively arriving new data, where the major challenge is the catastrophic forgetting of old tasks.
1 code implementation • 6 Oct 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Zhixu Li, Jinxi Liu, Jingyue Huang, Yanghua Xiao
To overcome these drawbacks, we propose a novel generative entity typing (GET) paradigm: given a text with an entity mention, the multiple types for the role that the entity plays in the text are generated with a pre-trained language model (PLM).
1 code implementation • 30 Aug 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Jilun Sun, Jingyue Huang, Kaiyan Cao, Yanghua Xiao, Rui Xie
In order to supply existing KGs with more fine-grained and new concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities.
1 code implementation • 27 Jul 2022 • Lyuxin Xue, Deqing Yang, Yanghua Xiao
Most sequential recommendation (SR) systems employing graph neural networks (GNNs) only model a user's interaction sequence as a flat graph without hierarchy, overlooking diverse factors in the user's preference.
1 code implementation • 27 Jul 2022 • Jingjie Yi, Deqing Yang, Siyu Yuan, Caiyan Cao, Zhiyao Zhang, Yanghua Xiao
The newly proposed ERC models have leveraged pre-trained language models (PLMs) with the paradigm of pre-training and fine-tuning to obtain good performance.
1 code implementation • 25 Jun 2022 • Xintao Wang, Qianyu He, Jiaqing Liang, Yanghua Xiao
In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming at both enriching representations of long-tail entities and solving problems of prior description-based methods.
Ranked #5 on Link Prediction on WN18RR
no code implementations • 17 May 2022 • Ailisi Li, Xueyao Jiang, Bang Liu, Jiaqing Liang, Yanghua Xiao
Math Word Problem (MWP) solving is an important task that requires the ability to understand and reason over mathematical text.
1 code implementation • NAACL 2022 • Chun Zeng, Jiangjie Chen, Tianyi Zhuang, Rui Xu, Hao Yang, Ying Qin, Shimin Tao, Yanghua Xiao
To this end, we propose a plug-in algorithm for this line of work, i.e., Aligned Constrained Training (ACT), which alleviates this problem by familiarizing the model with the source-side context of the constraints.
3 code implementations • ACL 2022 • Xuwu Wang, Junfeng Tian, Min Gui, Zhixu Li, Rui Wang, Ming Yan, Lihan Chen, Yanghua Xiao
In this paper, we present WikiDiverse, a high-quality human-annotated MEL dataset with diversified contextual topics and entity types from Wikinews, which uses Wikipedia as the corresponding knowledge base.
1 code implementation • 28 Mar 2022 • Sijie Cheng, Zhouhong Gu, Bang Liu, Rui Xie, Wei Wu, Yanghua Xiao
Specifically, i) to fully exploit user behavioral information, we extract candidate hyponymy relations that match user interests from query-click concepts; ii) to enhance the semantic information of new concepts and better detect hyponymy relations, we model concepts and relations through both user-generated content and structural information in existing taxonomies and user click logs, by leveraging Pre-trained Language Models and Graph Neural Network combined with Contrastive Learning; iii) to reduce the cost of dataset construction and overcome data skews, we construct a high-quality and balanced training dataset from existing taxonomy with no supervision.
no code implementations • Findings (ACL) 2022 • Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei LI, Yanghua Xiao, Hao Zhou
Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR).
1 code implementation • ACL 2022 • Qianyu He, Sijie Cheng, Zhixu Li, Rui Xie, Yanghua Xiao
In this paper, we investigate the ability of PLMs in simile interpretation by designing a novel task named Simile Property Probing, i. e., to let the PLMs infer the shared properties of similes.
no code implementations • 21 Feb 2022 • Lihan Chen, Sihang Jiang, Jingping Liu, Chao Wang, Sheng Zhang, Chenhao Xie, Jiaqing Liang, Yanghua Xiao, Rui Song
Knowledge graphs (KGs) are an important resource for a wide range of applications, and rule mining from KGs has recently attracted wide research interest in the KG-related research community.
no code implementations • 11 Feb 2022 • Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, Nicholas Jing Yuan
In this survey on MMKGs constructed by texts and images, we first give definitions of MMKGs, followed with the preliminaries on multi-modal tasks and techniques.
no code implementations • 13 Jan 2022 • Yuyan Chen, Yanghua Xiao, Bang Liu
In this research, we argue that the evidence for an answer is critical to enhancing the interpretability of QA models.
no code implementations • 7 Jan 2022 • Ailisi Li, Jiaqing Liang, Yanghua Xiao
In this paper, we propose a set of novel data augmentation approaches to supplement existing datasets with such data that are augmented with different kinds of local variances, and help to improve the generalization ability of current neural models.
1 code implementation • 10 Dec 2021 • Jiangjie Chen, Chun Gan, Sijie Cheng, Hao Zhou, Yanghua Xiao, Lei LI
We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off.
no code implementations • 27 Nov 2021 • Jianian Wang, Sheng Zhang, Yanghua Xiao, Rui Song
With multiple components and relations, financial data are often represented as graphs, since graphs can capture both individual features and complicated relations.
no code implementations • 6 Nov 2021 • Ye Liu, Rui Song, Wenbin Lu, Yanghua Xiao
A large number of models and algorithms have been proposed for link prediction, among which tensor factorization methods have proven to achieve state-of-the-art performance in terms of computational efficiency and prediction accuracy.
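As one concrete member of the tensor factorization family, DistMult scores a candidate link (h, r, t) by a trilinear product of learned embeddings; an illustrative sketch, not necessarily this paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_entities, n_relations = 32, 100, 10
E = rng.normal(size=(n_entities, d))    # entity embeddings
R = rng.normal(size=(n_relations, d))   # relation embeddings

def distmult_score(h, r, t):
    """Trilinear score sum_k E[h,k] * R[r,k] * E[t,k]:
    a higher score means the link (h, r, t) is more plausible."""
    return float(np.sum(E[h] * R[r] * E[t]))

# Rank every candidate tail entity for head 0 under relation 3.
scores = E @ (E[0] * R[3])
print(np.argsort(-scores)[:5])          # top-5 predicted tails
```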
no code implementations • 21 Oct 2021 • Sijie Cheng, Jingwen Wu, Yanghua Xiao, Yang Liu
Today data is often scattered among billions of resource-constrained edge devices with security and privacy constraints.
no code implementations • 2 Aug 2021 • Junyang Huang, Yongbo Wang, Yongliang Wang, Yang Dong, Yanghua Xiao
It first learns relation embeddings over the schema entities and question words with predefined schema relations, using ELECTRA and a relation-aware transformer layer as the backbone.
1 code implementation • ACL 2021 • Li Cui, Deqing Yang, Jiaxin Yu, Chengwei Hu, Jiayang Cheng, Jingjie Yi, Yanghua Xiao
As a typical task of continual learning, continual relation extraction (CRE) aims to extract relations between entities from texts, where the samples of different relations are delivered into the model continuously.
1 code implementation • ACL 2021 • Chenhao Xie, Jiaqing Liang, Jingping Liu, Chengsong Huang, Wenhao Huang, Yanghua Xiao
Next, we formulate relation extraction as a positive-unlabeled (PU) learning task to alleviate the false negative problem.
Ranked #1 on Relation Extraction on NYT11-HRL
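As background on the PU formulation, the widely used non-negative PU risk (Kiryo et al., 2017) trains a classifier from positive and unlabeled examples only; a minimal sketch, not necessarily the estimator used in this paper:

```python
import numpy as np

def nnpu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk (Kiryo et al., 2017) with logistic loss.

    scores_pos: classifier scores on labeled positives;
    scores_unl: scores on unlabeled data; prior: class prior pi_p.
    The max(..., 0) keeps the estimated negative risk non-negative,
    which prevents overfitting when training from PU data.
    """
    loss = lambda z: np.log1p(np.exp(-z))          # logistic loss
    r_pos = prior * loss(scores_pos).mean()        # positives as positive
    r_neg = loss(-scores_unl).mean() - prior * loss(-scores_pos).mean()
    return r_pos + max(r_neg, 0.0)

pos = np.array([2.1, 1.7, 0.9])                    # labeled positive pairs
unl = np.array([-1.2, 0.3, -0.8, 1.5])             # unlabeled pairs
print(nnpu_risk(pos, unl, prior=0.3))
```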
no code implementations • 7 Apr 2021 • Jiayang Cheng, Haiyun Jiang, Deqing Yang, Yanghua Xiao
However, few works have focused on how to validate and correct the results generated by the existing relation extraction models.
1 code implementation • 25 Dec 2020 • Jiangjie Chen, Qiaoben Bao, Changzhi Sun, Xinbo Zhang, Jiaze Chen, Hao Zhou, Yanghua Xiao, Lei LI
The final claim verification is based on all latent variables.
no code implementations • 17 Dec 2020 • Zhendong Chu, Haiyun Jiang, Yanghua Xiao, Wei Wang
We treat information sources as multiple views and fuse them to construct an intact space with sufficient information.
no code implementations • 9 Dec 2020 • Haiyun Jiang, Qiaoben Bao, Qiao Cheng, Deqing Yang, Li Wang, Yanghua Xiao
In recent years, many complex relation extraction tasks, i.e., variants of simple binary relation extraction, have been proposed to meet complex application needs in practice.
1 code implementation • EMNLP 2020 • Ye Liu, Sheng Zhang, Rui Song, Suo Feng, Yanghua Xiao
Effectively filtering out noisy articles as well as bad answers is the key to improving extraction accuracy.
1 code implementation • 19 Jun 2020 • Junyang Jiang, Deqing Yang, Yanghua Xiao, Chenlu Shen
Most existing embedding-based recommendation models represent users and items with embeddings (vectors) corresponding to single fixed points in a low-dimensional space.
no code implementations • 18 Jun 2020 • Deqing Yang, Zengcun Song, Lvxin Xue, Yanghua Xiao
Deep neural networks (DNNs) have been widely employed in recommender systems, including models that incorporate attention mechanisms for performance improvement.
1 code implementation • 12 Jun 2020 • Wenjing Meng, Deqing Yang, Yanghua Xiao
These insights motivate us to propose a novel SR model MKM-SR in this paper, which incorporates user Micro-behaviors and item Knowledge into Multi-task learning for Session-based Recommendation.
2 code implementations • 17 May 2020 • Chen Lin, Si Chen, Hui Li, Yanghua Xiao, Lianyun Li, Qian Yang
Recommendation Systems (RS) have become an essential part of many online services.
no code implementations • 6 May 2020 • Chenhao Xie, Qiao Cheng, Jiaqing Liang, Lihan Chen, Yanghua Xiao
On the contrary, traditional machine learning algorithms often rely on negative examples; otherwise the model is prone to collapsing into always-true predictions.
no code implementations • IJCNLP 2019 • Xiaofei Shi, Yanghua Xiao
We calibrate embeddings of different KGs via a small set of pre-aligned seeds.
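One standard way to realize such seed-based calibration is an orthogonal Procrustes map fitted on the pre-aligned pairs; a generic sketch under that assumption, since the paper's exact calibration may differ:

```python
import numpy as np

def fit_orthogonal_map(X_src, X_tgt):
    """Solve min_W ||X_src W - X_tgt||_F with W orthogonal (Procrustes),
    via SVD of the cross-covariance of the seed pairs."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt

rng = np.random.default_rng(0)
seeds_src = rng.normal(size=(50, 64))              # seed entities in KG1 space
W_true, _ = np.linalg.qr(rng.normal(size=(64, 64)))
seeds_tgt = seeds_src @ W_true                     # same entities in KG2 space

W = fit_orthogonal_map(seeds_src, seeds_tgt)
print(np.allclose(seeds_src @ W, seeds_tgt))       # True: spaces calibrated
```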
no code implementations • 14 Oct 2019 • Hao Cheng, Xiaoqing Yang, Zang Li, Yanghua Xiao, Yu-Cheng Lin
Deep neural networks have been widely used in text classification.
1 code implementation • 28 Aug 2019 • Yuting Ye, Xuwu Wang, Jiangchao Yao, Kunyang Jia, Jingren Zhou, Yanghua Xiao, Hongxia Yang
Low-dimensional embeddings of knowledge graphs and behavior graphs have proved remarkably powerful in varieties of tasks, from predicting unobserved edges between entities to content recommendation.
no code implementations • ACL 2019 • Jiangjie Chen, Ao Wang, Haiyun Jiang, Suo Feng, Chenguang Li, Yanghua Xiao
A type description is a succinct noun compound which helps humans and machines quickly grasp the informative and distinctive information of an entity.
no code implementations • 6 Mar 2019 • Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, Wei Wang
Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions.
no code implementations • 27 Feb 2019 • Jindong Chen, Ao Wang, Jiangjie Chen, Yanghua Xiao, Zhendong Chu, Jingping Liu, Jiaqing Liang, Wei Wang
Taxonomies play an important role in machine intelligence.
1 code implementation • 21 Feb 2019 • Jindong Chen, Yizhou Hu, Jingping Liu, Yanghua Xiao, Haiyun Jiang
For the purpose of measuring the importance of knowledge, we introduce attention mechanisms and propose deep Short Text Classification with Knowledge powered Attention (STCKA).
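The core operation can be sketched as scaled dot-product attention from the short-text representation over retrieved concept embeddings, so concepts are weighted by importance (an illustrative simplification of STCKA):

```python
import numpy as np

def concept_attention(text_vec, concept_mat):
    """Attend from a short-text vector over its concept embeddings.

    Returns an importance-weighted knowledge vector: concepts whose
    embeddings align with the text receive higher attention weight.
    """
    d = text_vec.shape[0]
    scores = concept_mat @ text_vec / np.sqrt(d)   # one score per concept
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax importance
    return weights @ concept_mat                   # weighted concept summary

rng = np.random.default_rng(0)
text_vec = rng.normal(size=64)                     # encoding of the short text
concepts = rng.normal(size=(5, 64))                # embeddings of 5 linked concepts
knowledge_vec = concept_attention(text_vec, concepts)
print(knowledge_vec.shape)                         # (64,)
```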
no code implementations • 20 Oct 2017 • Wanyun Cui, Xiyou Zhou, Hangyu Lin, Yanghua Xiao, Haixun Wang, Seung-won Hwang, Wei Wang
In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single semantic of the verb.
no code implementations • 29 Nov 2015 • Yi Zhang, Yanghua Xiao, Seung-won Hwang, Haixun Wang, X. Sean Wang, Wei Wang
This paper provides a query processing method based on the relevance models between entity sets and concepts.