1 code implementation • NAACL 2022 • Meihan Tong, Bin Xu, Shuai Wang, Meihuan Han, Yixin Cao, Jiangqi Zhu, Siyu Chen, Lei Hou, Juanzi Li
Event extraction aims to identify an event and then extract the arguments participating in the event.
1 code implementation • Findings (ACL) 2022 • Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou
In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models.
no code implementations • ACL 2022 • Yubo Ma, Zehao Wang, Mukai Li, Yixin Cao, Meiqi Chen, Xinze Li, Wenqi Sun, Kunquan Deng, Kun Wang, Aixin Sun, Jing Shao
Events are fundamental building blocks of real-world happenings.
no code implementations • 27 Mar 2024 • Meiqi Chen, Yixin Cao, Yan Zhang, Chaochao Lu
Within our framework, we devise a causal graph to elucidate the predictions of MLLMs on VQA problems, and assess the causal effect of biases through an in-depth causal analysis.
no code implementations • 14 Mar 2024 • Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao
Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence.
1 code implementation • 20 Feb 2024 • Hao Peng, Xiaozhi Wang, Chunyang Li, Kaisheng Zeng, Jiangshan Duo, Yixin Cao, Lei Hou, Juanzi Li
However, natural knowledge updates in the real world come from the occurrences of new events rather than direct changes in factual triplets.
no code implementations • 19 Feb 2024 • Jiahao Ying, Yixin Cao, Bo Wang, Wei Tang, Yizhe Yang, Shuicheng Yan
The basic idea is to generate unseen and high-quality testing samples based on existing ones to mitigate leakage issues.
no code implementations • 18 Feb 2024 • Yubo Ma, Zhibin Gou, Junheng Hao, Ruochen Xu, Shuohang Wang, Liangming Pan, Yujiu Yang, Yixin Cao, Aixin Sun, Hany Awadalla, Weizhu Chen
To make this task more practical and solvable for LLMs, we introduce a new task setting named tool-augmented scientific reasoning.
no code implementations • 17 Dec 2023 • Wei Tang, Zhiqian Wu, Yixin Cao, Yong Liao, Pengyuan Zhou
As such, the aggregated language model can leverage complementary knowledge from multilingual KGs without demanding raw user data sharing.
1 code implementation • 2 Dec 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Liang Pang, Tat-Seng Chua
Temporal complex event forecasting aims to predict future events given those observed in history.
no code implementations • 16 Nov 2023 • Yuhan Sun, Mukai Li, Yixin Cao, Kun Wang, Wenxiao Wang, Xingyu Zeng, Rui Zhao
In response, we introduce ControlPE (Continuously Controllable Prompt Engineering).
no code implementations • 15 Nov 2023 • Minqian Liu, Ying Shen, Zhiyang Xu, Yixin Cao, Eunah Cho, Vaibhav Kumar, Reza Ghanadan, Lifu Huang
Natural Language Generation (NLG) typically involves evaluating the generated text in various aspects (e.g., consistency and naturalness) to obtain a comprehensive assessment.
no code implementations • 13 Nov 2023 • Haowen Pan, Yixin Cao, Xiaozhi Wang, Xun Yang
Multi-modal large language models (MLLMs) have achieved powerful capabilities for visual semantic understanding in recent years.
1 code implementation • 19 Oct 2023 • Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua
MolCA enables an LM (e.g., Galactica) to understand both text- and graph-based molecular contents via the cross-modal projector.
Ranked #4 on Molecule Captioning on ChEBI-20
1 code implementation • 18 Oct 2023 • Ruihao Shui, Yixin Cao, Xiang Wang, Tat-Seng Chua
Large language models (LLMs) have demonstrated great potential for domain-specific applications, such as the law domain.
1 code implementation • 13 Oct 2023 • Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, Dongsheng Li
Large language models (LLMs) have gained enormous attention from both academia and industry, due to their exceptional ability in language generation and extremely powerful generalization.
no code implementations • 29 Sep 2023 • Jiahao Ying, Yixin Cao, Kai Xiong, Yidong He, Long Cui, Yongbin Liu
Drawing on cognitive theory, we target the first scenario of decision-making styles where there is no superiority in the conflict and categorize LLMs' preference into dependent, intuitive, and rational/irrational styles.
1 code implementation • 10 Sep 2023 • Yan Meng, Liangming Pan, Yixin Cao, Min-Yen Kan
We introduce the task of real-world information-seeking follow-up question generation (FQG), which aims to generate follow-up questions seeking a more in-depth understanding of an initial question and answer.
1 code implementation • 12 Aug 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Tat-Seng Chua
The task of event forecasting aims to model the relational and temporal patterns of historical events and forecast what will happen in the future.
no code implementations • 9 Aug 2023 • Yu Zhao, Hao Fei, Yixin Cao, Bobo Li, Meishan Zhang, Jianguo Wei, Min Zhang, Tat-Seng Chua
A scene-event mapping mechanism is first designed to bridge the gap between the underlying scene structure and the high-level event semantic structure, resulting in an overall hierarchical scene-event (termed ICE) graph structure.
no code implementations • 29 Jun 2023 • Tao He, Ming Liu, Yixin Cao, Zekun Wang, Zihao Zheng, Zheng Chu, Bing Qin
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
no code implementations • 23 May 2023 • Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua
In this light, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires that a prompt optimized on a labeled source group simultaneously generalize to an unlabeled target group.
1 code implementation • 19 May 2023 • Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin
Extensive experiments on various datasets show that LLMs can effectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but imbalances in their abilities can lead to domination by superior LLMs.
1 code implementation • 19 May 2023 • Shengqiong Wu, Hao Fei, Yixin Cao, Lidong Bing, Tat-Seng Chua
First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG).
1 code implementation • 18 May 2023 • Xinze Li, Yixin Cao, Muhao Chen, Aixin Sun
Goal-oriented Script Generation is a new task of generating a list of steps that can fulfill a given goal.
1 code implementation • 3 May 2023 • Yubo Ma, Zehao Wang, Yixin Cao, Aixin Sun
Few-shot event detection (ED) has been widely studied, but noticeable discrepancies, e.g., in motivations, tasks, and experimental settings, hinder the understanding of models and future progress. This paper presents a thorough empirical study, a unified view of ED models, and a better unified baseline.
1 code implementation • 15 Mar 2023 • Yubo Ma, Yixin Cao, YongChing Hong, Aixin Sun
Large Language Models (LLMs) have made remarkable strides in various tasks.
1 code implementation • 22 Oct 2022 • Hao Wang, Yixin Cao, Yangguang Li, Zhen Huang, Kun Wang, Jing Shao
Document-level natural language inference (DOCNLI) is a new and challenging task in natural language processing, aiming at judging the entailment relationship between a pair of hypothesis and premise documents.
1 code implementation • 11 Oct 2022 • Linhai Zhuo, Yuqian Fu, Jingjing Chen, Yixin Cao, Yu-Gang Jiang
The proposed TGDM framework contains a Mixup-3T network for learning classifiers and a dynamic ratio generation network (DRGN) for learning the optimal mix ratio.
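The mixing operation underlying the framework above can be sketched as follows; this is a minimal illustration assuming the standard mixup formulation, with a fixed scalar ratio in place of the ratio that the DRGN learns dynamically (all names here are illustrative, not from the paper's code).

```python
import numpy as np

def mixup(x1, x2, lam):
    """Standard mixup: convex combination of two inputs with ratio lam in [0, 1].

    In TGDM the ratio is produced by the dynamic ratio generation network (DRGN);
    here it is a fixed scalar for illustration.
    """
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    return lam * x1 + (1.0 - lam) * x2

# Mixing a source-domain and a target-domain feature vector (toy data).
mixed = mixup([1.0, 0.0], [0.0, 1.0], lam=0.75)
# mixed == [0.75, 0.25]
```

The same combination is typically applied to the labels, so the classifier is trained on smoothly interpolated intermediate examples.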
no code implementations • 14 Jul 2022 • Weijian Chen, Yixin Cao, Fuli Feng, Xiangnan He, Yongdong Zhang
On the one hand, their performance will dramatically degrade along with the increasing sparsity of KGs.
no code implementations • 4 Jul 2022 • Tao He, Ming Liu, Yixin Cao, Tianwen Jiang, Zihao Zheng, Jingrun Zhang, Sendong Zhao, Bing Qin
In this paper, we address sparse KGC from these two motivations simultaneously, handle their respective drawbacks, and propose VEM$^2$L, a plug-and-play unified framework over sparse KGs.
no code implementations • COLING 2022 • Meiqi Chen, Yixin Cao, Kunquan Deng, Mukai Li, Kun Wang, Jing Shao, Yan Zhang
In this paper, we propose a novel Event Relational Graph TransfOrmer (ERGO) framework for DECI, which improves existing state-of-the-art (SOTA) methods upon two aspects.
1 code implementation • ACL 2022 • Yubo Ma, Zehao Wang, Yixin Cao, Mukai Li, Meiqi Chen, Kun Wang, Jing Shao
We have conducted extensive experiments on three benchmarks, including both sentence- and document-level EAE.
2 code implementations • 16 Feb 2022 • Shumin Deng, Yubo Ma, Ningyu Zhang, Yixin Cao, Bryan Hooi
Information Extraction (IE) seeks to derive structured information from unstructured texts, often facing challenges in low-resource scenarios due to data scarcity and unseen classes.
no code implementations • 18 Jan 2022 • Li Lin, Yixin Cao, Lifu Huang, Shu'ang Li, Xuming Hu, Lijie Wen, Jianmin Wang
To alleviate the knowledge forgetting issue, we design two modules, Im and Gm, for each type of knowledge, which are combined via prompt tuning.
no code implementations • 17 Jan 2022 • Kaisheng Zeng, Zhenhao Dong, Lei Hou, Yixin Cao, Minghao Hu, Jifan Yu, Xin Lv, Juanzi Li, Ling Feng
Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.
1 code implementation • 14 Jan 2022 • Zhiyuan Liu, Yixin Cao, Fuli Feng, Xiang Wang, Jie Tang, Kenji Kawaguchi, Tat-Seng Chua
We present Training Free Graph Matching (TFGM), a framework to boost the performance of Graph Neural Network (GNN)-based graph matching, providing a fast and promising solution without training.
no code implementations • 29 Sep 2021 • Changyi Xiao, Xiangnan He, Yixin Cao
Based on the general form, we show the principles of model design to satisfy logical rules.
1 code implementation • ACL 2021 • Yixin Cao, Xiang Ji, Xin Lv, Juanzi Li, Yonggang Wen, Hanwang Zhang
We present InferWiki, a Knowledge Graph Completion (KGC) dataset that improves upon existing benchmarks in inferential ability, assumptions, and patterns.
1 code implementation • ACL 2021 • Zikun Hu, Yixin Cao, Lifu Huang, Tat-Seng Chua
In this paper, we contribute a dataset and propose a paradigm to quantitatively evaluate the effect of attention and KG on bag-level relation extraction (RE).
1 code implementation • 26 Jul 2021 • Zikun Hu, Yixin Cao, Lifu Huang, Tat-Seng Chua
In this paper, we contribute a dataset and propose a paradigm to quantitatively evaluate the effect of attention and KG on bag-level relation extraction (RE).
1 code implementation • ACL 2021 • Meihan Tong, Shuai Wang, Bin Xu, Yixin Cao, Minghui Liu, Lei Hou, Juanzi Li
Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions.
1 code implementation • EMNLP 2021 • Xin Lv, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Yichi Zhang, Zelin Dai
However, we find in experiments that many paths given by these models are actually unreasonable, while little work has been done on evaluating their interpretability.
no code implementations • 1 Feb 2021 • Yixin Cao, Chuanwei Zou, Xianfeng Cheng
Flash loan attacks can grab millions of dollars from decentralized vaults in a single transaction, drawing increasing attention from Decentralized Finance (DeFi) players.
1 code implementation • 27 Nov 2020 • Yixin Cao, Jun Kuang, Ming Gao, Aoying Zhou, Yonggang Wen, Tat-Seng Chua
In this paper, we propose a general approach to learn relation prototypes from unlabeled texts, facilitating long-tail relation extraction by transferring knowledge from relation types with sufficient training data.
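The prototype idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the common prototypical-network recipe of averaging instance embeddings per relation and classifying by nearest prototype; all names and the toy embeddings are hypothetical.

```python
import numpy as np

def relation_prototypes(embeddings, labels):
    """Build one prototype per relation as the mean of its instance embeddings.

    This mean-of-embeddings construction is an assumption for illustration;
    the paper learns prototypes from unlabeled text.
    """
    protos = {}
    for rel in set(labels):
        instances = [e for e, l in zip(embeddings, labels) if l == rel]
        protos[rel] = np.mean(instances, axis=0)
    return protos

def classify(query, protos):
    """Assign the query embedding to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda rel: np.linalg.norm(np.asarray(query) - protos[rel]))

# Toy sentence embeddings for two relations (hypothetical data).
embs = [np.array([0.0, 0.0]), np.array([0.0, 2.0]),
        np.array([4.0, 0.0]), np.array([4.0, 2.0])]
labels = ["founder_of", "born_in", "founder_of", "born_in"]
protos = relation_prototypes(embs, labels)
```

A long-tail relation with few instances can then borrow structure from head relations simply by sharing the same embedding space in which prototypes live.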
1 code implementation • EMNLP 2020 • Yixin Cao, Liangming Pan, Juanzi Li, Zhiyuan Liu, Tat-Seng Chua
GNN-based EA methods achieve promising performance by modeling the KG structure defined by relation triples.
no code implementations • 6 Jul 2020 • Xun Yang, Jianfeng Dong, Yixin Cao, Xun Wang, Meng Wang, Tat-Seng Chua
To facilitate video retrieval with complex queries, we propose a Tree-augmented Cross-modal Encoding method by jointly learning the linguistic structure of queries and the temporal representation of videos.
1 code implementation • ACL 2020 • Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li, Jun Xie
Event Detection (ED) is a fundamental task in automatically structuring texts.
no code implementations • ACL 2020 • Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, Tat-Seng Chua
The curse of knowledge can impede communication between experts and laymen.
1 code implementation • 12 Mar 2020 • Xiang Wang, Yaokun Xu, Xiangnan He, Yixin Cao, Meng Wang, Tat-Seng Chua
Properly handling missing data is a fundamental challenge in recommendation.
1 code implementation • IJCNLP 2019 • Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, Tat-Seng Chua
Specifically, as for the knowledge embedding model, we utilize TransE to implicitly complete two KGs towards consistency and learn relational constraints between entities.
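TransE, named above, scores a triple (h, r, t) by treating the relation as a translation in embedding space, h + r ≈ t. A minimal sketch of the scoring function (the embeddings here are toy values, not learned ones):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: L2 distance of h + r from t.

    Lower is better; a perfectly consistent triple scores 0.
    """
    return float(np.linalg.norm(np.asarray(h) + np.asarray(r) - np.asarray(t)))

# Toy embeddings for (head, relation, tail) chosen so that h + r == t exactly.
h = np.array([0.1, 0.2])
r = np.array([0.3, 0.1])
t = np.array([0.4, 0.3])
transe_score(h, r, t)  # 0.0 for a perfectly consistent triple
```

Training then pushes this distance down for observed triples and up for corrupted ones (margin ranking loss), which is what lets the two KGs be "implicitly completed towards consistency."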
1 code implementation • IJCNLP 2019 • Yixin Cao, Zikun Hu, Tat-Seng Chua, Zhiyuan Liu, Heng Ji
Name tagging in low-resource languages or domains suffers from inadequate training data.
1 code implementation • ACL 2019 • Yixin Cao, Chengjiang Li, Zhiyuan Liu, Juanzi Li, Tat-Seng Chua
Entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments.
Ranked #30 on Entity Alignment on DBP15k zh-en
no code implementations • 12 Aug 2019 • Yunshan Ma, Xun Yang, Lizi Liao, Yixin Cao, Tat-Seng Chua
We unify three tasks of occasion, person and clothing discovery from multiple modalities of images, texts and metadata.
1 code implementation • 8 Jul 2019 • Jun Kuang, Yixin Cao, Jianbing Zheng, Xiangnan He, Ming Gao, Aoying Zhou
In contrast to existing distant-supervision approaches, which suffer from insufficient training corpora for extracting relations, our proposal mines implicit mutual relations from massive unlabeled corpora and transfers the semantic information of entity pairs into the RE model, making it more expressive and semantically plausible.
7 code implementations • 20 May 2019 • Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, Tat-Seng Chua
To provide more accurate, diverse, and explainable recommendation, it is essential to go beyond modeling user-item interactions and take side information into account.
Ranked #2 on Link Prediction on Yelp
1 code implementation • 17 Feb 2019 • Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, Tat-Seng Chua
In this paper, we jointly learn the model of recommendation and knowledge graph completion.
Ranked #1 on Knowledge Graph Completion on MovieLens 1M
no code implementations • EMNLP 2018 • Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, Tiansi Dong
Joint representation learning of words and entities benefits many NLP tasks, but has not been well explored in cross-lingual settings.
1 code implementation • COLING 2018 • Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu
To address this issue, we propose a novel neural model for collective entity linking, named as NCEL.
2 code implementations • 12 Nov 2018 • Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, Tat-Seng Chua
Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user's interest.
no code implementations • 31 Jan 2018 • Xu Chen, Yongfeng Zhang, Hongteng Xu, Yixin Cao, Zheng Qin, Hongyuan Zha
In this way, we can not only provide recommendation results to users, but also tell them why an item is recommended, by providing intuitive visual highlights in a personalized manner.
no code implementations • IJCNLP 2017 • Yixin Cao, Jiaxin Shi, Juanzi Li, Zhiyuan Liu, Chengjiang Li
To enhance the expressive ability of distributional word representation learning models, many researchers induce word senses through clustering and learn multiple embedding vectors for each word, namely multi-prototype word embedding models.
no code implementations • ACL 2017 • Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, Juanzi Li
Integrating text and knowledge into a unified semantic space has attracted significant research interests recently.