1 code implementation • Findings (ACL) 2022 • Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou
In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models.
no code implementations • ACL 2022 • Yubo Ma, Zehao Wang, Mukai Li, Yixin Cao, Meiqi Chen, Xinze Li, Wenqi Sun, Kunquan Deng, Kun Wang, Aixin Sun, Jing Shao
Events are fundamental building blocks of real-world happenings.
1 code implementation • NAACL 2022 • Meihan Tong, Bin Xu, Shuai Wang, Meihuan Han, Yixin Cao, Jiangqi Zhu, Siyu Chen, Lei Hou, Juanzi Li
Event extraction aims to identify an event and then extract the arguments participating in the event.
1 code implementation • 3 Mar 2025 • Haowen Pan, Xiaozhi Wang, Yixin Cao, Zenglin Shi, Xun Yang, Juanzi Li, Meng Wang
Knowledge editing aims to update outdated information in Large Language Models (LLMs).
1 code implementation • 22 Jan 2025 • Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, Juanzi Li
To address this, we propose a Pairwise Reward Model (Pairwise RM) combined with a knockout tournament for BoN sampling.
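A minimal sketch of how such a knockout tournament over N sampled solutions could be wired up, assuming the Pairwise RM is exposed as a simple preference function (the name `pairwise_prefer` and its 0/1 return convention are illustrative, not the paper's actual interface):

```python
import random
from typing import Callable, List

def knockout_best_of_n(candidates: List[str],
                       pairwise_prefer: Callable[[str, str], int]) -> str:
    """Pick one candidate via a single-elimination (knockout) tournament.

    `pairwise_prefer(a, b)` is assumed to return 0 if the reward model
    prefers `a` and 1 if it prefers `b`; the paper's real interface and
    tie-handling may differ.
    """
    pool = list(candidates)
    random.shuffle(pool)  # random initial bracket
    while len(pool) > 1:
        winners = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            winners.append(b if pairwise_prefer(a, b) == 1 else a)
        if len(pool) % 2 == 1:          # odd candidate gets a bye to the next round
            winners.append(pool[-1])
        pool = winners
    return pool[0]

# best = knockout_best_of_n(sampled_solutions, pairwise_prefer=my_pairwise_rm)
```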
1 code implementation • 27 Dec 2024 • Xinze Li, Yixin Cao, Yubo Ma, Aixin Sun
Extending context windows (i.e., Long Context, LC) and using retrievers to selectively access relevant information (i.e., Retrieval-Augmented Generation, RAG) are the two main strategies to enable LLMs to incorporate extremely long external contexts.
no code implementations • 19 Dec 2024 • Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Yiheng Sun, Zerui Chen, Ming Liu, Bing Qin
It fully automates the process from mining policies in dialogue records to learning policy planning.
Hierarchical Reinforcement Learning, Reinforcement Learning (RL)
no code implementations • 18 Dec 2024 • Wei Tang, Yixin Cao, Yang Deng, Jiahao Ying, Bo Wang, Yizhe Yang, Yuyue Zhao, Qi Zhang, Xuanjing Huang, Yugang Jiang, Yong Liao
Knowledge utilization is a critical aspect of LLMs, and understanding how they adapt to evolving knowledge is essential for their effective deployment.
1 code implementation • 29 Nov 2024 • Zhihao Sun, Haoran Jiang, Haoran Chen, Yixin Cao, Xipeng Qiu, Zuxuan Wu, Yu-Gang Jiang
Moreover, we construct the ForgeryAnalysis dataset through the Chain-of-Clues prompt, which includes analysis and reasoning text to upgrade the image manipulation detection task.
1 code implementation • 30 Oct 2024 • Shihan Dou, Jiazheng Zhang, Jianxiang Zang, Yunbo Tao, Weikang Zhou, Haoxiang Jia, Shichun Liu, Yuming Yang, Zhiheng Xi, Shenxi Wu, Shaoqing Zhang, Muling Wu, Changze Lv, Limao Xiong, WenYu Zhan, Lin Zhang, Rongxiang Weng, Jingang Wang, Xunliang Cai, Yueming Wu, Ming Wen, Rui Zheng, Tao Ji, Yixin Cao, Tao Gui, Xipeng Qiu, Qi Zhang, Xuanjing Huang
We introduce MPLSandbox, an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler and analysis tools for Large Language Models (LLMs).
1 code implementation • 21 Oct 2024 • Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, Juanzi Li
However, this approach fails to assess reward models on subtle but critical content changes and variations in style, resulting in a low correlation with policy model performance.
no code implementations • 18 Oct 2024 • Songheng Zhang, Lei Wang, Toby Jia-Jun Li, Qiaomu Shen, Yixin Cao, Yong Wang
It consists of two major modules: tabular data inference and expressive chart generation.
1 code implementation • 4 Oct 2024 • Haibo Wang, Zhiyang Xu, Yu Cheng, Shizhe Diao, Yufan Zhou, Yixin Cao, Qifan Wang, Weifeng Ge, Lifu Huang
Video Large Language Models (Video-LLMs) have demonstrated remarkable capabilities in coarse-grained video understanding; however, they struggle with fine-grained temporal grounding.
1 code implementation • 30 Sep 2024 • Changyi Xiao, Yixin Cao
The core concept of CKGC is to map the prediction values of KGC models to the range [0, 1], ensuring that values associated with true facts are close to 1, while values linked to false facts are close to 0.
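The snippet does not specify CKGC's calibration procedure; as a rough illustration of mapping raw KGC scores into [0, 1], the sketch below fits a Platt-style logistic calibration on labeled validation triples (the choice of logistic calibration and all names are assumptions, not the paper's method):

```python
import numpy as np

def fit_platt_calibration(scores: np.ndarray, labels: np.ndarray,
                          lr: float = 0.1, steps: int = 1000):
    """Fit p = sigmoid(a * score + b) by gradient descent on the log loss.

    `scores` are raw KGC model scores on validation triples; `labels` are
    1 for true facts and 0 for false facts.
    """
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                        # dLoss/dlogit for the log loss
        a -= lr * float(np.mean(grad * scores))
        b -= lr * float(np.mean(grad))
    return a, b

def calibrate(scores: np.ndarray, a: float, b: float) -> np.ndarray:
    """Map raw scores to calibrated values in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(a * scores + b)))
```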
1 code implementation • 30 Sep 2024 • Changyi Xiao, Xiangnan He, Yixin Cao
To reflect uncertainty, we first embed entities/relations as permutations of a set of random variables.
no code implementations • 25 Sep 2024 • Zehao Wang, Minye Wu, Yixin Cao, Yubo Ma, Meiqi Chen, Tinne Tuytelaars
The framework is structured around the context-free grammar (CFG) of the task.
1 code implementation • 19 Sep 2024 • Jin Jiang, Yuchen Yan, Yang Liu, Yonggang Jin, Shuai Peng, Mengdi Zhang, Xunliang Cai, Yixin Cao, Liangcai Gao, Zhi Tang
In this paper, we present a novel approach, called LogicPro, to enhance the complex logical reasoning of Large Language Models (LLMs) through program examples.
no code implementations • 3 Sep 2024 • Yuchen Yan, Jin Jiang, Yang Liu, Yixin Cao, Xin Xu, Mengdi Zhang, Xunliang Cai, Jian Shao
To the best of our knowledge, we are the first to introduce the spontaneous step-level self-correction ability of LLMs in mathematical reasoning.
no code implementations • 21 Aug 2024 • Kai Xiong, Xiao Ding, Li Du, Jiahao Ying, Ting Liu, Bing Qin, Yixin Cao
This makes it a challenge to diagnose and remedy the deficiencies of LLMs through rich label-free user queries.
1 code implementation • 1 Jul 2024 • Yubo Ma, Yuhang Zang, Liangyu Chen, Meiqi Chen, Yizhu Jiao, Xinze Li, Xinyuan Lu, Ziyu Liu, Yan Ma, Xiaoyi Dong, Pan Zhang, Liangming Pan, Yu-Gang Jiang, Jiaqi Wang, Yixin Cao, Aixin Sun
Moreover, 33.2% of the questions are cross-page questions requiring evidence across multiple pages.
no code implementations • 29 Jun 2024 • Jiahao Ying, Mingbao Lin, Yixin Cao, Wei Tang, Bo Wang, Qianru Sun, Xuanjing Huang, Shuicheng Yan
Inspired by the theory of "Learning from Errors", this framework employs an instructor LLM to meticulously analyze the specific errors within a target model, facilitating targeted and efficient training cycles.
1 code implementation • 24 Jun 2024 • Jiangshu Du, Yibo Wang, Wenting Zhao, Zhongfen Deng, Shuaiqi Liu, Renze Lou, Henry Peng Zou, Pranav Narayanan Venkit, Nan Zhang, Mukund Srinath, Haoran Ranran Zhang, Vipul Gupta, Yinghui Li, Tao Li, Fei Wang, Qin Liu, Tianlin Liu, Pengzhi Gao, Congying Xia, Chen Xing, Jiayang Cheng, Zhaowei Wang, Ying Su, Raj Sanjay Shah, Ruohao Guo, Jing Gu, Haoran Li, Kangda Wei, ZiHao Wang, Lu Cheng, Surangika Ranathunga, Meng Fang, Jie Fu, Fei Liu, Ruihong Huang, Eduardo Blanco, Yixin Cao, Rui Zhang, Philip S. Yu, Wenpeng Yin
This study focuses on how LLMs can assist NLP researchers, particularly examining the effectiveness of LLMs in assisting paper (meta-)reviewing and how recognizable that assistance is.
1 code implementation • 19 Jun 2024 • Bo Wang, Heyan Huang, Yixin Cao, Jiahao Ying, Wei Tang, Chong Feng
While large language models (LLMs) have made notable advancements in natural language processing, they continue to struggle with processing extensive text.
1 code implementation • 8 Jun 2024 • Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Ming Liu, Zerui Chen, Bing Qin
In proactive dialogue, the challenge lies not just in generating responses but in steering conversations toward predetermined goals, a task where Large Language Models (LLMs) typically struggle due to their reactive nature.
no code implementations • 6 Jun 2024 • Wei Tang, Yixin Cao, Jiahao Ying, Bo Wang, Yuyue Zhao, Yong Liao, Pengyuan Zhou
In this paper, we formalize a general "A + B" framework with varying combinations of foundation models and types for systematic investigation.
1 code implementation • 4 Jun 2024 • Zhihan Zhang, Yixin Cao, Chenchen Ye, Yunshan Ma, Lizi Liao, Tat-Seng Chua
We refer to a complex event composed of many news articles over an extended period as a Temporal Complex Event (TCE).
1 code implementation • 24 Apr 2024 • Vicente Balmaseda, Ying Xu, Yixin Cao, Nate Veldt
Cluster deletion is an NP-hard graph clustering objective with applications in computational biology and social network analysis, where the goal is to delete a minimum number of edges to partition a graph into cliques.
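For intuition about the objective (not the paper's algorithm), the helper below evaluates a candidate cluster-deletion solution: each cluster must already be a clique in the input graph (edges can only be deleted, never added), and the cost is the number of edges crossing clusters. The use of `networkx` and the function name are for illustration only.

```python
import networkx as nx

def cluster_deletion_cost(G: nx.Graph, clusters: list) -> int:
    """Cost of a candidate cluster-deletion solution.

    `clusters` is a list of node sets that together cover all nodes of G.
    Raises if some cluster is not a clique; otherwise returns the number
    of edges of G whose endpoints fall in different clusters.
    """
    label = {}
    for cid, cluster in enumerate(clusters):
        for v in cluster:
            label[v] = cid
        sub = G.subgraph(cluster)
        n = sub.number_of_nodes()
        if sub.number_of_edges() != n * (n - 1) // 2:
            raise ValueError(f"cluster {cid} is not a clique in G")
    return sum(1 for u, v in G.edges() if label[u] != label[v])
```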
1 code implementation • 27 Mar 2024 • Meiqi Chen, Yixin Cao, Yan Zhang, Chaochao Lu
Within this framework, we conduct an in-depth causal analysis to assess the causal effect of these biases on MLLM predictions.
1 code implementation • 14 Mar 2024 • Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao
The results show that our approach not only boosts the general reasoning performance of LLMs but also makes considerable strides towards their capacity for abstract reasoning, moving beyond simple memorization or imitation to a more nuanced understanding and application of generic facts.
1 code implementation • 20 Feb 2024 • Hao Peng, Xiaozhi Wang, Chunyang Li, Kaisheng Zeng, Jiangshan Duo, Yixin Cao, Lei Hou, Juanzi Li
However, natural knowledge updates in the real world come from the occurrences of new events rather than direct changes in factual triplets.
no code implementations • 19 Feb 2024 • Jiahao Ying, Yixin Cao, Yushi Bai, Qianru Sun, Bo Wang, Wei Tang, Zhaojun Ding, Yizhe Yang, Xuanjing Huang, Shuicheng Yan
There are two updating strategies: 1) a mimicking strategy that generates similar samples based on the original data, preserving their stylistic and contextual essence, and 2) an extending strategy that further expands existing samples at varying cognitive levels by adapting Bloom's taxonomy of educational objectives.
no code implementations • 18 Feb 2024 • Yubo Ma, Zhibin Gou, Junheng Hao, Ruochen Xu, Shuohang Wang, Liangming Pan, Yujiu Yang, Yixin Cao, Aixin Sun, Hany Awadalla, Weizhu Chen
To make this task more practical and solvable for LLMs, we introduce a new task setting named tool-augmented scientific reasoning.
no code implementations • 17 Dec 2023 • Wei Tang, Zhiqian Wu, Yixin Cao, Yong Liao, Pengyuan Zhou
As such, the aggregated language model can leverage complementary knowledge from multilingual KGs without demanding raw user data sharing.
1 code implementation • 2 Dec 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Liang Pang, Tat-Seng Chua
Temporal complex event forecasting aims to predict future events given the observed historical events.
no code implementations • 16 Nov 2023 • Yuhan Sun, Mukai Li, Yixin Cao, Kun Wang, Wenxiao Wang, Xingyu Zeng, Rui Zhao
In response, we introduce ControlPE (Continuously Controllable Prompt Engineering).
no code implementations • 15 Nov 2023 • Minqian Liu, Ying Shen, Zhiyang Xu, Yixin Cao, Eunah Cho, Vaibhav Kumar, Reza Ghanadan, Lifu Huang
Natural Language Generation (NLG) typically involves evaluating the generated text in various aspects (e.g., consistency and naturalness) to obtain a comprehensive assessment.
1 code implementation • 13 Nov 2023 • Haowen Pan, Yixin Cao, Xiaozhi Wang, Xun Yang, Meng Wang
Understanding the internal mechanisms by which multi-modal large language models (LLMs) interpret different modalities and integrate cross-modal representations is becoming increasingly critical for continuous improvements in both academia and industry.
1 code implementation • 19 Oct 2023 • Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua
MolCA enables an LM (e.g., Galactica) to understand both text- and graph-based molecular contents via the cross-modal projector; a rough sketch of the projector idea follows this entry.
Ranked #7 on Molecule Captioning on ChEBI-20
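As a loose illustration of the cross-modal projector idea (MolCA's actual projector is a querying transformer; the module names and dimensions below are assumptions), a small attention-based module can pool a graph encoder's node embeddings into a fixed number of "molecule tokens" in the LM's embedding space:

```python
import torch
import torch.nn as nn

class GraphToLMProjector(nn.Module):
    """Pools graph-encoder node embeddings into a fixed number of
    'molecule tokens' living in the LM embedding space.
    (Interface-level sketch only; MolCA's projector is more elaborate.)"""

    def __init__(self, graph_dim: int, lm_dim: int, num_query_tokens: int = 8):
        super().__init__()
        # graph_dim must be divisible by num_heads
        self.queries = nn.Parameter(torch.randn(num_query_tokens, graph_dim))
        self.attn = nn.MultiheadAttention(graph_dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(graph_dim, lm_dim)

    def forward(self, node_embs: torch.Tensor) -> torch.Tensor:
        # node_embs: (batch, num_nodes, graph_dim) from a (frozen) graph encoder
        q = self.queries.unsqueeze(0).expand(node_embs.size(0), -1, -1)
        pooled, _ = self.attn(q, node_embs, node_embs)  # cross-attend to nodes
        return self.proj(pooled)  # (batch, num_query_tokens, lm_dim)
```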
1 code implementation • 18 Oct 2023 • Ruihao Shui, Yixin Cao, Xiang Wang, Tat-Seng Chua
Large language models (LLMs) have demonstrated great potential for domain-specific applications, such as the law domain.
1 code implementation • 13 Oct 2023 • Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, Dongsheng Li
More in detail, we first investigate the deficiencies of LLMs in logical reasoning across different tasks.
1 code implementation • 9 Oct 2023 • Xinze Li, Yixin Cao, Liangming Pan, Yubo Ma, Aixin Sun
Although achieving great success, Large Language Models (LLMs) usually suffer from unreliable hallucinations.
no code implementations • 29 Sep 2023 • Jiahao Ying, Yixin Cao, Kai Xiong, Yidong He, Long Cui, Yongbin Liu
Drawing on cognitive theory, we target the first scenario of decision-making styles where there is no superiority in the conflict and categorize LLMs' preference into dependent, intuitive, and rational/irrational styles.
no code implementations • 10 Sep 2023 • Yan Meng, Liangming Pan, Yixin Cao, Min-Yen Kan
We introduce the task of real-world information-seeking follow-up question generation (FQG), which aims to generate follow-up questions seeking a more in-depth understanding of an initial question and answer.
1 code implementation • 12 Aug 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Tat-Seng Chua
The task of event forecasting aims to model relational and temporal patterns based on historical events and to forecast what will happen in the future.
no code implementations • 9 Aug 2023 • Yu Zhao, Hao Fei, Yixin Cao, Bobo Li, Meishan Zhang, Jianguo Wei, Min Zhang, Tat-Seng Chua
A scene-event mapping mechanism is first designed to bridge the gap between the underlying scene structure and the high-level event semantic structure, resulting in an overall hierarchical scene-event (termed ICE) graph structure.
no code implementations • 29 Jun 2023 • Tao He, Ming Liu, Yixin Cao, Zekun Wang, Zihao Zheng, Zheng Chu, Bing Qin
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
no code implementations • 23 May 2023 • Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua
In this light, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires that the prompt optimized over the labeled source group simultaneously generalize to an unlabeled target group.
1 code implementation • 19 May 2023 • Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin
Through extensive experiments on various datasets, LLMs can effectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but imbalances in their abilities can lead to domination by superior LLMs.
1 code implementation • 19 May 2023 • Shengqiong Wu, Hao Fei, Yixin Cao, Lidong Bing, Tat-Seng Chua
First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG).
1 code implementation • 18 May 2023 • Xinze Li, Yixin Cao, Muhao Chen, Aixin Sun
Goal-oriented Script Generation is a new task of generating a list of steps that can fulfill the given goal.
1 code implementation • 3 May 2023 • Yubo Ma, Zehao Wang, Yixin Cao, Aixin Sun
Few-shot event detection (ED) has been widely studied, but this has brought noticeable discrepancies, e.g., various motivations, tasks, and experimental settings, that hinder the understanding of models and future progress. This paper presents a thorough empirical study, a unified view of ED models, and a better unified baseline.
1 code implementation • 15 Mar 2023 • Yubo Ma, Yixin Cao, YongChing Hong, Aixin Sun
Large Language Models (LLMs) have made remarkable strides in various tasks.
1 code implementation • 22 Oct 2022 • Hao Wang, Yixin Cao, Yangguang Li, Zhen Huang, Kun Wang, Jing Shao
Document-level natural language inference (DOCNLI) is a new challenging task in natural language processing, aiming at judging the entailment relationship between a pair of hypothesis and premise documents.
1 code implementation • 11 Oct 2022 • Linhai Zhuo, Yuqian Fu, Jingjing Chen, Yixin Cao, Yu-Gang Jiang
The proposed TGDM framework contains a Mixup-3T network for learning classifiers and a dynamic ratio generation network (DRGN) for learning the optimal mix ratio.
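The general mechanism, sketched loosely below, is standard mixup interpolation with a learned ratio: a small network outputs a value lambda in (0, 1), and two inputs are blended as lambda * x_src + (1 - lambda) * x_tgt. The real DRGN's inputs and architecture are not given in the snippet, so everything here is illustrative:

```python
import torch
import torch.nn as nn

class DynamicRatioMixup(nn.Module):
    """A small network predicts a mix ratio in (0, 1); two inputs are then
    blended by standard mixup interpolation with that ratio."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.ratio_net = nn.Sequential(
            nn.Linear(feat_dim * 2, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x_src, x_tgt, f_src, f_tgt):
        # f_src, f_tgt: (batch, feat_dim) summaries of the two domains
        lam = self.ratio_net(torch.cat([f_src, f_tgt], dim=-1))   # (batch, 1)
        lam = lam.view(-1, *([1] * (x_src.dim() - 1)))            # broadcast over input dims
        return lam * x_src + (1 - lam) * x_tgt                    # mixed inputs
```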
no code implementations • 14 Jul 2022 • Weijian Chen, Yixin Cao, Fuli Feng, Xiangnan He, Yongdong Zhang
On the one hand, their performance will dramatically degrade along with the increasing sparsity of KGs.
no code implementations • 4 Jul 2022 • Tao He, Ming Liu, Yixin Cao, Tianwen Jiang, Zihao Zheng, Jingrun Zhang, Sendong Zhao, Bing Qin
In this paper, we address sparse KGC from these two motivations simultaneously, handle their respective drawbacks, and propose a plug-and-play unified framework, VEM$^2$L, over sparse KGs.
no code implementations • COLING 2022 • Meiqi Chen, Yixin Cao, Kunquan Deng, Mukai Li, Kun Wang, Jing Shao, Yan Zhang
In this paper, we propose a novel Event Relational Graph TransfOrmer (ERGO) framework for DECI, which improves existing state-of-the-art (SOTA) methods upon two aspects.
1 code implementation • ACL 2022 • Yubo Ma, Zehao Wang, Yixin Cao, Mukai Li, Meiqi Chen, Kun Wang, Jing Shao
We have conducted extensive experiments on three benchmarks, including both sentence- and document-level EAE.
3 code implementations • 16 Feb 2022 • Shumin Deng, Yubo Ma, Ningyu Zhang, Yixin Cao, Bryan Hooi
Information Extraction (IE) seeks to derive structured information from unstructured texts, often facing challenges in low-resource scenarios due to data scarcity and unseen classes.
no code implementations • 18 Jan 2022 • Li Lin, Yixin Cao, Lifu Huang, Shu'ang Li, Xuming Hu, Lijie Wen, Jianmin Wang
To alleviate the knowledge forgetting issue, we design two modules, Im and Gm, for each type of knowledge, which are combined via prompt tuning.
no code implementations • 17 Jan 2022 • Kaisheng Zeng, Zhenhao Dong, Lei Hou, Yixin Cao, Minghao Hu, Jifan Yu, Xin Lv, Juanzi Li, Ling Feng
Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.
1 code implementation • 14 Jan 2022 • Zhiyuan Liu, Yixin Cao, Fuli Feng, Xiang Wang, Jie Tang, Kenji Kawaguchi, Tat-Seng Chua
We present a framework of Training Free Graph Matching (TFGM) to boost the performance of Graph Neural Networks (GNNs) based graph matching, providing a fast promising solution without training (training-free).
no code implementations • 29 Sep 2021 • Changyi Xiao, Xiangnan He, Yixin Cao
Based on the general form, we show the principles of model design to satisfy logical rules.
1 code implementation • ACL 2021 • Yixin Cao, Xiang Ji, Xin Lv, Juanzi Li, Yonggang Wen, Hanwang Zhang
We present InferWiki, a Knowledge Graph Completion (KGC) dataset that improves upon existing benchmarks in inferential ability, assumptions, and patterns.
1 code implementation • ACL 2021 • Zikun Hu, Yixin Cao, Lifu Huang, Tat-Seng Chua
In this paper, we contribute a dataset and propose a paradigm to quantitatively evaluate the effect of attention and KG on bag-level relation extraction (RE).
1 code implementation • 26 Jul 2021 • Zikun Hu, Yixin Cao, Lifu Huang, Tat-Seng Chua
In this paper, we contribute a dataset and propose a paradigm to quantitatively evaluate the effect of attention and KG on bag-level relation extraction (RE).
1 code implementation • ACL 2021 • Meihan Tong, Shuai Wang, Bin Xu, Yixin Cao, Minghui Liu, Lei Hou, Juanzi Li
Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions.
1 code implementation • EMNLP 2021 • Xin Lv, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Yichi Zhang, Zelin Dai
However, we find in experiments that many paths given by these models are actually unreasonable, while little work has been done on interpretability evaluation for them.
no code implementations • 1 Feb 2021 • Yixin Cao, Chuanwei Zou, Xianfeng Cheng
Flash Loan attack can grab millions of dollars from decentralized vaults in one single transaction, drawing increasing attention from the Decentralized Finance (DeFi) players.
1 code implementation • 27 Nov 2020 • Yixin Cao, Jun Kuang, Ming Gao, Aoying Zhou, Yonggang Wen, Tat-Seng Chua
In this paper, we propose a general approach to learn relation prototypes from unlabeled texts, facilitating long-tail relation extraction by transferring knowledge from relation types with sufficient training data.
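As a generic illustration of prototype-based relation extraction (the paper learns prototypes from unlabeled text; the mean-pooling and nearest-prototype classification below are simplifications, and all names are hypothetical):

```python
import torch

def build_relation_prototypes(instance_embs: torch.Tensor,
                              relation_ids: torch.Tensor) -> dict:
    """Prototype of each relation = mean embedding of its instances."""
    return {rid: instance_embs[relation_ids == rid].mean(dim=0)
            for rid in relation_ids.unique().tolist()}

def nearest_prototype(query_emb: torch.Tensor, prototypes: dict) -> int:
    """Assign a query instance to the relation with the most similar prototype."""
    sims = {rid: torch.cosine_similarity(query_emb, proto, dim=0).item()
            for rid, proto in prototypes.items()}
    return max(sims, key=sims.get)
```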
1 code implementation • EMNLP 2020 • Yixin Cao, Liangming Pan, Juanzi Li, Zhiyuan Liu, Tat-Seng Chua
GNN-based EA methods present promising performances by modeling the KG structure defined by relation triples.
no code implementations • 6 Jul 2020 • Xun Yang, Jianfeng Dong, Yixin Cao, Xun Wang, Meng Wang, Tat-Seng Chua
To facilitate video retrieval with complex queries, we propose a Tree-augmented Cross-modal Encoding method by jointly learning the linguistic structure of queries and the temporal representation of videos.
1 code implementation • ACL 2020 • Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li, Jun Xie
Event Detection (ED) is a fundamental task in automatically structuring texts.
no code implementations • ACL 2020 • Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, Tat-Seng Chua
The curse of knowledge can impede communication between experts and laymen.
1 code implementation • 12 Mar 2020 • Xiang Wang, Yaokun Xu, Xiangnan He, Yixin Cao, Meng Wang, Tat-Seng Chua
Properly handling missing data is a fundamental challenge in recommendation.
1 code implementation • IJCNLP 2019 • Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, Tat-Seng Chua
Specifically, as for the knowledge embedding model, we utilize TransE to implicitly complete two KGs towards consistency and learn relational constraints between entities.
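TransE itself is standard: a triple (h, r, t) is scored by the distance between h + r and t, and embeddings are trained with a margin loss against corrupted triples. A minimal sketch (dimensions and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Score a triple (h, r, t) by ||h + r - t||_2; lower means more plausible."""

    def __init__(self, num_entities: int, num_relations: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # h, r, t: LongTensors of entity / relation indices
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=2, dim=-1)

def margin_loss(pos_score, neg_score, margin: float = 1.0):
    """Push corrupted (negative) triples at least `margin` further than positives."""
    return torch.relu(margin + pos_score - neg_score).mean()
```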
1 code implementation • ACL 2019 • Yixin Cao, Chengjiang Li, Zhiyuan Liu, Juanzi Li, Tat-Seng Chua
Entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments.
Ranked #30 on Entity Alignment on DBP15k zh-en
1 code implementation • IJCNLP 2019 • Yixin Cao, Zikun Hu, Tat-Seng Chua, Zhiyuan Liu, Heng Ji
Name tagging in low-resource languages or domains suffers from inadequate training data.
no code implementations • 12 Aug 2019 • Yunshan Ma, Xun Yang, Lizi Liao, Yixin Cao, Tat-Seng Chua
We unify three tasks of occasion, person and clothing discovery from multiple modalities of images, texts and metadata.
1 code implementation • 8 Jul 2019 • Jun Kuang, Yixin Cao, Jianbing Zheng, Xiangnan He, Ming Gao, Aoying Zhou
In contrast to existing distant supervision approaches, which suffer from insufficient training corpora for extracting relations, our proposal of mining implicit mutual relations from massive unlabeled corpora transfers the semantic information of entity pairs into the RE model, making it more expressive and semantically plausible.
7 code implementations • 20 May 2019 • Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, Tat-Seng Chua
To provide more accurate, diverse, and explainable recommendation, it is compulsory to go beyond modeling user-item interactions and take side information into account.
Ranked #2 on Link Prediction on Yelp
1 code implementation • 17 Feb 2019 • Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, Tat-Seng Chua
In this paper, we jointly learn the model of recommendation and knowledge graph completion.
Ranked #1 on Knowledge Graph Completion on MovieLens 1M
no code implementations • EMNLP 2018 • Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, Tiansi Dong
Joint representation learning of words and entities benefits many NLP tasks, but has not been well explored in cross-lingual settings.
1 code implementation • COLING 2018 • Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu
To address this issue, we propose a novel neural model for collective entity linking, named as NCEL.
2 code implementations • 12 Nov 2018 • Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, Tat-Seng Chua
Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user's interest.
no code implementations • 31 Jan 2018 • Xu Chen, Yongfeng Zhang, Hongteng Xu, Yixin Cao, Zheng Qin, Hongyuan Zha
In this way, we can not only provide recommendation results to users, but also tell them why an item is recommended by providing intuitive visual highlights in a personalized manner.
no code implementations • IJCNLP 2017 • Yixin Cao, Jiaxin Shi, Juanzi Li, Zhiyuan Liu, Chengjiang Li
To enhance the expressive ability of distributional word representation learning models, many researchers induce word senses through clustering and learn multiple embedding vectors for each word, namely the multi-prototype word embedding model.
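The clustering recipe the snippet refers to can be sketched as follows: cluster the context vectors of a word's occurrences and treat each centroid as one sense prototype (this is the generic approach, not the specific model proposed in the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def induce_sense_prototypes(context_vectors: np.ndarray, num_senses: int = 3):
    """Cluster the context vectors of one word's occurrences; each centroid
    serves as a prototype (sense) embedding for that word."""
    km = KMeans(n_clusters=num_senses, n_init=10, random_state=0)
    sense_ids = km.fit_predict(context_vectors)     # sense label per occurrence
    return km.cluster_centers_, sense_ids           # (num_senses, dim) prototypes
```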
no code implementations • ACL 2017 • Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, Juanzi Li
Integrating text and knowledge into a unified semantic space has attracted significant research interests recently.