1 code implementation • Findings (NAACL) 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Multimodal named entity recognition and relation extraction (MNER and MRE) are fundamental and crucial tasks in information extraction.
1 code implementation • COLING 2022 • Zezhong Xu, Peng Ye, Hui Chen, Meng Zhao, Huajun Chen, Wen Zhang
Based on this idea, we propose a transformer-based rule mining approach, Ruleformer.
no code implementations • 5 Dec 2023 • Huajun Chen
Humankind's understanding of the world is fundamentally linked to our perception and cognition, with \emph{human languages} serving as one of the major carriers of \emph{world knowledge}.
1 code implementation • 11 Nov 2023 • Yichi Zhang, Zhuo Chen, Yin Fang, Lei Cheng, Yanxi Lu, Fangming Li, Wen Zhang, Huajun Chen
Besides, we design a new alignment objective to align the LLM preference with human preference, aiming to train a better LLM for real-scenario domain-specific QA to generate reliable and user-friendly answers.
no code implementations • 19 Oct 2023 • Zhiwei Huang, Long Jin, Junjie Wang, Mingchen Tu, Yin Hua, Zhiqiang Liu, Jiawei Meng, Huajun Chen, Wen Zhang
To address this need, we have developed the ConferenceQA dataset for 7 diverse academic conferences with human annotations.
1 code implementation • 18 Oct 2023 • Xiang Chen, Duanzheng Song, Honghao Gui, Chengxi Wang, Ningyu Zhang, Fei Huang, Chengfei Lv, Dan Zhang, Huajun Chen
Large Language Models (LLMs), such as ChatGPT/GPT-4, have garnered widespread attention owing to their myriad of practical applications, yet their adoption has been constrained by issues of fact-conflicting hallucinations across web platforms.
1 code implementation • 12 Oct 2023 • Siyuan Cheng, Bozhong Tian, Qingbin Liu, Xi Chen, Yongheng Wang, Huajun Chen, Ningyu Zhang
In this paper, we focus on editing Multimodal Large Language Models (MLLMs).
1 code implementation • 10 Oct 2023 • Yichi Zhang, Zhuo Chen, Wen Zhang, Huajun Chen
In this paper, we discuss how to incorporate helpful KG structural information into LLMs, aiming to achieve structure-aware reasoning in the LLMs.
no code implementations • 5 Oct 2023 • Zeyuan Wang, Qiang Zhang, Keyan Ding, Ming Qin, Xiang Zhuang, Xiaotong Li, Huajun Chen
To address this challenge, we propose InstructProtein, an innovative LLM that possesses bidirectional generation capabilities in both human and protein languages: (i) taking a protein sequence as input to predict its textual function description and (ii) using natural language to prompt protein sequence generation.
1 code implementation • 3 Oct 2023 • Zhen Bi, Ningyu Zhang, Yida Xue, Yixin Ou, Daxiong Ji, Guozhou Zheng, Huajun Chen
Ocean science, which delves into the oceans that are reservoirs of life and biodiversity, is of great significance given that oceans cover over 70% of our planet's surface.
1 code implementation • 3 Oct 2023 • Zhoubo Li, Ningyu Zhang, Yunzhi Yao, Mengru Wang, Xi Chen, Huajun Chen
This paper pioneers the investigation into the potential pitfalls associated with knowledge editing for LLMs.
1 code implementation • 3 Oct 2023 • Shengyu Mao, Ningyu Zhang, Xiaohan Wang, Mengru Wang, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen
This task seeks to adjust the models' responses to opinion-related questions on specified topics since an individual's personality often manifests in the form of their expressed opinions, thereby showcasing different personality traits.
1 code implementation • 14 Sep 2023 • Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, Shiding Zhu, Jiyu Chen, Wentao Zhang, Ningyu Zhang, Huajun Chen, Peng Cui, Mrinmaya Sachan
Recent advances on large language models (LLMs) enable researchers and developers to build autonomous language agents that can automatically solve various tasks and interact with environments, humans, and other agents using natural language interfaces.
1 code implementation • 29 Aug 2023 • Zhen Bi, Ningyu Zhang, Yinuo Jiang, Shumin Deng, Guozhou Zheng, Huajun Chen
Although there are effective methods like program-of-thought prompting for LLMs which uses programming language to tackle complex reasoning tasks, the specific impact of code data on the improvement of reasoning capabilities remains under-explored.
1 code implementation • 15 Aug 2023 • Long Jin, Zhen Yao, Mingyang Chen, Huajun Chen, Wen Zhang
Though KGE models' capabilities over different relational patterns have been analyzed in theory, and a rough connection between better relational-pattern modeling and better KGC performance has been established, a comprehensive quantitative analysis of KGE models over relational patterns remains absent; it is thus unclear how a KGE model's theoretical support for a relational pattern contributes to its performance on triples associated with that pattern.
1 code implementation • 14 Aug 2023 • Peng Wang, Ningyu Zhang, Xin Xie, Yunzhi Yao, Bozhong Tian, Mengru Wang, Zekun Xi, Siyuan Cheng, Kangwei Liu, Guozhou Zheng, Huajun Chen
Large Language Models (LLMs) usually suffer from knowledge cutoff or fallacy issues, which means they are unaware of unseen events or generate text with incorrect facts owing to the outdated/noisy data.
1 code implementation • 30 Jul 2023 • Zhuo Chen, Lingbing Guo, Yin Fang, Yichi Zhang, Jiaoyan Chen, Jeff Z. Pan, Yangning Li, Huajun Chen, Wen Zhang
As a crucial extension of entity alignment (EA), multi-modal entity alignment (MMEA) aims to identify identical entities across disparate knowledge graphs (KGs) by exploiting associated visual information.
Ranked #1 on Multi-modal Entity Alignment on UMVM-oea-d-w-v2 (using extra training data)
1 code implementation • 29 Jun 2023 • Xiang Zhuang, Qiang Zhang, Bin Wu, Keyan Ding, Yin Fang, Huajun Chen
To effectively utilize many-to-many correlations of molecules and properties, we propose a Graph Sampling-based Meta-learning (GS-Meta) framework for few-shot molecular property prediction.
1 code implementation • 13 Jun 2023 • Yin Fang, Xiaozhuan Liang, Ningyu Zhang, Kangwei Liu, Rui Huang, Zhuo Chen, Xiaohui Fan, Huajun Chen
Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields.
Catalytic activity prediction • Chemical-Disease Interaction Extraction (+14 more tasks)
1 code implementation • 24 May 2023 • Lingbing Guo, Weiqing Wang, Zhuo Chen, Ningyu Zhang, Zequn Sun, Yixuan Lai, Qiang Zhang, Huajun Chen
Reasoning system dynamics is one of the most important analytical approaches for many scientific studies.
2 code implementations • 24 May 2023 • Lingbing Guo, Zhuo Chen, Jiaoyan Chen, Huajun Chen
We then reveal that their incomplete objective limits the capacity on both entity alignment and entity synthesis (i.e., generating new entities).
1 code implementation • 22 May 2023 • Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Huajun Chen, Ningyu Zhang
This paper presents an exhaustive quantitative and qualitative evaluation of Large Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning.
2 code implementations • 22 May 2023 • Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, Ningyu Zhang
Our objective is to provide valuable insights into the effectiveness and feasibility of each editing technique, thereby assisting the community in making informed decisions on the selection of the most appropriate method for a specific task or context.
3 code implementations • 22 May 2023 • Shuofei Qiao, Honghao Gui, Huajun Chen, Ningyu Zhang
Tools serve as pivotal interfaces that enable humans to understand and reshape the world.
1 code implementation • 15 May 2023 • Hongbin Ye, Honghao Gui, Xin Xu, Xi Chen, Huajun Chen, Ningyu Zhang
This necessitates a system that can handle evolving schema automatically to extract information for KGC.
1 code implementation • 15 May 2023 • Yunzhi Yao, Peng Wang, Shengyu Mao, Chuanqi Tan, Fei Huang, Huajun Chen, Ningyu Zhang
Previous studies have revealed that vanilla pre-trained language models (PLMs) lack the capacity to handle knowledge-intensive NLP tasks alone; thus, several works have attempted to integrate external knowledge into PLMs.
1 code implementation • 15 May 2023 • Xiang Chen, Ningyu Zhang, Jintian Zhang, Xiaohan Wang, Tongtong Wu, Xi Chen, Yongheng Wang, Huajun Chen
Multimodal Knowledge Graph Construction (MKGC) involves creating structured representations of entities and relations using multiple modalities, such as text and images.
1 code implementation • 28 Apr 2023 • Wen Zhang, Zhen Yao, Mingyang Chen, Zhiwei Huang, Huajun Chen
Owing to the dynamic characteristics of knowledge graphs, many inductive knowledge graph representation learning (KGRL) works have been proposed in recent years, focusing on enabling prediction over new entities.
2 code implementations • 18 Apr 2023 • Zhen Bi, Jing Chen, Yinuo Jiang, Feiyu Xiong, Wei Guo, Huajun Chen, Ningyu Zhang
However, large generative language models trained on structured data such as code have demonstrated impressive capability in understanding natural language for structural prediction and reasoning tasks.
1 code implementation • 3 Mar 2023 • Wen Zhang, Yushan Zhu, Mingyang Chen, Yuxia Geng, Yufeng Huang, Yajing Xu, Wenting Song, Huajun Chen
Through experiments, we justify that the pretrained KGTransformer could be used off the shelf as a general and effective KRF module across KG-related tasks.
1 code implementation • 3 Feb 2023 • Mingyang Chen, Wen Zhang, Zhen Yao, Yushan Zhu, Yang Gao, Jeff Z. Pan, Huajun Chen
In our proposed model, Entity-Agnostic Representation Learning (EARL), we only learn the embeddings for a small set of entities and refer to them as reserved entities.
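The reserved-entity idea above can be sketched roughly as follows. This is a minimal NumPy toy, not EARL's actual model: the paper derives entity features from connected relations and a GNN-based encoder, whereas this sketch simply mixes reserved-entity embeddings with softmax attention; all names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_reserved = 16, 8  # hypothetical embedding size / number of reserved entities

# The ONLY entity embeddings that are actually learned.
reserved = rng.normal(size=(n_reserved, d))

def embed_entity(feature):
    """Derive an embedding for an arbitrary entity as an attention-weighted
    mixture of the reserved entities' embeddings (a toy stand-in for
    EARL's entity-agnostic encoder)."""
    scores = reserved @ feature            # similarity to each reserved entity
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax attention weights
    return weights @ reserved              # (d,) mixed embedding

# A hypothetical entity "feature" (EARL would build this from KG structure).
feat = rng.normal(size=(d,))
e = embed_entity(feat)
```

Because only `n_reserved` embeddings are trained, the parameter count stays fixed however many entities the KG contains, which is the property the sketch is meant to illustrate.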
no code implementations • 3 Feb 2023 • Mingyang Chen, Wen Zhang, Yuxia Geng, Zezhong Xu, Jeff Z. Pan, Huajun Chen
In this paper, we use a set of general terminologies to unify these methods and refer to them as Knowledge Extrapolation.
1 code implementation • 26 Jan 2023 • Yin Fang, Ningyu Zhang, Zhuo Chen, Lingbing Guo, Xiaohui Fan, Huajun Chen
However, despite the potential of language models in molecule generation, they face numerous challenges such as the generation of syntactically or chemically flawed molecules, narrow domain focus, and limitations in creating diverse and directionally feasible molecules due to a dearth of annotated data or external molecular databases.
2 code implementations • 25 Jan 2023 • Xiang Chen, Lei LI, Shuofei Qiao, Ningyu Zhang, Chuanqi Tan, Yong Jiang, Fei Huang, Huajun Chen
Previous typical solutions mainly obtain a NER model by pre-trained language models (PLMs) with data from a rich-resource domain and adapt it to the target domain.
2 code implementations • 25 Jan 2023 • Siyuan Cheng, Ningyu Zhang, Bozhong Tian, Xi Chen, Qingbin Liu, Huajun Chen
To address this issue, we propose a new task of editing language model-based KG embeddings in this paper.
1 code implementation • 3 Jan 2023 • Zhen Yao, Wen Zhang, Mingyang Chen, Yufeng Huang, Yi Yang, Huajun Chen
And in AnKGE, we train an analogy function for each level of analogical inference with the original element embedding from a well-trained KGE model as input, which outputs the analogical object embedding.
1 code implementation • 29 Dec 2022 • Zhuo Chen, Jiaoyan Chen, Wen Zhang, Lingbing Guo, Yin Fang, Yufeng Huang, Yichi Zhang, Yuxia Geng, Jeff Z. Pan, Wenting Song, Huajun Chen
Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images.
Ranked #1 on Entity Alignment on FBYG15k (using extra training data)
2 code implementations • 19 Dec 2022 • Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen
Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc.
2 code implementations • 14 Nov 2022 • Lei LI, Xiang Chen, Shuofei Qiao, Feiyu Xiong, Huajun Chen, Ningyu Zhang
Multimodal relation extraction is an essential task for knowledge graph construction.
1 code implementation • 23 Oct 2022 • Hongbin Ye, Ningyu Zhang, Hui Chen, Huajun Chen
Our contributions are threefold: (1) We present a detailed, complete taxonomy for the generative KGC methods; (2) We provide a theoretical and empirical analysis of the generative KGC methods; (3) We propose several research directions that can be developed in the future.
1 code implementation • 20 Oct 2022 • Zhuo Chen, Wen Zhang, Yufeng Huang, Mingyang Chen, Yuxia Geng, Hongtao Yu, Zhen Bi, Yichi Zhang, Zhen Yao, Wenting Song, Xinliang Wu, Yi Yang, Mingyi Chen, Zhaoyang Lian, YingYing Li, Lei Cheng, Huajun Chen
In this work, we share our experience on tele-knowledge pre-training for fault analysis, a crucial task in telecommunication applications that requires a wide range of knowledge normally found in both machine log data and product documents.
1 code implementation • 19 Oct 2022 • Yunzhi Yao, Shengyu Mao, Ningyu Zhang, Xiang Chen, Shumin Deng, Xi Chen, Huajun Chen
With the development of pre-trained language models, many prompt-based approaches to data-efficient knowledge graph construction have been proposed and achieved impressive performance.
2 code implementations • 19 Oct 2022 • Xin Xu, Xiang Chen, Ningyu Zhang, Xin Xie, Xi Chen, Huajun Chen
This paper presents an empirical study to build relation extraction systems in low-resource settings.
1 code implementation • 8 Oct 2022 • Yuxia Geng, Jiaoyan Chen, Jeff Z. Pan, Mingyang Chen, Song Jiang, Wen Zhang, Huajun Chen
Subgraph reasoning with message passing is a promising and popular solution.
2 code implementations • 1 Oct 2022 • Ningyu Zhang, Lei LI, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen
Analogical reasoning is fundamental to human cognition and holds an important place in various fields.
1 code implementation • 30 Sep 2022 • Shumin Deng, Chengming Wang, Zhoubo Li, Ningyu Zhang, Zelin Dai, Hehong Chen, Feiyu Xiong, Ming Yan, Qiang Chen, Mosha Chen, Jiaoyan Chen, Jeff Z. Pan, Bryan Hooi, Huajun Chen
We release all the open resources (OpenBG benchmarks) derived from it for the community and report experimental results of KG-centric tasks.
no code implementations • 19 Sep 2022 • Zezhong Xu, Wen Zhang, Peng Ye, Hui Chen, Huajun Chen
In this work, we propose a Neural and Symbolic Entangled framework (ENeSy) for complex query answering, which enables the neural and symbolic reasoning to enhance each other to alleviate the cascading error and KG incompleteness.
1 code implementation • 4 Jul 2022 • Zhuo Chen, Yufeng Huang, Jiaoyan Chen, Yuxia Geng, Wen Zhang, Yin Fang, Jeff Z. Pan, Huajun Chen
Specifically, we (1) developed a cross-modal semantic grounding network to investigate the model's capability of disentangling semantic attributes from the images; (2) applied an attribute-level contrastive learning strategy to further enhance the model's discrimination on fine-grained visual characteristics against the attribute co-occurrence and imbalance; (3) proposed a multi-task learning policy for considering multi-model objectives.
Ranked #1 on Zero-Shot Learning on CUB-200-2011
1 code implementation • 8 Jun 2022 • Yuxia Geng, Jiaoyan Chen, Wen Zhang, Yajing Xu, Zhuo Chen, Jeff Z. Pan, Yufeng Huang, Feiyu Xiong, Huajun Chen
In this paper, we focus on ontologies for augmenting ZSL, and propose to learn disentangled ontology embeddings guided by ontology properties to capture and utilize more fine-grained class relationships in different aspects.
2 code implementations • 29 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote during fully-supervised training or overfit shallow patterns with low-shot data.
no code implementations • 27 May 2022 • Siyuan Cheng, Xiaozhuan Liang, Zhen Bi, Huajun Chen, Ningyu Zhang
Existing data-centric methods for protein science generally cannot sufficiently capture and leverage biology knowledge, which may be crucial for many protein tasks.
1 code implementation • 22 May 2022 • Yincen Qu, Ningyu Zhang, Hui Chen, Zelin Dai, Zezhong Xu, Chengming Wang, Xiaoyu Wang, Qiang Chen, Huajun Chen
In addition to formulating the new task, we also release a new Benchmark dataset of Salience Evaluation in E-commerce (BSEE) and hope to promote related research on commonsense knowledge salience evaluation.
1 code implementation • 10 May 2022 • Mingyang Chen, Wen Zhang, Zhen Yao, Xiangnan Chen, Mengxiao Ding, Fei Huang, Huajun Chen
We study the knowledge extrapolation problem to embed new components (i.e., entities and relations) that come with emerging knowledge graphs (KGs) in the federated setting.
1 code implementation • 7 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
To deal with these issues, we propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction, aiming to achieve more effective and robust performance.
1 code implementation • 4 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen
Since most MKGs are far from complete, extensive knowledge graph completion studies have been proposed focusing on the multimodal entity, relation extraction and link prediction.
1 code implementation • 4 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Note that the previous parametric learning paradigm can be viewed as memorization, regarding training data as a book and inference as a closed-book test.
1 code implementation • 9 Apr 2022 • Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios.
1 code implementation • Findings (ACL) 2022 • Lingbing Guo, Yuqiang Han, Qiang Zhang, Huajun Chen
Embedding-based methods have attracted increasing attention in recent entity alignment (EA) studies.
no code implementations • 2 Mar 2022 • Wen Zhang, Chi-Man Wong, Ganqiang Ye, Bo Wen, Hongting Zhou, Wei zhang, Huajun Chen
On the one hand, it could provide item knowledge services in a uniform way with service vectors for embedding-based and item-knowledge-related task models without accessing triple data.
1 code implementation • 25 Feb 2022 • Wen Zhang, Xiangnan Chen, Zhen Yao, Mingyang Chen, Yushan Zhu, Hongtao Yu, Yufeng Huang, Zezhong Xu, Yajing Xu, Ningyu Zhang, Zonggang Yuan, Feiyu Xiong, Huajun Chen
NeuralKG is an open-source Python-based library for diverse representation learning of knowledge graphs.
no code implementations • 18 Feb 2022 • Lingbing Guo, Qiang Zhang, Huajun Chen
Our experiments demonstrate DET has achieved superior performance compared to the respective state-of-the-art methods in dealing with molecules, networks and knowledge graphs with various sizes.
no code implementations • 15 Feb 2022 • Wen Zhang, Jiaoyan Chen, Juan Li, Zezhong Xu, Jeff Z. Pan, Huajun Chen
Knowledge graph (KG) reasoning is becoming increasingly popular in both academia and industry.
no code implementations • 7 Feb 2022 • Qiang Zhang, Zeyuan Wang, Yuqiang Han, Haoran Yu, Xurui Jin, Huajun Chen
To incorporate conformational knowledge into PTPMs, we propose an interaction-conformation prompt (IC prompt) that is learned through back-propagation with the protein-protein interaction task.
1 code implementation • 4 Feb 2022 • Xin Xie, Ningyu Zhang, Zhoubo Li, Shumin Deng, Hui Chen, Feiyu Xiong, Mosha Chen, Huajun Chen
Knowledge graph completion aims to address the problem of extending a KG with missing triples.
Ranked #49 on Link Prediction on FB15k-237
no code implementations • 27 Jan 2022 • Hongbin Ye, Ningyu Zhang, Shumin Deng, Xiang Chen, Hui Chen, Feiyu Xiong, Xi Chen, Huajun Chen
Specifically, we develop the ontology transformation based on the external knowledge graph to address the knowledge missing issue, which fulfills and converts structure knowledge to text.
1 code implementation • ICLR 2022 • Ningyu Zhang, Zhen Bi, Xiaozhuan Liang, Siyuan Cheng, Haosen Hong, Shumin Deng, Jiazhang Lian, Qiang Zhang, Huajun Chen
We construct a novel large-scale knowledge graph that consists of GO and its related proteins, in which all nodes are described by gene annotation texts or protein sequences.
1 code implementation • 15 Jan 2022 • Yunzhi Yao, Shaohan Huang, Li Dong, Furu Wei, Huajun Chen, Ningyu Zhang
In this work, we propose a simple model, Kformer, which takes advantage of the knowledge stored in PTMs and external knowledge via knowledge injection in Transformer FFN layers.
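The FFN-injection idea above can be sketched as follows. This is a minimal NumPy sketch under my own assumptions, not Kformer's actual implementation: retrieved knowledge embeddings are treated as extra rows of the FFN's two linear layers, so the hidden state scores knowledge "keys" alongside the ordinary FFN neurons and mixes the corresponding "values" into the output; the real model's retrieval step, projections, and dimensions differ, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_know = 8, 16, 4  # hypothetical sizes

# Standard FFN weights.
W1 = rng.normal(size=(d_model, d_ff))
W2 = rng.normal(size=(d_ff, d_model))

# Retrieved external-knowledge embeddings, acting as extra FFN neurons.
K1 = rng.normal(size=(n_know, d_model))  # "keys": scored against the hidden state
K2 = rng.normal(size=(n_know, d_model))  # "values": mixed into the output

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def kformer_ffn(h):
    a = gelu(h @ W1)        # (d_ff,)   original FFN path
    k = gelu(h @ K1.T)      # (n_know,) knowledge path: keys as extra neurons
    return a @ W2 + k @ K2  # (d_model,) fused output

h = rng.normal(size=(d_model,))
out = kformer_ffn(h)
```

The appeal of this design is that injection reuses the FFN's existing key-value structure, so no new attention modules are needed.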
1 code implementation • 10 Jan 2022 • Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei LI, Xiaozhuan Liang, Yunzhi Yao, Shumin Deng, Peng Wang, Wen Zhang, Zhenru Zhang, Chuanqi Tan, Qiang Chen, Feiyu Xiong, Fei Huang, Guozhou Zheng, Huajun Chen
We present an open-source and extensible knowledge extraction toolkit DeepKE, supporting complicated low-resource, document-level and multimodal scenarios in the knowledge base population.
Attribute Extraction • Cross-Domain Named Entity Recognition (+4 more tasks)
no code implementations • 18 Dec 2021 • Jiaoyan Chen, Yuxia Geng, Zhuo Chen, Jeff Z. Pan, Yuan He, Wen Zhang, Ian Horrocks, Huajun Chen
Machine learning, especially deep neural networks, has achieved great success, but many such models rely on a number of labeled samples for supervision.
no code implementations • 16 Dec 2021 • Wen Zhang, Shumin Deng, Mingyang Chen, Liang Wang, Qiang Chen, Feiyu Xiong, Xiangwen Liu, Huajun Chen
We first identify three important desiderata for e-commerce KG systems: 1) attentive reasoning, reasoning over a few target relations of concern instead of all; 2) explanation, providing explanations for a prediction to help both users and business operators understand why the prediction is made; 3) transferable rules, generating reusable rules to accelerate the deployment of a KG to new systems.
no code implementations • 8 Dec 2021 • Ganqiang Ye, Wen Zhang, Zhen Bi, Chi Man Wong, Chen Hui, Huajun Chen
Representation learning models for Knowledge Graphs (KGs) have proven to be effective in encoding structural information and performing reasoning over KGs.
no code implementations • 2 Dec 2021 • Shumin Deng, Jiacheng Yang, Hongbin Ye, Chuanqi Tan, Mosha Chen, Songfang Huang, Fei Huang, Huajun Chen, Ningyu Zhang
Previous works leverage logical forms to facilitate logical knowledge-conditioned text generation.
1 code implementation • 1 Dec 2021 • Yin Fang, Qiang Zhang, Haihong Yang, Xiang Zhuang, Shumin Deng, Wen Zhang, Ming Qin, Zhuo Chen, Xiaohui Fan, Huajun Chen
To address these issues, we construct a Chemical Element Knowledge Graph (KG) to summarize microscopic associations between elements and propose a novel Knowledge-enhanced Contrastive Learning (KCL) framework for molecular representation learning.
1 code implementation • 27 Oct 2021 • Mingyang Chen, Wen Zhang, Yushan Zhu, Hongting Zhou, Zonggang Yuan, Changliang Xu, Huajun Chen
In this paper, to achieve inductive knowledge graph embedding, we propose a model MorsE, which does not learn embeddings for entities but learns transferable meta-knowledge that can be used to produce entity embeddings.
no code implementations • 21 Oct 2021 • Lingbing Guo, Zequn Sun, Mingyang Chen, Wei Hu, Qiang Zhang, Huajun Chen
Embedding-based entity alignment (EEA) has recently received great attention.
no code implementations • 1 Oct 2021 • Hongbin Ye, Ningyu Zhang, Zhen Bi, Shumin Deng, Chuanqi Tan, Hui Chen, Fei Huang, Huajun Chen
Event argument extraction (EAE) is an important task for information extraction to discover specific argument roles.
no code implementations • 29 Sep 2021 • Wen Zhang, Mingyang Chen, Zezhong Xu, Yushan Zhu, Huajun Chen
KGExplainer is a multi-hop reasoner learning latent rules for link prediction and is encouraged to behave similarly to KGEs during prediction through knowledge distillation.
1 code implementation • COLING 2022 • Xiang Chen, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, Ningyu Zhang
Most NER methods rely on extensive labeled data for model training, which struggles in the low-resource scenarios with limited training data.
4 code implementations • ICLR 2022 • Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen
Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.
Ranked #1 on Few-Shot Learning on CR
1 code implementation • 20 Aug 2021 • Yushan Zhu, Huaixiao Tou, Wen Zhang, Ganqiang Ye, Hui Chen, Ningyu Zhang, Huajun Chen
In this paper, we address multi-modal pretraining of product data in the field of E-commerce.
2 code implementations • 12 Jul 2021 • Zhuo Chen, Jiaoyan Chen, Yuxia Geng, Jeff Z. Pan, Zonggang Yuan, Huajun Chen
Incorporating external knowledge to Visual Question Answering (VQA) has become a vital practical need.
Ranked #1 on Visual Question Answering (VQA) on F-VQA
1 code implementation • 29 Jun 2021 • Yuxia Geng, Jiaoyan Chen, Xiang Zhuang, Zhuo Chen, Jeff Z. Pan, Juan Li, Zonggang Yuan, Huajun Chen
different ZSL methods.
2 code implementations • 7 Jun 2021 • Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, Huajun Chen
Specifically, we leverage an encoder module to capture the context information of entities and a U-shaped segmentation module over the image-style feature map to capture global interdependency among triples.
Ranked #4 on Relation Extraction on ReDocRED
1 code implementation • 3 Jun 2021 • Ningyu Zhang, Qianghuai Jia, Shumin Deng, Xiang Chen, Hongbin Ye, Hui Chen, Huaixiao Tou, Gang Huang, Zhao Wang, Nengwei Hua, Huajun Chen
Conceptual graphs, a particular type of knowledge graph, play an essential role in semantic search.
1 code implementation • ACL 2021 • Shumin Deng, Ningyu Zhang, Luoqiu Li, Hui Chen, Huaixiao Tou, Mosha Chen, Fei Huang, Huajun Chen
Most current methods for event detection (ED) rely heavily on training instances and almost ignore the correlation of event types.
1 code implementation • ACL 2021 • Dongfang Lou, Zhilin Liao, Shumin Deng, Ningyu Zhang, Huajun Chen
We consider the problem of collectively detecting multiple events, particularly in cross-sentence settings.
no code implementations • 2 May 2021 • Wen Zhang, Chi-Man Wong, Ganqiang Ye, Bo Wen, Wei zhang, Huajun Chen
We built a billion-scale e-commerce product knowledge graph as a backbone for online shopping platforms, supporting various item knowledge services such as item recommendation.
no code implementations • 30 Apr 2021 • Chi-Man Wong, Fan Feng, Wen Zhang, Chi-Man Vong, Hui Chen, Yichi Zhang, Peng He, Huan Chen, Kun Zhao, Huajun Chen
We first construct a billion-scale conversation knowledge graph (CKG) from information about users, items and conversations, and then pretrain the CKG by introducing a knowledge graph embedding method and a graph convolution network to encode semantic and structural information, respectively. To make the CTR prediction model aware of the current state of users and the relationship between dialogues and items, we introduce user-state and dialogue-interaction representations based on the pre-trained CKG and propose K-DCN. In K-DCN, we fuse the user-state representation, dialogue-interaction representation and other normal feature representations via a deep cross network, which produces the ranking of candidate items to be recommended. We experimentally show that our proposal significantly outperforms baselines and demonstrate its real application in AliMe.
no code implementations • 20 Apr 2021 • Zhen Bi, Ningyu Zhang, Ganqiang Ye, Haiyang Yu, Xi Chen, Huajun Chen
Recent neural-based aspect-based sentiment analysis approaches, though achieving promising improvement on benchmark datasets, have reported suffering from poor robustness when encountering confounder such as non-target aspects.
Aspect-Based Sentiment Analysis (ABSA) (+1 more task)
1 code implementation • 15 Apr 2021 • Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).
Ranked #5 on Dialog Relation Extraction on DialogRE (F1 (v1) metric)
1 code implementation • 11 Apr 2021 • Xiang Chen, Xin Xie, Zhen Bi, Hongbin Ye, Shumin Deng, Ningyu Zhang, Huajun Chen
Although the self-supervised pre-training of transformer models has revolutionized natural language processing (NLP) applications and achieved state-of-the-art results on various benchmarks, this process is still vulnerable to small and imperceptible perturbations originating from legitimate inputs.
1 code implementation • 1 Apr 2021 • Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen
Recent neural-based relation extraction approaches, though achieving promising improvement on benchmark datasets, have reported their vulnerability towards adversarial attacks.
no code implementations • 24 Mar 2021 • Yin Fang, Haihong Yang, Xiang Zhuang, Xin Shao, Xiaohui Fan, Huajun Chen
Leveraging domain knowledge including fingerprints and functional groups in molecular representation learning is crucial for chemical property prediction and drug discovery.
1 code implementation • 26 Feb 2021 • Jiaoyan Chen, Yuxia Geng, Zhuo Chen, Ian Horrocks, Jeff Z. Pan, Huajun Chen
Zero-shot learning (ZSL) which aims at predicting classes that have never appeared during the training using external knowledge (a.k.a.
1 code implementation • SEMEVAL 2021 • Xin Xie, Xiangnan Chen, Xiang Chen, Yong Wang, Ningyu Zhang, Shumin Deng, Huajun Chen
This paper presents our systems for the three Subtasks of SemEval Task4: Reading Comprehension of Abstract Meaning (ReCAM).
Ranked #1 on Reading Comprehension on ReCAM (using extra training data)
1 code implementation • 15 Feb 2021 • Yuxia Geng, Jiaoyan Chen, Zhuo Chen, Jeff Z. Pan, Zhiquan Ye, Zonggang Yuan, Yantao Jia, Huajun Chen
The key to implementing ZSL is to leverage the prior knowledge of classes, which builds the semantic relationship between classes and enables the transfer of learned models (e.g., features) from training classes (i.e., seen classes) to unseen classes.
no code implementations • 1 Jan 2021 • Lingbing Guo, Zequn Sun, Mingyang Chen, Wei Hu, Huajun Chen
In this paper, we define a typical paradigm abstracted from the existing methods, and analyze how the representation discrepancy between two potentially-aligned entities is implicitly bounded by a predefined margin in the scoring function for embedding learning.
no code implementations • 1 Jan 2021 • Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Yantao Jia, Zonggang Yuan, Huajun Chen
Although the self-supervised pre-training of transformer models has revolutionized natural language processing (NLP) applications and achieved state-of-the-art results on various benchmarks, this process is still vulnerable to small and imperceptible perturbations originating from legitimate inputs.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Ningyu Zhang, Shumin Deng, Juan Li, Xi Chen, Wei zhang, Huajun Chen
It is desirable to generate answer summaries for online search engines, particularly summaries that can reveal direct answers to questions.
no code implementations • COLING 2020 • Juan Li, Ruoxu Wang, Ningyu Zhang, Wen Zhang, Fan Yang, Huajun Chen
To recognize unseen relations at test time, we explore the problem of zero-shot relation classification.
no code implementations • COLING 2020 • Haiyang Yu, Ningyu Zhang, Shumin Deng, Hongbin Ye, Wei Zhang, Huajun Chen
Current supervised relational triple extraction approaches require huge amounts of labeled data and thus suffer from poor performance in few-shot settings.
2 code implementations • 24 Oct 2020 • Mingyang Chen, Wen Zhang, Zonggang Yuan, Yantao Jia, Huajun Chen
Knowledge graphs (KGs) consisting of triples are always incomplete, so it is important to perform Knowledge Graph Completion (KGC) by predicting missing triples.
1 code implementation • EMNLP 2020 • Ningyu Zhang, Shumin Deng, Zhen Bi, Haiyang Yu, Jiacheng Yang, Mosha Chen, Fei Huang, Wei Zhang, Huajun Chen
We introduce a prototype model and provide an open-source and extensible toolkit called OpenUE for various extraction tasks.
Ranked #3 on Joint Entity and Relation Extraction on WebNLG
1 code implementation • 15 Sep 2020 • Haiyang Yu, Ningyu Zhang, Shumin Deng, Zonggang Yuan, Yantao Jia, Huajun Chen
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase, thereby leading to the deterioration of the tail performance.
1 code implementation • 14 Sep 2020 • Luoqiu Li, Xiang Chen, Hongbin Ye, Zhen Bi, Shumin Deng, Ningyu Zhang, Huajun Chen
Fine-tuning pre-trained models has achieved impressive performance on standard natural language processing benchmarks.
no code implementations • 14 Sep 2020 • Hongbin Ye, Ningyu Zhang, Shumin Deng, Mosha Chen, Chuanqi Tan, Fei Huang, Huajun Chen
In this paper, we revisit the end-to-end triple extraction task for sequence generation.
Ranked #9 on Relation Extraction on WebNLG
no code implementations • 13 Sep 2020 • Yushan Zhu, Wen Zhang, Mingyang Chen, Hui Chen, Xu Cheng, Wei Zhang, Huajun Chen
In DualDE, we propose a soft label evaluation mechanism that adaptively assigns different soft-label and hard-label weights to different triples, and a two-stage distillation approach to improve the student's acceptance of the teacher.
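The per-triple weighting can be sketched as below; the squared-error terms are illustrative stand-ins, and only the idea of adaptively trading off soft (teacher) against hard (gold) supervision per triple comes from the abstract:

```python
def distill_loss(student_score, teacher_score, label, w_soft, w_hard):
    """Per-triple distillation objective: a soft term pulls the student's
    triple score toward the teacher's, a hard term pulls it toward the
    gold label, and (w_soft, w_hard) are assigned per triple."""
    soft = (student_score - teacher_score) ** 2
    hard = (student_score - label) ** 2
    return w_soft * soft + w_hard * hard

# A triple the teacher scores reliably can lean mostly on the soft label:
loss = distill_loss(0.6, teacher_score=0.9, label=1.0, w_soft=0.8, w_hard=0.2)
print(round(loss, 3))  # → 0.104
```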
no code implementations • ACL 2020 • Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, Huajun Chen
In this situation, transferring from seen classes to unseen classes is extremely hard.
1 code implementation • 30 Jun 2020 • Jiaoyan Chen, Freddy Lecue, Yuxia Geng, Jeff Z. Pan, Huajun Chen
Zero-shot learning (ZSL) is a popular research problem that aims at predicting classes that have never appeared in the training stage by utilizing inter-class relationships with some side information.
1 code implementation • 1 May 2020 • Junyou Li, Gong Cheng, Qingxia Liu, Wen Zhang, Evgeny Kharlamov, Kalpa Gunaratna, Huajun Chen
In a large-scale knowledge graph (KG), an entity is often described by a large number of triple-structured facts.
no code implementations • 7 Apr 2020 • Yuxia Geng, Jiaoyan Chen, Zhuo Chen, Zhiquan Ye, Zonggang Yuan, Yantao Jia, Huajun Chen
However, the side information of classes used so far is limited to text descriptions and attribute annotations, which fall short of capturing the full semantics of the classes.
no code implementations • 8 Nov 2019 • Ningyu Zhang, Shumin Deng, Zhanlin Sun, Jiaoyan Chen, Wei Zhang, Huajun Chen
Specifically, the framework takes advantage of a relation discriminator to distinguish between samples from different relations, and helps learn relation-invariant features that transfer better from source relations to target relations.
1 code implementation • 25 Oct 2019 • Shumin Deng, Ningyu Zhang, Jiaojian Kang, Yichi Zhang, Wei Zhang, Huajun Chen
Differing from vanilla prototypical networks, which simply compute event prototypes by averaging and consume each event mention only once, our model is more robust and can distill contextual information from event mentions multiple times thanks to the multi-hop mechanism of DMNs.
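The vanilla averaging baseline that the abstract contrasts against looks like this (toy 2-d embeddings and made-up class names; the multi-hop DMN refinement itself is not reproduced here):

```python
import numpy as np

def prototypes(support):
    """Vanilla prototypical networks: each class prototype is the plain
    average of its support embeddings, so every event mention is
    consumed exactly once."""
    return {cls: np.mean(vecs, axis=0) for cls, vecs in support.items()}

def classify(query, protos):
    """Assign the query to the class with the nearest prototype."""
    return min(protos, key=lambda c: float(np.linalg.norm(query - protos[c])))

support = {
    "attack":   [np.array([1.0, 0.0]), np.array([0.8, 0.2])],
    "transfer": [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
}
protos = prototypes(support)
print(classify(np.array([0.7, 0.3]), protos))  # → attack
```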
1 code implementation • IJCNLP 2019 • Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, Huajun Chen
Link prediction is an important way to complete knowledge graphs (KGs), while embedding-based methods, effective for link prediction in KGs, perform poorly on relations that only have a few associative triples.
no code implementations • 22 Aug 2019 • Shumin Deng, Ningyu Zhang, Zhanlin Sun, Jiaoyan Chen, Huajun Chen
Text classification tends to be difficult when data are deficient or when it is required to adapt to unseen classes.
Ranked #1 on Multi-Domain Sentiment Classification on ARSC
no code implementations • 22 Aug 2019 • Ningyu Zhang, Shumin Deng, Zhanlin Sun, Jiaoyan Chen, Wei Zhang, Huajun Chen
However, human annotation is expensive, human-crafted patterns suffer from semantic drift, and distant supervision samples are usually noisy.
no code implementations • 31 May 2019 • Freddy Lecue, Jiaoyan Chen, Jeff Z. Pan, Huajun Chen
We exploit their semantics to augment transfer learning by dealing with when to transfer with semantic measurements and what to transfer with semantic embeddings.
no code implementations • ICLR 2019 • Haihong Yang, Han Wang, Shuang Guo, Wei Zhang, Huajun Chen
Our model consists of two parts: (i) a novel learning-to-decompose agent that learns a policy to decompose a compound question into simple questions and (ii) three independent simple-question answerers that classify the corresponding relations for each simple question.
no code implementations • 21 Mar 2019 • Wen Zhang, Bibek Paudel, Liang Wang, Jiaoyan Chen, Hai Zhu, Wei Zhang, Abraham Bernstein, Huajun Chen
We also evaluate the efficiency of rule learning and the quality of rules from IterE compared with AMIE+, showing that IterE is capable of generating high-quality rules more efficiently.
no code implementations • 12 Mar 2019 • Wen Zhang, Bibek Paudel, Wei Zhang, Abraham Bernstein, Huajun Chen
Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications.
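A minimal example of such distributed representations is the classic TransE scoring function, shown here as a generic illustration (this paper studies embeddings more broadly, not TransE in particular; the vectors are made up):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility score: a triple (h, r, t) is plausible
    when the relation vector translates the head embedding close to the
    tail embedding, i.e. h + r ≈ t (higher score = more plausible)."""
    return -float(np.linalg.norm(h + r - t))

head = np.array([0.2, 0.1])
rel = np.array([0.3, 0.4])
tail = np.array([0.5, 0.5])        # head + rel lands here
wrong_tail = np.array([-0.9, 0.2])
print(transe_score(head, rel, tail) > transe_score(head, rel, wrong_tail))  # → True
```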
no code implementations • NAACL 2019 • Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, Huajun Chen
Here, the challenge is to learn accurate "few-shot" models for classes existing at the tail of the class distribution, for which little data is available.
no code implementations • 20 Jan 2019 • Yuxia Geng, Jiaoyan Chen, Ernesto Jimenez-Ruiz, Huajun Chen
Transfer learning, which aims at utilizing knowledge learned from one problem (source domain) to solve another different but related problem (target domain), has attracted wide research attention.
1 code implementation • EMNLP 2018 • Ningyu Zhang, Shumin Deng, Zhanlin Sun, Xi Chen, Wei Zhang, Huajun Chen
A capsule is a group of neurons, whose activity vector represents the instantiation parameters of a specific type of entity.
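The activity vector mentioned above is typically produced by the standard capsule squashing nonlinearity; a minimal sketch of that standard formulation, not of this paper's full relation-extraction model:

```python
import numpy as np

def squash(v, eps=1e-9):
    """Capsule squashing nonlinearity: preserves the vector's orientation
    (the instantiation parameters) while mapping its length into [0, 1),
    so the length can act as an existence probability for the entity."""
    norm_sq = float(np.dot(v, v))
    return (norm_sq / (1.0 + norm_sq)) * v / (np.sqrt(norm_sq) + eps)

v = np.array([3.0, 4.0])                           # length 5
print(round(float(np.linalg.norm(squash(v))), 3))  # → 0.962
```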
no code implementations • EMNLP 2018 • Guanying Wang, Wen Zhang, Ruoxu Wang, Yalin Zhou, Xi Chen, Wei Zhang, Hai Zhu, Huajun Chen
This paper proposes a label-free distant supervision method, which makes no use of the relation labels under this inadequate assumption, but only uses the prior knowledge derived from the KG to supervise the learning of the classifier directly and softly.
1 code implementation • 22 Jul 2018 • Jiaoyan Chen, Freddy Lecue, Jeff Z. Pan, Ian Horrocks, Huajun Chen
Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited in human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of a dataset and a prediction task) to enhance prediction model training in another learning domain.
no code implementations • 24 Apr 2017 • Freddy Lecue, Jiaoyan Chen, Jeff Z. Pan, Huajun Chen
Data stream learning has been largely studied for extracting knowledge structures from continuous and rapid data records.