1 code implementation • COLING 2022 • Shirong Shen, Heng Zhou, Tongtong Wu, Guilin Qi
This paper studies event causality identification, which aims at predicting the causality relation for a pair of events in a sentence.
no code implementations • 2 Nov 2024 • Jianyu Liu, Sheng Bi, Guilin Qi
Open rules refer to implications from premise atoms to hypothesis atoms, which capture various relations between instances in the real world.
1 code implementation • 26 Oct 2024 • Dehai Min, Zhiyang Xu, Guilin Qi, Lifu Huang, Chenyu You
In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge retriever that (1) builds a unified retrieval space for heterogeneous knowledge and (2) follows diverse user instructions to retrieve knowledge of specified types.
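The idea of a unified, instruction-aware retrieval space can be sketched with a toy example. This is not the UniHGKR model itself: the bag-of-words embedding, the `retrieve` function, and the type-keyword instruction parsing below are all illustrative assumptions standing in for a trained encoder.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector; a real retriever would use a trained encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, instruction, corpus, k=1):
    """Rank heterogeneous items in one shared space; the instruction
    restricts which knowledge types are eligible."""
    allowed = {t for t in ("text", "table", "kg") if t in instruction}
    q = embed(instruction + " " + query)
    hits = [(cosine(q, embed(doc)), typ, doc)
            for typ, doc in corpus if not allowed or typ in allowed]
    return sorted(hits, reverse=True)[:k]

corpus = [("text", "Paris is the capital of France"),
          ("table", "country France capital Paris population 67M"),
          ("kg", "France capitalOf Paris")]
top = retrieve("capital of France", "retrieve from table sources", corpus)
```

Here text, tables, and KG triples all live in one embedding space, and the instruction "retrieve from table sources" filters the candidates to table-typed knowledge before ranking.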
1 code implementation • 29 Sep 2024 • Yike Wu, Yi Huang, Nan Hu, Yuncheng Hua, Guilin Qi, Jiaoyan Chen, Jeff Z. Pan
Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA).
no code implementations • 23 Jul 2024 • Tingting Wang, Guilin Qi
The complex dependencies and propagative faults inherent in microservices, characterized by a dense network of interconnected services, pose significant challenges in identifying the underlying causes of issues.
no code implementations • 25 Jun 2024 • Keyu Wang, Guilin Qi, Jiaqi Li, Songlin Zhai
With extensive experiments, we demonstrate both the effectiveness and limitations of LLMs in understanding DL-Lite ontologies.
no code implementations • 21 May 2024 • Jiaqi Li, Qianshan Wei, Chuanyi Zhang, Guilin Qi, Miaozeng Du, Yongrui Chen, Sheng Bi
Alongside our method, we establish MMUBench, a new benchmark for MU in MLLMs and introduce a collection of metrics for its evaluation.
1 code implementation • 20 Apr 2024 • Jingqi Kang, Tongtong Wu, Jinming Zhao, Guitao Wang, Yinwei Wei, Hao Yang, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.'
no code implementations • 28 Mar 2024 • Rihui Jin, Yu Li, Guilin Qi, Nan Hu, Yuan-Fang Li, Jiaoyan Chen, Jianan Wang, Yongrui Chen, Dehai Min, Sheng Bi
Table understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HGT, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) to tackle few-shot TU tasks. It leverages the LLM by aligning the table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and deals with complex tables by a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HGT, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.
1 code implementation • 28 Mar 2024 • Yu Li, Shenyu Zhang, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi, Dehai Min
Experimental results show that our framework outperforms existing open-ended text evaluation methods and achieves the highest correlation with human evaluation, confirming its effectiveness in addressing the uncertainties and instabilities of evaluating LLM-generated text.
no code implementations • 18 Mar 2024 • Shenyu Zhang, Yu Li, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi
Automatic methods for evaluating machine-generated texts hold significant importance due to the expanding applications of generative systems.
no code implementations • 20 Feb 2024 • Dehai Min, Nan Hu, Rihui Jin, Nuo Lin, Jiaoyan Chen, Yongrui Chen, Yu Li, Guilin Qi, Yun Li, Nijun Li, Qianren Wang
Table-to-Text Generation is a promising solution by facilitating the transformation of hybrid data into a uniformly text-formatted corpus.
no code implementations • 18 Feb 2024 • Jiaqi Li, Miaozeng Du, Chuanyi Zhang, Yongrui Chen, Nan Hu, Guilin Qi, Haiyun Jiang, Siyuan Cheng, Bozhong Tian
Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal Large Language Models (MLLMs).
no code implementations • 18 Feb 2024 • Xinbang Dai, Yuncheng Hua, Tongtong Wu, Yang Sheng, Qiu Ji, Guilin Qi
To elucidate this, we design a series of experiments to explore LLMs' understanding of different KG input formats within the context of prompt engineering.
no code implementations • 18 Feb 2024 • Xinbang Dai, Huiying Li, Guilin Qi
While the research community has focused on Knowledge Graph Question Answering (KGQA), answering questions that incorporate spatio-temporal information based on STKGs remains largely unexplored.
no code implementations • 11 Feb 2024 • Tingting Wang, Guilin Qi, Tianxing Wu
To achieve this, KGroot uses event knowledge and the correlation between events to perform root cause reasoning by integrating knowledge graphs and GCNs for RCA.
no code implementations • 7 Feb 2024 • Amin Ullah, Guilin Qi, Saddam Hussain, Irfan Ullah, Zafar Ali
Smart cities stand as pivotal components in the ongoing pursuit of elevating urban living standards, facilitating the rapid expansion of urban areas while efficiently managing resources through sustainable and scalable innovations.
1 code implementation • 27 Jan 2024 • Jingqi Kang, Tongtong Wu, Jinming Zhao, Guitao Wang, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
While text-based event extraction has been an active research area and has seen successful application in many domains, extracting semantic events from speech directly is an under-explored problem.
no code implementations • 26 Jan 2024 • Nan Hu, Jiaoyan Chen, Yike Wu, Guilin Qi, Sheng Bi, Tongtong Wu, Jeff Z. Pan
Attribution in question answering provides citations to support generated statements, and has attracted wide research attention.
1 code implementation • 20 Jan 2024 • Keyu Wang, Guilin Qi, Jiaoyan Chen, Yi Huang, Tianxing Wu
Extensional knowledge provides information about the concrete instances that belong to specific concepts in the ontology, while intensional knowledge details inherent properties, characteristics, and semantic associations among concepts.
no code implementations • 27 Oct 2023 • Qiu Ji, Guilin Qi, Yuxin Ye, Jiaye Li, Site Li, Jianjie Ren, Songtao Lu
We conduct experiments over 19 ontology pairs and compare our algorithms and scoring functions with existing ones.
1 code implementation • 12 Oct 2023 • Jiaqi Li, Guilin Qi, Chuanyi Zhang, Yongrui Chen, Yiming Tan, Chenlong Xia, Ye Tian
First, we retrieve the relevant embeddings from the knowledge graph by utilizing group relations in metadata, and then integrate them with the other modalities.
1 code implementation • 20 Sep 2023 • Yike Wu, Nan Hu, Sheng Bi, Guilin Qi, Jie Ren, Anhuan Xie, Wei Song
To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA.
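The notion of answer-sensitive verbalization can be illustrated with a minimal sketch. The template and ranking heuristic below are assumptions for illustration, not the paper's trained KG-to-Text model: triples mentioning the answer entity are simply verbalized first.

```python
def verbalize(triples, answer=None):
    """Turn (subject, relation, object) triples into simple sentences,
    putting triples that mention the answer entity first (answer-sensitive)."""
    def sentence(s, r, o):
        return f"{s} {r} {o}."

    ranked = sorted(triples,
                    key=lambda t: 0 if answer and answer in (t[0], t[2]) else 1)
    return " ".join(sentence(*t) for t in ranked)

triples = [("Paris", "located in", "France"),
           ("Eiffel Tower", "located in", "Paris")]
text = verbalize(triples, answer="Eiffel Tower")
```

With `answer="Eiffel Tower"`, the triple containing the answer entity leads the textualized statement, so the most informative evidence appears first for the downstream KGQA reader.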
1 code implementation • 11 Sep 2023 • Yongrui Chen, Haiyun Jiang, Xinting Huang, Shuming Shi, Guilin Qi
In particular, compared to the best-performing baseline, the LLM trained using our generated dataset exhibits a 10% relative improvement in performance on AlpacaEval, despite utilizing only 1/5 of its training data.
1 code implementation • 4 Apr 2023 • Keyu Wang, Site Li, Jiaye Li, Guilin Qi, Qiu Ji
A natural way to reason with an inconsistent ontology is to utilize the maximal consistent subsets of the ontology.
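A brute-force sketch of computing maximal consistent subsets, assuming a pluggable consistency checker. The toy propositional checker below is an assumption for illustration; real ontology reasoning would call a DL reasoner, and this exhaustive enumeration is only feasible for tiny axiom sets.

```python
from itertools import combinations

def maximal_consistent_subsets(axioms, is_consistent):
    """Enumerate subset-maximal consistent subsets of an axiom set.
    Iterates from largest to smallest so supersets are found first."""
    found = []
    for size in range(len(axioms), 0, -1):
        for subset in combinations(axioms, size):
            s = set(subset)
            # Keep s only if consistent and not covered by a larger subset.
            if is_consistent(s) and not any(s < m for m in found):
                found.append(s)
    return found

# Toy "ontology": a set is inconsistent if it asserts both p and not_p.
def is_consistent(s):
    return not ({"p", "not_p"} <= s)

mcs = maximal_consistent_subsets({"p", "not_p", "q"}, is_consistent)
```

For the axioms {p, not_p, q}, the two maximal consistent subsets are {p, q} and {not_p, q}; reasoning over all of them (e.g., by intersection of entailments) is one standard way to draw meaningful conclusions from an inconsistent ontology.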
no code implementations • 18 Mar 2023 • Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z. Pan, Zafar Ali
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP).
2 code implementations • 14 Mar 2023 • Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi
ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.
Ranked #1 on Question Answering on WebQuestionsSP
1 code implementation • 21 Nov 2022 • Yongrui Chen, Xinnan Guo, Tongtong Wu, Guilin Qi, Yang Li, Yang Dong
The first solution, Vanilla, performs self-training, augmenting the supervised training data with predicted pseudo-labeled instances of the current task, and replaces full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks.
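The combination of self-training and episodic memory replay can be sketched as one training step. Everything here is a schematic assumption (the toy model, the confidence threshold, the replay size), not the paper's actual training procedure.

```python
import random

def self_train_step(model_predict, labeled, unlabeled, memory,
                    threshold=0.9, replay_size=2, seed=0):
    """One continual-learning step: augment current-task data with confident
    pseudo-labels, and replay a few stored examples instead of retraining
    on the full history."""
    rng = random.Random(seed)
    # Self-training: keep only confident pseudo-labeled instances.
    pseudo = [(x, label) for x in unlabeled
              for label, conf in [model_predict(x)] if conf >= threshold]
    # Episodic memory replay: sample a small batch from previous tasks.
    replay = rng.sample(memory, min(replay_size, len(memory)))
    batch = labeled + pseudo + replay
    memory.extend(labeled)  # store current-task data for future replay
    return batch

def toy_predict(x):
    # Hypothetical model: confident "even"/"odd" labels for integers.
    return ("even" if x % 2 == 0 else "odd", 0.95)

memory = [(100, "even")]
batch = self_train_step(toy_predict, [(1, "odd")], [2, 3], memory)
```

The resulting batch mixes supervised data, pseudo-labels, and replayed memory, which is cheaper than retraining on every previous task while still counteracting forgetting.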
1 code implementation • 17 Oct 2022 • Tongtong Wu, Guitao Wang, Jinming Zhao, Zhaoran Liu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
We explore speech relation extraction via two approaches: the pipeline approach, conducting text-based extraction with a pretrained ASR module, and the end-to-end approach, via a newly proposed encoder-decoder model, which we call SpeechRE.
no code implementations • 12 Mar 2022 • Kang Xu, Xiaoqiu Lu, Yuan-Fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou
NTM-DMIE is a neural network method for topic learning which maximizes the mutual information between the input documents and their latent topic representation.
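The mutual-information objective behind such methods can be illustrated on discrete toy data. This computes the textbook quantity I(X;Y) from empirical counts; it is an illustration of the objective being maximized, not the neural estimator NTM-DMIE actually trains.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    estimated from empirical counts over (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Documents perfectly aligned with their topics give maximal MI (log 2 here).
aligned = [("doc_sports", "t1"), ("doc_politics", "t2")] * 4
mi = mutual_information(aligned)
```

When each document deterministically maps to one topic, the MI equals the entropy of the topic variable (log 2 for two equiprobable topics); a topic model maximizing this quantity is pushed toward representations that preserve document identity.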
1 code implementation • 14 Feb 2022 • Rui Wu, Zhaopeng Qiu, Jiacheng Jiang, Guilin Qi, Xian Wu
Medication recommendation aims to provide a proper set of medicines according to patients' diagnoses, which is a critical task in clinics.
no code implementations • 26 Nov 2021 • Shirong Shen, Zhen Li, Guilin Qi
During the selection process, we use an internal-external sample loss ranking method to evaluate the sample importance by using local information.
2 code implementations • 1 Nov 2021 • Yongrui Chen, Huiying Li, Guilin Qi, Tianxing Wu, Tenggou Wang
The high-level decoding generates an AQG as a constraint to prune the search space and reduce the locally ambiguous query graph.
Ranked #1 on Knowledge Base Question Answering on LC-QuAD 1.0
no code implementations • Findings (EMNLP) 2021 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Lizhen Qu, Shirong Shen, Guilin Qi, Lu Pan, Yinlin Jiang
The ability to generate natural-language questions with controlled complexity levels is highly desirable as it further expands the applicability of question generation.
no code implementations • ICLR 2022 • Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, Gholamreza Haffari
In this paper, we thoroughly compare continual learning performance over combinations of 5 PLMs and 4 categories of CL methods on 3 benchmarks in 2 typical incremental settings.
1 code implementation • 12 Sep 2021 • Yongrui Chen, Xinnan Guo, Chaojie Wang, Jian Qiu, Guilin Qi, Meng Wang, Huiying Li
Compared to the larger pre-trained model and the tabular-specific pre-trained model, our approach is still competitive.
1 code implementation • 8 Sep 2021 • Yongrui Chen, Huiying Li, Yuncheng Hua, Guilin Qi
However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries.
Ranked #2 on Knowledge Base Question Answering on LC-QuAD 1.0
no code implementations • Findings (ACL) 2021 • Shirong Shen, Tongtong Wu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari, Sheng Bi
Event detection (ED) aims at detecting event trigger words in sentences and classifying them into specific event types.
no code implementations • 3 Mar 2021 • Waheed Ahmed Abro, Annalena Aicher, Niklas Rach, Stefan Ultes, Wolfgang Minker, Guilin Qi
The intent classifier stacks a BiLSTM with an attention mechanism on top of the pre-trained BERT model and fine-tunes it to recognize user intent, whereas the argument similarity model employs BERT+BiLSTM to identify the system arguments the user refers to in their natural-language utterances.
2 code implementations • 6 Jan 2021 • Tongtong Wu, Xuekai Li, Yuan-Fang Li, Reza Haffari, Guilin Qi, Yujin Zhu, Guoqiang Xu
We propose a novel curriculum-meta learning method to tackle the above two challenges in continual relation extraction.
no code implementations • COLING 2020 • Shirong Shen, Guilin Qi, Zhen Li, Sheng Bi, Lusheng Wang
We label a Chinese legal event dataset and evaluate our model on it.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, Daiqing Qi
Our framework consists of a neural generator and a symbolic executor that, respectively, transforms a natural-language question into a sequence of primitive actions, and executes them over the knowledge base to compute the answer.
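The symbolic-executor half of such a framework can be sketched directly. The primitive actions below (`select`, `count`) and the triple-list KB are illustrative assumptions, not the paper's actual action vocabulary; the neural generator that emits the program is elided.

```python
def execute(actions, kb):
    """Run a sequence of primitive actions over a triple store.
    The final value of `result` is the computed answer."""
    result = None
    for op, *args in actions:
        if op == "select":        # select(relation, obj) -> set of subjects
            rel, obj = args
            result = {s for s, r, o in kb if r == rel and o == obj}
        elif op == "count":       # count() -> size of the current set
            result = len(result)
    return result

kb = [("Paris", "capital_of", "France"),
      ("Lyon", "city_in", "France"),
      ("Nice", "city_in", "France")]

# Program for "How many cities are in France?"
program = [("select", "city_in", "France"), ("count",)]
answer = execute(program, kb)
```

In the full framework, the generator would translate the natural-language question into such a program, and the executor's deterministic semantics make the answer verifiable against the knowledge base.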
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Wei Wu
However, this comes at the cost of manually labeling similar questions to learn a retrieval model, which is tedious and expensive.
1 code implementation • EMNLP 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Tongtong Wu
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
no code implementations • COLING 2020 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Yongzhen Wang, Guilin Qi
Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of (connected) triples.
no code implementations • 7 Oct 2020 • Xiya Cheng, Sheng Bi, Guilin Qi, Yongzhen Wang
In this paper, we propose a knowledge-attentive neural network model, which introduces legal schematic knowledge about charges and exploits the hierarchical knowledge representation as discriminative features to differentiate confusing charges.
1 code implementation • 13 Sep 2020 • Xinyue Zhang, Meng Wang, Muhammad Saleem, Axel-Cyrille Ngonga Ngomo, Guilin Qi, Haofen Wang
Based on Semantic Web technologies, knowledge graphs help users to discover information of interest by using live SPARQL services.
no code implementations • 9 Mar 2020 • Xianpei Han, Zhichun Wang, Jiangtao Zhang, Qinghua Wen, Wenqi Li, Buzhou Tang, Qi Wang, Zhifan Feng, Yang Zhang, Yajuan Lu, Haitao Wang, Wenliang Chen, Hao Shao, Yubo Chen, Kang Liu, Jun Zhao, Taifeng Wang, Kezun Zhang, Meng Wang, Yinlin Jiang, Guilin Qi, Lei Zou, Sen Hu, Minhao Zhang, Yinnian Lin
Knowledge graph models world knowledge as concepts, entities, and the relationships between them, which has been widely used in many real-world tasks.
no code implementations • 15 Oct 2019 • Tianxing Wu, Arijit Khan, Melvin Yong, Guilin Qi, Meng Wang
Knowledge graph (KG) embedding encodes the entities and relations from a KG into low-dimensional vector spaces to support various applications such as KG completion, question answering, and recommender systems.
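A standard example of such an encoding is a TransE-style translational score, shown here as a minimal sketch. The hand-picked 2-d vectors are illustrative assumptions; real systems learn embeddings by gradient descent over many triples.

```python
def transe_score(h, r, t):
    """TransE-style plausibility: smaller ||h + r - t|| (Euclidean norm)
    means the triple (h, r, t) is more plausible."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy 2-d embeddings where has_capital translates France exactly onto Paris.
entities = {"France": [1.0, 0.0], "Paris": [1.0, 1.0], "Berlin": [0.0, 1.0]}
relations = {"has_capital": [0.0, 1.0]}

true_score = transe_score(entities["France"], relations["has_capital"],
                          entities["Paris"])
false_score = transe_score(entities["France"], relations["has_capital"],
                           entities["Berlin"])
```

Because the true triple scores lower (better) than the corrupted one, ranking candidate tails by this score supports applications such as KG completion and question answering.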
no code implementations • 9 Jan 2013 • Xiaowang Zhang, Kewen Wang, Zhe Wang, Yue Ma, Guilin Qi
DL-Lite is an important family of description logics.