1 code implementation • COLING 2022 • Shirong Shen, Heng Zhou, Tongtong Wu, Guilin Qi
This paper studies event causality identification, which aims at predicting the causality relation for a pair of events in a sentence.
no code implementations • 27 Oct 2023 • Qiu Ji, Guilin Qi, Yuxin Ye, Jiaye Li, Site Li, Jianjie Ren, Songtao Lu
We conduct experiments over 19 ontology pairs and compare our algorithms and scoring functions with existing ones.
1 code implementation • 12 Oct 2023 • Jiaqi Li, Guilin Qi, Chuanyi Zhang, Yongrui Chen, Yiming Tan, Chenlong Xia, Ye Tian
First, we retrieve the relevant embedding from the knowledge graph by utilizing group relations in metadata, and then integrate it with other modalities.
1 code implementation • 20 Sep 2023 • Yike Wu, Nan Hu, Sheng Bi, Guilin Qi, Jie Ren, Anhuan Xie, Wei Song
To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA.
no code implementations • 11 Sep 2023 • Yongrui Chen, Haiyun Jiang, Xinting Huang, Shuming Shi, Guilin Qi
High-quality instruction-tuning data is critical to improving LLM capabilities.
1 code implementation • 4 Apr 2023 • Keyu Wang, Site Li, Jiaye Li, Guilin Qi, Qiu Ji
A natural way to reason with an inconsistent ontology is to utilize the maximal consistent subsets of the ontology.
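As a toy illustration of this idea (a propositional sketch with illustrative names, not the paper's actual algorithm): enumerate the subsets of an inconsistent axiom set, keep those that are consistent and subset-maximal, and then reason over those maximal consistent subsets.

```python
from itertools import combinations

def is_consistent(axioms):
    # Toy check: a set of propositional literals is consistent
    # iff it contains no literal together with its negation ("-x").
    return not any(("-" + a) in axioms for a in axioms if not a.startswith("-"))

def maximal_consistent_subsets(axioms):
    """Brute-force all subset-maximal consistent subsets (exponential; sketch only)."""
    axioms = list(axioms)
    consistent = [frozenset(c)
                  for r in range(len(axioms), -1, -1)
                  for c in combinations(axioms, r)
                  if is_consistent(frozenset(c))]
    # Keep only subsets not strictly contained in another consistent subset.
    return [s for s in consistent if not any(s < t for t in consistent)]

# Inconsistent "ontology": contains both p and its negation -p.
mcs = maximal_consistent_subsets({"p", "-p", "q"})
print(sorted(sorted(s) for s in mcs))  # → [['-p', 'q'], ['p', 'q']]
```

An entailment can then be accepted cautiously (if it holds in every maximal consistent subset) or bravely (if it holds in at least one); real ontology reasoners replace the toy consistency check with a description-logic satisfiability test.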
1 code implementation • 18 Mar 2023 • Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z. Pan, Zafar Ali
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP).
2 code implementations • 14 Mar 2023 • Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi
ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.
Ranked #1 on Question Answering on GraphQuestions
1 code implementation • 21 Nov 2022 • Yongrui Chen, Xinnan Guo, Tongtong Wu, Guilin Qi, Yang Li, Yang Dong
The first solution, Vanilla, performs self-training, augmenting the supervised training data with pseudo-labeled instances predicted for the current task, while replacing full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks.
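As a rough sketch (function names and data shapes are illustrative, not the paper's code), one continual-learning step combining self-training with episodic memory replay looks like:

```python
def train_task(model_update, predict, labeled, unlabeled, memory, memory_size=2):
    """One continual-learning step (illustrative sketch):
    1. pseudo-label unlabeled instances with the current model,
    2. train on labeled + pseudo-labeled data plus replayed memory
       (instead of retraining on all previous tasks' data),
    3. store a few labeled examples in episodic memory for later replay."""
    pseudo = [(x, predict(x)) for x in unlabeled]          # self-training
    model_update(labeled + pseudo + list(memory))          # replay, not full retraining
    return list(memory) + labeled[:memory_size]

# Toy usage: the "model" just counts label frequencies,
# and predict returns the most frequent label seen so far.
counts = {}
def model_update(batch):
    for _, y in batch:
        counts[y] = counts.get(y, 0) + 1
def predict(x):
    return max(counts, key=counts.get) if counts else "O"

memory = []
memory = train_task(model_update, predict, [(1, "A"), (2, "A")], [3, 4], memory)
memory = train_task(model_update, predict, [(5, "B")], [6], memory)
print(memory)  # → [(1, 'A'), (2, 'A'), (5, 'B')]
```

The memory here is a plain first-k sample; the actual method selects and schedules replayed instances far more carefully.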
1 code implementation • 17 Oct 2022 • Tongtong Wu, Guitao Wang, Jinming Zhao, Zhaoran Liu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
We explore speech relation extraction via two approaches: a pipeline approach that conducts text-based extraction with a pretrained ASR module, and an end-to-end approach via a newly proposed encoder-decoder model, which we call SpeechRE.
no code implementations • 12 Mar 2022 • Kang Xu, Xiaoqiu Lu, Yuan-Fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou
NTM-DMIE is a neural network method for topic learning which maximizes the mutual information between the input documents and their latent topic representation.
1 code implementation • 14 Feb 2022 • Rui Wu, Zhaopeng Qiu, Jiacheng Jiang, Guilin Qi, Xian Wu
Medication recommendation aims to provide a proper set of medicines according to patients' diagnoses, which is a critical task in clinics.
no code implementations • 26 Nov 2021 • Shirong Shen, Zhen Li, Guilin Qi
During the selection process, we use an internal-external sample loss ranking method to evaluate the sample importance by using local information.
2 code implementations • 1 Nov 2021 • Yongrui Chen, Huiying Li, Guilin Qi, Tianxing Wu, Tenggou Wang
The high-level decoding generates an AQG as a constraint to prune the search space and reduce the locally ambiguous query graph.
no code implementations • Findings (EMNLP) 2021 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Lizhen Qu, Shirong Shen, Guilin Qi, Lu Pan, Yinlin Jiang
The ability to generate natural-language questions with controlled complexity levels is highly desirable as it further expands the applicability of question generation.
no code implementations • ICLR 2022 • Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, Gholamreza Haffari
In this paper, we thoroughly compare the continual learning performance over the combination of 5 PLMs and 4 veins of CL methods on 3 benchmarks in 2 typical incremental settings.
1 code implementation • 12 Sep 2021 • Yongrui Chen, Xinnan Guo, Chaojie Wang, Jian Qiu, Guilin Qi, Meng Wang, Huiying Li
Our approach remains competitive even compared to the larger pre-trained model and the tabular-specific pre-trained model.
1 code implementation • 8 Sep 2021 • Yongrui Chen, Huiying Li, Yuncheng Hua, Guilin Qi
However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries.
no code implementations • Findings (ACL) 2021 • Shirong Shen, Tongtong Wu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari, Sheng Bi
Event detection (ED) aims at detecting event trigger words in sentences and classifying them into specific event types.
no code implementations • 3 Mar 2021 • Waheed Ahmed Abro, Annalena Aicher, Niklas Rach, Stefan Ultes, Wolfgang Minker, Guilin Qi
The intent classifier model stacks a BiLSTM with an attention mechanism on top of the pre-trained BERT model and fine-tunes it for recognizing user intent, whereas the argument similarity model employs BERT+BiLSTM to identify the system arguments the user refers to in his or her natural-language utterances.
2 code implementations • 6 Jan 2021 • Tongtong Wu, Xuekai Li, Yuan-Fang Li, Reza Haffari, Guilin Qi, Yujin Zhu, Guoqiang Xu
We propose a novel curriculum-meta learning method to tackle the above two challenges in continual relation extraction.
no code implementations • COLING 2020 • Shirong Shen, Guilin Qi, Zhen Li, Sheng Bi, Lusheng Wang
We label a Chinese legal event dataset and evaluate our model on it.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Wei Wu
However, this comes at the cost of manually labeling similar questions to learn a retrieval model, which is tedious and expensive.
1 code implementation • EMNLP 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Tongtong Wu
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, Daiqing Qi
Our framework consists of a neural generator and a symbolic executor that, respectively, transforms a natural-language question into a sequence of primitive actions, and executes them over the knowledge base to compute the answer.
no code implementations • 7 Oct 2020 • Xiya Cheng, Sheng Bi, Guilin Qi, Yongzhen Wang
In this paper, we propose a knowledge-attentive neural network model, which introduces legal schematic knowledge about charges and exploits the hierarchical representation of this knowledge as discriminative features to differentiate confusing charges.
no code implementations • COLING 2020 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Yongzhen Wang, Guilin Qi
Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of (connected) triples.
1 code implementation • 13 Sep 2020 • Xinyue Zhang, Meng Wang, Muhammad Saleem, Axel-Cyrille Ngonga Ngomo, Guilin Qi, Haofen Wang
Based on Semantic Web technologies, knowledge graphs help users to discover information of interest by using live SPARQL services.
no code implementations • 9 Mar 2020 • Xianpei Han, Zhichun Wang, Jiangtao Zhang, Qinghua Wen, Wenqi Li, Buzhou Tang, Qi Wang, Zhifan Feng, Yang Zhang, Yajuan Lu, Haitao Wang, Wenliang Chen, Hao Shao, Yubo Chen, Kang Liu, Jun Zhao, Taifeng Wang, Kezun Zhang, Meng Wang, Yinlin Jiang, Guilin Qi, Lei Zou, Sen Hu, Minhao Zhang, Yinnian Lin
Knowledge graph models world knowledge as concepts, entities, and the relationships between them, which has been widely used in many real-world tasks.
no code implementations • 15 Oct 2019 • Tianxing Wu, Arijit Khan, Melvin Yong, Guilin Qi, Meng Wang
Knowledge graph (KG) embedding encodes the entities and relations from a KG into low-dimensional vector spaces to support various applications such as KG completion, question answering, and recommender systems.
no code implementations • 9 Jan 2013 • Xiaowang Zhang, Kewen Wang, Zhe Wang, Yue Ma, Guilin Qi
DL-Lite is an important family of description logics.