Search Results for author: Guilin Qi

Found 31 papers, 17 papers with code

Event Causality Identification via Derivative Prompt Joint Learning

1 code implementation · COLING 2022 · Shirong Shen, Heng Zhou, Tongtong Wu, Guilin Qi

This paper studies event causality identification, which aims at predicting the causality relation for a pair of events in a sentence.

Event Causality Identification · Language Modelling

Ontology Revision based on Pre-trained Language Models

no code implementations · 27 Oct 2023 · Qiu Ji, Guilin Qi, Yuxin Ye, Jiaye Li, Site Li, Jianjie Ren, Songtao Lu

We conduct experiments over 19 ontology pairs and compare our algorithms and scoring functions with existing ones.

Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering

1 code implementation · 20 Sep 2023 · Yike Wu, Nan Hu, Sheng Bi, Guilin Qi, Jie Ren, Anhuan Xie, Wei Song

To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA.

Graph Question Answering · Language Modelling · +2
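The KG-to-Text idea behind this framework — verbalizing retrieved triples into statements an LLM can consume as context — can be illustrated with a toy sketch. The templates, function name, and example triples below are invented for illustration; the paper uses a learned, answer-sensitive rewriter rather than fixed templates.

```python
def verbalize_triples(triples):
    # Naive template-based KG-to-Text: turn (subject, relation, object)
    # triples into English statements. A learned rewriter would produce
    # more fluent, answer-sensitive text than these hand-written templates.
    templates = {
        "birth_place": "{s} was born in {o}.",
        "capital_of": "{s} is the capital of {o}.",
    }
    sentences = []
    for s, r, o in triples:
        template = templates.get(r, "{s} {r} {o}.")
        sentences.append(template.format(s=s, r=r.replace("_", " "), o=o))
    return " ".join(sentences)

# Hypothetical retrieval result for the question below.
triples = [("Alan Turing", "birth_place", "London")]
prompt = f"Context: {verbalize_triples(triples)}\nQuestion: Where was Alan Turing born?"
print(prompt)
```

The verbalized statements are then prepended to the question as context for the answering LLM.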

An Embedding-based Approach to Inconsistency-tolerant Reasoning with Inconsistent Ontologies

1 code implementation · 4 Apr 2023 · Keyu Wang, Site Li, Jiaye Li, Guilin Qi, Qiu Ji

A natural way to reason with an inconsistent ontology is to utilize the maximal consistent subsets of the ontology.
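The notion of reasoning with maximal consistent subsets can be sketched abstractly. In the sketch below, `is_consistent` is a stand-in for a real ontology reasoner (here it just checks for contradictory literals), and the brute-force enumeration is for illustration only — it is exponential and not how practical systems work.

```python
from itertools import combinations

def is_consistent(axioms):
    # Stand-in consistency check: axioms are (statement, truth-value) pairs,
    # and a set is inconsistent iff some statement occurs with both values.
    values = {}
    for name, val in axioms:
        if values.setdefault(name, val) != val:
            return False
    return True

def maximal_consistent_subsets(axioms):
    # Brute force: enumerate subsets from largest to smallest, keeping
    # consistent ones not strictly contained in an already-kept subset.
    mcs = []
    for k in range(len(axioms), 0, -1):
        for subset in combinations(axioms, k):
            s = frozenset(subset)
            if is_consistent(s) and not any(s < m for m in mcs):
                mcs.append(s)
    return mcs

# Toy inconsistent "ontology": Tweety both flies and does not fly.
ontology = [("Bird(tweety)", True), ("Flies(tweety)", True), ("Flies(tweety)", False)]
for s in maximal_consistent_subsets(ontology):
    print(sorted(s))
```

A query can then be answered skeptically by checking that it follows from every maximal consistent subset — here, `Bird(tweety)` survives in all of them while the contradictory `Flies` axioms split across subsets.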


An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering

1 code implementation · 18 Mar 2023 · Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z. Pan, Zafar Ali

Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP).

Graph Question Answering · Knowledge Distillation · +1

Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family

2 code implementations · 14 Mar 2023 · Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi

ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.

Language Modelling · Large Language Model · +3

Learn from Yesterday: A Semi-Supervised Continual Learning Method for Supervision-Limited Text-to-SQL Task Streams

1 code implementation · 21 Nov 2022 · Yongrui Chen, Xinnan Guo, Tongtong Wu, Guilin Qi, Yang Li, Yang Dong

The first solution, Vanilla, performs self-training: it augments the supervised training data with pseudo-labeled instances predicted for the current task, and it replaces full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks.

Continual Learning · Text-To-SQL
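The Vanilla recipe (self-training plus episodic memory replay) can be sketched in a few lines. Everything below is a schematic stand-in: `predict` and `train` are placeholders for a real Text-to-SQL model, and the confidence threshold and memory sizes are invented for the example.

```python
import random

def vanilla_step(model, labeled, unlabeled, memory, replay_size=2, threshold=0.9):
    # Self-training: pseudo-label unlabeled examples the model is confident on.
    pseudo = [(x, y) for x in unlabeled
              for y, conf in [model["predict"](x)] if conf >= threshold]
    # Episodic memory replay: mix in a small sample of past-task examples
    # instead of retraining on the full volume of previous data.
    replay = random.sample(memory, min(replay_size, len(memory)))
    batch = labeled + pseudo + replay
    model["train"](batch)
    # Store a few current-task examples for replay in future tasks.
    memory.extend(labeled[:replay_size])
    return batch

# Toy "model": predicts the uppercased input with a fixed confidence.
seen = []
model = {"predict": lambda x: (x.upper(), 0.95), "train": seen.extend}
memory = [("old_q", "OLD_SQL")]
batch = vanilla_step(model, [("q1", "SQL1")], ["q2"], memory)
```

The training batch thus mixes gold labels, confident pseudo-labels, and replayed memory, which is the efficiency/forgetting trade-off the snippet above describes.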

Towards Relation Extraction From Speech

1 code implementation · 17 Oct 2022 · Tongtong Wu, Guitao Wang, Jinming Zhao, Zhaoran Liu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari

We explore speech relation extraction via two approaches: a pipeline approach that performs text-based extraction on the output of a pretrained ASR module, and an end-to-end approach based on a newly proposed encoder-decoder model, which we call SpeechRE.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +2
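The pipeline variant (ASR transcription followed by text-based relation extraction) can be mocked up as below. Both stages are placeholders — a real system would plug in a pretrained ASR model and a trained extractor; the single regex pattern is purely illustrative.

```python
import re

def mock_asr(audio):
    # Placeholder ASR stage: a real system would run a pretrained
    # speech recognition model on the waveform here.
    return audio["transcript"]

def text_relation_extraction(text):
    # Placeholder text-based extractor: one hard-coded pattern,
    # standing in for a trained relation extraction model.
    m = re.search(r"(\w[\w ]*\w) was founded by (\w[\w ]*\w)", text)
    return [(m.group(1), "founded_by", m.group(2))] if m else []

def pipeline_speech_re(audio):
    # Pipeline approach: ASR errors propagate into extraction, which is
    # the weakness an end-to-end model like SpeechRE aims to avoid.
    return text_relation_extraction(mock_asr(audio))

audio = {"transcript": "Apple was founded by Steve Jobs"}
print(pipeline_speech_re(audio))
```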

Neural Topic Modeling with Deep Mutual Information Estimation

no code implementations · 12 Mar 2022 · Kang Xu, Xiaoqiu Lu, Yuan-Fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou

NTM-DMIE is a neural network method for topic learning which maximizes the mutual information between the input documents and their latent topic representation.

Mutual Information Estimation · Text Clustering · +1

Conditional Generation Net for Medication Recommendation

1 code implementation · 14 Feb 2022 · Rui Wu, Zhaopeng Qiu, Jiacheng Jiang, Guilin Qi, Xian Wu

Medication recommendation aims to provide a proper set of medicines according to a patient's diagnoses, which is a critical task in clinics.

Multi-Label Classification

Active Learning for Event Extraction with Memory-based Loss Prediction Model

no code implementations · 26 Nov 2021 · Shirong Shen, Zhen Li, Guilin Qi

During the selection process, we use an internal-external sample loss ranking method to evaluate the sample importance by using local information.

Active Learning · Event Extraction

Pretrained Language Model in Continual Learning: A Comparative Study

no code implementations · ICLR 2022 · Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, Gholamreza Haffari

In this paper, we thoroughly compare continual learning performance over combinations of 5 PLMs and 4 families of CL methods on 3 benchmarks in 2 typical incremental settings.

Continual Learning · Language Modelling

Leveraging Table Content for Zero-shot Text-to-SQL with Meta-Learning

1 code implementation · 12 Sep 2021 · Yongrui Chen, Xinnan Guo, Chaojie Wang, Jian Qiu, Guilin Qi, Meng Wang, Huiying Li

Our approach remains competitive even compared with larger pre-trained models and table-specific pre-trained models.

Meta-Learning · Text-To-SQL

Formal Query Building with Query Structure Prediction for Complex Question Answering over Knowledge Base

1 code implementation · 8 Sep 2021 · Yongrui Chen, Huiying Li, Yuncheng Hua, Guilin Qi

However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries.

Graph Generation · Question Answering

Natural Language Understanding for Argumentative Dialogue Systems in the Opinion Building Domain

no code implementations · 3 Mar 2021 · Waheed Ahmed Abro, Annalena Aicher, Niklas Rach, Stefan Ultes, Wolfgang Minker, Guilin Qi

The intent classifier stacks a BiLSTM with an attention mechanism on top of a pre-trained BERT model and fine-tunes it to recognize user intents, whereas the argument similarity model employs BERT+BiLSTM to identify the system arguments the user refers to in his or her natural language utterances.

Natural Language Understanding · STS

Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning

1 code implementation · EMNLP 2020 · Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Tongtong Wu

Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.

Knowledge Base Question Answering · Meta Reinforcement Learning · +3

Less is More: Data-Efficient Complex Question Answering over Knowledge Bases

1 code implementation · 29 Oct 2020 · Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, Daiqing Qi

Our framework consists of a neural generator and a symbolic executor that, respectively, transforms a natural-language question into a sequence of primitive actions, and executes them over the knowledge base to compute the answer.

Multi-hop Question Answering · Question Answering
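The generator/executor split described in this snippet can be illustrated with a tiny symbolic executor over a toy knowledge base of (subject, relation, object) triples. The action set (`start`, `hop`) and the example KB are invented for this sketch, not the paper's actual primitives.

```python
def execute(actions, kb):
    # Symbolic executor: runs a sequence of primitive actions over a
    # knowledge base of (subject, relation, object) triples.
    result = set()
    for action, *args in actions:
        if action == "start":        # start(entity): seed the working set
            result = {args[0]}
        elif action == "hop":        # hop(relation): follow relation edges
            rel = args[0]
            result = {o for s, r, o in kb if r == rel and s in result}
    return result

kb = [("Paris", "capital_of", "France"),
      ("France", "continent", "Europe")]

# A neural generator would map a question like
# "Which continent is the country whose capital is Paris on?"
# to a primitive-action sequence such as:
actions = [("start", "Paris"), ("hop", "capital_of"), ("hop", "continent")]
print(execute(actions, kb))
```

The neural generator only has to emit the short action sequence; the symbolic executor deterministically computes the answer over the KB, which is what makes the framework data-efficient.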

Knowledge-aware Method for Confusing Charge Prediction

no code implementations · 7 Oct 2020 · Xiya Cheng, Sheng Bi, Guilin Qi, Yongzhen Wang

In this paper, we propose a knowledge-attentive neural network model that introduces legal schematic knowledge about charges and exploits its hierarchical representation as discriminative features to differentiate confusing charges.

Knowledge-enriched, Type-constrained and Grammar-guided Question Generation over Knowledge Bases

no code implementations · COLING 2020 · Sheng Bi, Xiya Cheng, Yuan-Fang Li, Yongzhen Wang, Guilin Qi

Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of (connected) triples.

Question Generation · Question-Generation

Revealing Secrets in SPARQL Session Level

1 code implementation · 13 Sep 2020 · Xinyue Zhang, Meng Wang, Muhammad Saleem, Axel-Cyrille Ngonga Ngomo, Guilin Qi, Haofen Wang

Based on Semantic Web technologies, knowledge graphs help users to discover information of interest by using live SPARQL services.

Knowledge Graphs

Efficiently Embedding Dynamic Knowledge Graphs

no code implementations · 15 Oct 2019 · Tianxing Wu, Arijit Khan, Melvin Yong, Guilin Qi, Meng Wang

Knowledge graph (KG) embedding encodes the entities and relations from a KG into low-dimensional vector spaces to support various applications such as KG completion, question answering, and recommender systems.

Knowledge Graph Embedding · Knowledge Graphs · +4
