This paper studies event causality identification, which aims to predict the causal relation between a pair of events in a sentence.
First, we retrieve the relevant embeddings from the knowledge graph by utilizing group relations in the metadata, and then integrate them with the other modalities.
To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA.
A natural way to reason with an inconsistent ontology is to utilize the maximal consistent subsets of the ontology.
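As a toy illustration of this idea, the sketch below enumerates maximal consistent subsets of a small axiom set. It deliberately simplifies: "consistency" here is just the absence of a literal and its negation, a stand-in for the description-logic reasoning an actual ontology would need, and the axiom names are invented for the example.

```python
from itertools import combinations

def consistent(literals):
    # Toy check: inconsistent iff some atom occurs both plain and negated ("~").
    return not any(lit.startswith("~") and lit[1:] in literals for lit in literals)

def maximal_consistent_subsets(ontology):
    """Enumerate subsets from largest to smallest, keeping those that are
    consistent and not contained in an already-found larger subset."""
    found = []
    for size in range(len(ontology), 0, -1):
        for subset in combinations(sorted(ontology), size):
            s = frozenset(subset)
            if consistent(s) and not any(s < m for m in found):
                found.append(s)
    return found

# The full set is inconsistent, so reasoning falls back on its two MCSs.
axioms = {"Bird(tweety)", "Flies(tweety)", "~Flies(tweety)"}
for mcs in maximal_consistent_subsets(axioms):
    print(sorted(mcs))
```

A conclusion can then be accepted if it follows from every maximal consistent subset (a skeptical strategy) or from at least one (a credulous strategy).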
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP).
ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge.
The first solution, Vanilla, performs self-training: it augments the supervised training data with pseudo-labeled instances of the current task, and replaces full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks.
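The replay mechanism can be sketched as follows; the class and method names (`ReplayMemory`, `remember`, `replay_batch`) and the fixed per-task memory size are illustrative assumptions, not the paper's API:

```python
import random

class ReplayMemory:
    """Instead of retraining on all past data, keep a small episodic memory
    per finished task and mix it into each batch of the current task."""

    def __init__(self, per_task=2):
        self.per_task = per_task
        self.store = {}  # task id -> retained examples

    def remember(self, task, examples):
        # Retain only a fixed-size random sample of each finished task.
        self.store[task] = random.sample(examples, min(self.per_task, len(examples)))

    def replay_batch(self, current_batch):
        # Augment the current task's batch with memories of earlier tasks.
        replayed = [ex for exs in self.store.values() for ex in exs]
        return current_batch + replayed
```

Training then iterates over `replay_batch(...)` outputs, so earlier tasks keep contributing gradients at a fraction of the full retraining cost.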
We explore speech relation extraction via two approaches: a pipeline approach that performs text-based extraction with a pretrained ASR module, and an end-to-end approach via a newly proposed encoder-decoder model, which we call SpeechRE.
NTM-DMIE is a neural network method for topic learning that maximizes the mutual information between input documents and their latent topic representations.
Medication recommendation aims to provide a proper set of medicines according to a patient's diagnoses, which is a critical task in the clinic.
During the selection process, we use an internal-external sample-loss ranking method that evaluates sample importance using local information.
The high-level decoding generates an AQG that serves as a constraint to prune the search space and reduce locally ambiguous query graphs.
The ability to generate natural-language questions with controlled complexity levels is highly desirable as it further expands the applicability of question generation.
In this paper, we thoroughly compare continual learning performance across combinations of 5 PLMs and 4 families of CL methods on 3 benchmarks in 2 typical incremental settings.
Our approach remains competitive even compared to larger pre-trained models and tabular-specific pre-trained models.
However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries.
Event detection (ED) aims at detecting event trigger words in sentences and classifying them into specific event types.
The intent classifier model stacks a BiLSTM with an attention mechanism on top of the pre-trained BERT model and fine-tunes it to recognize user intent, whereas the argument similarity model employs BERT+BiLSTM to identify the system arguments a user refers to in his or her natural-language utterances.
We propose a novel curriculum-meta learning method to tackle the above two challenges in continual relation extraction.
However, this comes at the cost of manually labeling similar questions to learn a retrieval model, which is tedious and expensive.
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
Our framework consists of a neural generator and a symbolic executor that, respectively, transform a natural-language question into a sequence of primitive actions and execute them over the knowledge base to compute the answer.
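A toy illustration of the executor half of such a framework: the two primitive actions (`SelectSubjects`, `SelectObjects`), the four-triple KB, and the intersection semantics below are all invented for the example, not the paper's actual action inventory.

```python
# Toy triple store: (subject, relation, object).
KB = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Paris", "located_in", "Europe"),
    ("Berlin", "located_in", "Europe"),
}

def select_objects(subject, relation):
    # Primitive action: all objects o with (subject, relation, o) in the KB.
    return {o for s, r, o in KB if s == subject and r == relation}

def select_subjects(relation, obj):
    # Primitive action: all subjects s with (s, relation, obj) in the KB.
    return {s for s, r, o in KB if r == relation and o == obj}

def execute(actions):
    # Run a generator-produced action sequence; each later action's result
    # is intersected with the running answer set.
    result = None
    for op, *args in actions:
        step = {"SelectObjects": select_objects,
                "SelectSubjects": select_subjects}[op](*args)
        result = step if result is None else result & step
    return result

# "Which European city is the capital of France?"
program = [("SelectSubjects", "capital_of", "France"),
           ("SelectSubjects", "located_in", "Europe")]
print(execute(program))  # {'Paris'}
```

The neural generator's job is then to map the question text to such an action sequence, which the executor evaluates deterministically.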
In this paper, we propose a knowledge-attentive neural network model that introduces legal schematic knowledge about charges and exploits its hierarchical representation as discriminative features to differentiate confusing charges.
Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of (connected) triples.
Based on Semantic Web technologies, knowledge graphs help users to discover information of interest by using live SPARQL services.
9 Mar 2020 • Xianpei Han, Zhichun Wang, Jiangtao Zhang, Qinghua Wen, Wenqi Li, Buzhou Tang, Qi Wang, Zhifan Feng, Yang Zhang, Yajuan Lu, Haitao Wang, Wenliang Chen, Hao Shao, Yubo Chen, Kang Liu, Jun Zhao, Taifeng Wang, Kezun Zhang, Meng Wang, Yinlin Jiang, Guilin Qi, Lei Zou, Sen Hu, Minhao Zhang, Yinnian Lin
A knowledge graph models world knowledge as concepts, entities, and the relationships between them, and has been widely used in many real-world tasks.
Knowledge graph (KG) embedding encodes the entities and relations from a KG into low-dimensional vector spaces to support various applications such as KG completion, question answering, and recommender systems.
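A minimal sketch of this idea using a TransE-style score (chosen here for illustration; the abstract does not commit to a particular model): a triple (h, r, t) is deemed plausible when the head vector plus the relation vector lands near the tail vector. The toy 3-dimensional embeddings are invented for the example.

```python
import math

emb = {  # toy hand-set 3-dimensional embeddings
    "paris":      [0.9, 0.1, 0.0],
    "france":     [1.0, 1.0, 0.0],
    "capital_of": [0.1, 0.9, 0.0],
    "berlin":     [0.0, 0.2, 0.8],
}

def score(h, r, t):
    # Negative L2 distance ||h + r - t||: higher means more plausible.
    return -math.sqrt(sum((emb[h][i] + emb[r][i] - emb[t][i]) ** 2
                          for i in range(3)))

# KG completion amounts to ranking candidate tails (or heads) by this score.
print(score("paris", "capital_of", "france") >
      score("berlin", "capital_of", "france"))  # True
```

Trained versions learn these vectors from observed triples, so unseen but plausible triples score high and can be used for completion, question answering, and recommendation.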