In this work, we classify different inductive settings and study the benefits of employing hyper-relational KGs on a wide range of semi- and fully inductive link prediction tasks powered by recent advancements in graph neural networks.
Sampling is an established technique to scale graph neural networks to large graphs.
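One common instance of this technique is hop-wise uniform neighbor sampling (popularized by GraphSAGE-style mini-batch training), where each node keeps at most a fixed fan-out of neighbors per hop so the computation subgraph stays small. The sketch below is a minimal illustration only; the function name, adjacency format, and toy graph are hypothetical, not from the cited work.

```python
import random

def sample_neighbors(adj, seeds, fanout, num_hops):
    """Uniformly sample up to `fanout` neighbors per node at each hop,
    returning the node set of the sampled computation subgraph."""
    frontier, visited = set(seeds), set(seeds)
    for _ in range(num_hops):
        nxt = set()
        for v in frontier:
            nbrs = adj.get(v, [])
            nxt.update(random.sample(nbrs, min(fanout, len(nbrs))))
        frontier = nxt - visited   # only expand newly discovered nodes
        visited |= nxt
    return visited

# toy adjacency list: 5 nodes
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
sub = sample_neighbors(adj, seeds=[0], fanout=2, num_hops=2)
print(sorted(sub))
```

Because only a bounded number of neighbors is expanded per hop, the subgraph size grows with the fan-out rather than with the full node degrees, which is what makes mini-batch GNN training feasible on large graphs.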
Unlike in vision tasks, data augmentation methods for contrastive learning have not been sufficiently investigated in language tasks.
Graph neural networks have been shown to be powerful in graph representation learning; however, they rely heavily on the completeness of graph information.
The proposed MGMN model consists of a node-graph matching network for effectively learning cross-level interactions between each node of one graph and the entire other graph, and a siamese graph neural network for learning global-level interactions between the two graphs.
We propose a novel GNN defense algorithm against structural attacks that maliciously modify graph topology.
Two case studies and interviews with domain experts demonstrate the effectiveness of GNNLens in facilitating the understanding of GNN models and their errors.
In addition, we exploit the unknown latent interactions between nodes of the same type (workers or tasks) by adding a homogeneous attention layer to the graph neural network.
To this end, we first represent both natural language query texts and programming language code snippets with the unified graph-structured data, and then use the proposed graph matching and searching model to retrieve the best matching code snippet.
Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries.
In particular, the proposed MGMN consists of a node-graph matching network for effectively learning cross-level interactions between each node of one graph and the entire other graph, and a siamese graph neural network for learning global-level interactions between the two input graphs.
There is a growing interest in applying deep learning (DL) to healthcare, driven by the availability of data with multiple feature channels in rich-data environments (e.g., intensive care units).
While this study by no means recommends specific drugs, it demonstrates a powerful deep learning methodology for prioritizing existing drugs for further investigation, which holds the potential to accelerate therapeutic development for COVID-19.
Both the coarsening matrix and the transport cost matrix are parameterized, so that an optimal coarsening strategy can be learned and tailored for a given set of graphs.
Pre-trained BERT contextualized representations have achieved state-of-the-art results on multiple downstream NLP tasks by fine-tuning with task-specific data.
Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations.
G-BERT is the first to bring the language-model pre-training schema into the healthcare domain, and it achieved state-of-the-art performance on the medication recommendation task.
Electrocardiography (ECG) signals are commonly used to diagnose various cardiac abnormalities.
Benchmark data sets are an indispensable ingredient of the evaluation of graph-based machine learning methods.
By integrating conditional random fields (CRFs) into the graph convolutional network, we explicitly model a joint probability over the entire set of node labels, thus taking advantage of neighborhood label information in the node label prediction task.
Existing approaches typically resort to node embeddings and use a recurrent neural network (RNN, broadly speaking) to regulate the embeddings and learn the temporal dynamics.
We also demonstrate that when malicious training participants tend to implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned and mislabeled training data that lead to the runtime mispredictions.
Organized crime inflicts human suffering on a genocidal scale: the Mexican drug cartels have murdered 150,000 people since 2006, and upwards of 700,000 people per year are "exported" by a human trafficking industry that enslaves an estimated 40 million people.
It refers to the directional relation between text fragments such that the "hypothesis" can be inferred from the "premise".
We focus on the matrix representation of graphs and formulate penalty terms that regularize the output distribution of the decoder to encourage the satisfaction of validity constraints.
In many situations, we need to build and deploy separate models in related environments with different data qualities.
Recent progress in deep learning is revolutionizing the healthcare domain, including providing solutions for medication recommendation, especially recommending medication combinations for patients with complex health conditions.
The graph convolutional network (GCN) recently proposed by Kipf and Welling is an effective graph model for semi-supervised learning.
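The core of the Kipf and Welling model is the layer-wise propagation rule H' = σ(D̃^{-1/2} Ã D̃^{-1/2} H W), where Ã is the adjacency matrix with self-loops and D̃ its degree matrix. The following is a minimal NumPy sketch of one such layer; the function name and toy graph are illustrative assumptions, not part of the original paper's code.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D~^-1/2 A~ D~^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops (A~)
    d = A_hat.sum(axis=1)                       # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D~^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)      # linear transform + ReLU

# toy graph: 3 nodes in a path, 2 input features, 2 output features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.random.randn(3, 2)
W = np.random.randn(2, 2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

The symmetric normalization keeps the propagated features on a comparable scale across nodes of different degrees, which is what lets several such layers be stacked stably for semi-supervised node classification.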
Crosslingual word embeddings represent lexical items from different languages using the same vector space, enabling crosslingual transfer.
In this paper, we address two challenges of applying topic models to lexicon extraction from non-parallel data: 1) the difficulty of modeling word relationships, and 2) noisy seed dictionaries.
Crosslingual word embeddings represent lexical items from different languages in the same vector space, enabling transfer of NLP tools.