In addition, a novel graph transition layer is applied to capture the transitions on the dynamic graph, i.e., edge formation and dissolution.
In this paper, we tackle the temporal knowledge graph completion task by proposing TempCaps, a Capsule network-based embedding model for Temporal knowledge graph completion.
Various temporal knowledge graph (KG) completion models have been proposed in the recent literature.
no code implementations • 19 Mar 2022 • Suprosanna Shit, Rajat Koner, Bastian Wittmann, Johannes Paetzold, Ivan Ezhov, Hongwei Li, Jiazhen Pan, Sahand Sharifzadeh, Georgios Kaissis, Volker Tresp, Bjoern Menze
We leverage direct set-based object prediction and incorporate the interaction among the objects to learn an object-relation representation jointly.
We align structured knowledge contained in temporal knowledge graphs with their textual descriptions extracted from news articles and propose a novel knowledge-text prediction task to inject the abundant information from descriptions into temporal knowledge embeddings.
The link prediction task on knowledge graphs without explicit negative triples in the training data motivates the usage of rank-based metrics.
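A minimal sketch of how such rank-based metrics (rank, reciprocal rank, Hits@k) are typically computed for one query; the function name and the toy scores are illustrative, not from the paper:

```python
import numpy as np

def rank_metrics(scores, true_idx, ks=(1, 3, 10)):
    """Rank of the true entity among candidate scores (higher = better),
    plus reciprocal rank and Hits@k indicators for one query."""
    # Rank = 1 + number of candidates scored strictly higher than the truth.
    rank = 1 + int(np.sum(scores > scores[true_idx]))
    return {"rank": rank,
            "rr": 1.0 / rank,
            **{f"hits@{k}": float(rank <= k) for k in ks}}

# Toy query: 5 candidate entities, the correct one is index 2.
m = rank_metrics(np.array([0.1, 0.9, 0.7, 0.3, 0.2]), true_idx=2)
```

Averaging `rr` over all test queries gives MRR; averaging `hits@k` gives Hits@k.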
Conventional static knowledge graphs model entities in relational data as nodes, connected by edges of specific relation types.
The high transferability achieved by our method shows that, in contrast to observations in previous work, adversarial examples crafted on one segmentation model can transfer easily to other segmentation models.
High-quality Web tables are rich sources of information that can be used to populate Knowledge Graphs (KG).
We define the novel problem of Data-Free Domain Generalization (DFDG), a practical setting where models trained on the source domains separately are available instead of the original datasets, and investigate how to effectively solve the domain generalization problem in that case.
Based on extensive qualitative and quantitative experiments, we discover that ViT's stronger robustness to natural corrupted patches and higher vulnerability against adversarial patches are both caused by the attention mechanism.
In our model, perception, episodic memory, and semantic memory are realized by different functional and operational modes of the oscillating interactions between an index layer and a representation layer in a bilayer tensor network (BTN).
ICD-9 coding is a relevant clinical billing task, where unstructured texts with information about a patient's diagnosis and treatments are annotated with multiple ICD-9 codes.
Machine learning models that can generalize to unseen domains are essential when applied in real-world scenarios involving strong domain shifts.
Recently, researchers have attempted to apply GANs to missing data generation and imputation for EHR data: a major challenge here is the categorical nature of the data.
Recurrent neural network-based solutions are increasingly being used in the analysis of longitudinal Electronic Health Record data.
A serious problem in image classification is that a trained model might perform well on input data originating from the same distribution as the data available for model training, but much worse on out-of-distribution (OOD) samples.
We conduct an experimental study on the challenging dataset GQA, based on both manually curated and automatically generated scene graphs.
Identifying objects in an image and their mutual relationships as a scene graph leads to a deep understanding of image content.
In this work, we classify different inductive settings and study the benefits of employing hyper-relational KGs on a wide range of semi- and fully inductive link prediction tasks powered by recent advancements in graph neural networks.
We propose an uncertainty-aware deep kernel learning model which permits the estimation of the uncertainty in the prediction by a pipeline of a Convolutional Neural Network and a sparse Gaussian Process.
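To illustrate the uncertainty-estimation idea, here is a small numpy sketch of an exact Gaussian Process posterior computed on top of fixed features; in a deep kernel model the feature rows would come from a CNN (and the GP would be sparse), so the raw-input features and all values below are simplifying assumptions:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF kernel matrix between feature rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_predict(F_train, y, F_test, noise=0.1):
    """Exact GP posterior mean and variance on (frozen) features.
    In a deep kernel pipeline, F_train/F_test would be CNN features."""
    K = rbf(F_train, F_train) + noise * np.eye(len(F_train))
    Ks = rbf(F_test, F_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf(F_test, F_test).diagonal() - (v ** 2).sum(0)
    return mean, var

# A test point far from the training data gets a larger predictive variance.
F = np.linspace(-2, 2, 10)[:, None]
mean, var = gp_predict(F, np.sin(F[:, 0]), np.array([[0.0], [5.0]]))
```

The posterior variance is what makes the prediction uncertainty-aware: it grows where the (learned) feature space is far from the training data.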
The examination reveals five major new or different components in CapsNet: a transformation process, a dynamic routing layer, a squashing function, a margin loss in place of the cross-entropy loss, and an additional class-conditional reconstruction loss for regularization.
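Of these components, the squashing non-linearity is the easiest to state compactly; a minimal numpy version (the `eps` guard is an implementation convenience, not part of the formula):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """CapsNet squashing non-linearity:
    squash(s) = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Short vectors are shrunk toward zero, long vectors toward unit
    length, while the vector's orientation is preserved."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# A capsule output of norm 5 is squashed to norm 25/26, same direction.
v = squash(np.array([[3.0, 4.0]]))
```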
Biomedical knowledge graphs permit an integrative computational approach to reasoning about biological systems.
Malicious software (malware) poses an increasing threat to the security of communication systems as the number of interconnected mobile devices increases exponentially.
As alternatives to CNNs, the recently proposed Capsule Networks (CapsNets) are shown to be more robust to white-box attacks than CNNs under popular attack protocols.
We show that by fine-tuning the classification pipeline with the extracted knowledge from texts, we can achieve ~8x more accurate results in scene graph classification, ~3x in object classification, and ~1.5x in predicate classification, compared to the supervised baselines with only 1% of the annotated images.
However, most of the existing models for temporal knowledge graph forecasting use Recurrent Neural Networks (RNNs) with discrete depth to capture temporal information, even though time is a continuous variable.
We evaluate our model on four benchmark temporal knowledge graphs for the link forecasting task.
A major challenge in scene graph classification is that the appearance of objects and relations can be significantly different from one image to another.
Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs.
In our work, we aim to design an emotional line for each character that considers multiple emotions common in psychological theories, with the goal of generating stories with richer emotional changes in the characters.
Motivated by the conclusion, we propose an implementation of introspective learning by distilling knowledge from online self-explanations.
Although continual learning and anomaly detection have separately been well-studied in previous works, their intersection remains rather unexplored.
Recently, knowledge graph embeddings (KGEs) received significant attention, and several software libraries have been developed for training and evaluating KGEs.
The graph structure of biomedical data differs from those in typical knowledge graph benchmark tasks.
Our experiments on eight datasets from the image and time-series domains show that our method leads to better results than classical OCC and few-shot classification approaches, and demonstrate the ability to learn unseen tasks from only a few normal class samples.
We propose a novel method that approaches the task by performing context-driven, sequential reasoning based on the objects and their semantic and spatial relationships present in the scene.
Randomized controlled trials typically analyze the effectiveness of treatments with the goal of making treatment recommendations for patient subgroups.
The heterogeneity in recently published knowledge graph embedding models' implementations, training, and evaluation has made fair and thorough comparisons difficult.
The Hawkes process has become a standard method for modeling self-exciting event sequences with different event types.
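The defining quantity of a (univariate) Hawkes process is its conditional intensity, which is easy to sketch; the parameter values below are illustrative:

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each past event excites the process; the excitation decays
    exponentially, which produces the self-exciting clustering."""
    past = np.asarray([ti for ti in history if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Intensity at t=3 given events at t=1 and t=2.5: above the base rate mu.
lam = hawkes_intensity(3.0, history=[1.0, 2.5])
```

Multi-type variants used for event sequences with different event types give each type pair its own excitation parameters.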
After deriving causal effect estimators, we further study intervention policy improvement on the graph under capacity constraint.
In this work, we take a closer look at the evaluation of two families of methods for enriching information from knowledge graphs: Link Prediction and Entity Alignment.
In reinforcement learning, an agent learns to reach a set of goals by means of an external reward signal.
In particular, we propose that explicit perception and declarative memories require a semantic decoder, which, in a simple realization, is based on four layers: first, a sensory memory layer, as a buffer for sensory input; second, an index layer representing concepts; third, a memoryless representation layer for the broadcasting of information (the "blackboard", or the "canvas", of the brain); and fourth, a working memory layer as a processing center and data buffer.
The underlying idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments -- paths in the knowledge graph -- with the goal to justify the fact being true (thesis) or the fact being false (antithesis), respectively.
We simplify the problem by making the plausible assumption that the tensor representation of a knowledge graph can be approximated by its low-rank tensor singular value decomposition, which is verified by our experiments.
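The paper's tensor SVD operates on the tensor directly; the low-rank-approximation idea behind it can still be sketched on a matricized toy KG tensor with a truncated matrix SVD (the tensor size, sparsity, and retained rank below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy KG tensor: X[s, r, o] = 1 if (subject s, relation r, object o) holds.
X = (rng.random((6, 3, 6)) < 0.3).astype(float)

# Surrogate for the low-rank idea: truncated SVD of a mode-1 unfolding.
M = X.reshape(6, -1)                 # unfold along the subject mode
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 4                                # retained rank (illustrative)
M_k = U[:, :k] * s[:k] @ Vt[:k]      # best rank-k approximation of M
err = np.linalg.norm(M - M_k) / np.linalg.norm(M)
```

By the Eckart-Young theorem, the squared approximation error equals the sum of the squared discarded singular values, which is what makes the low-rank assumption checkable in practice.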
The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments -- paths in the knowledge graph -- with the goal to promote the fact being true (thesis) or the fact being false (antithesis), respectively.
In this work, we focus on the problem of entity alignment in Knowledge Graphs (KG) and we report on our experiences when applying a Graph Convolutional Network (GCN) based model for this task.
We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs.
We cast the problem of image denoising as a domain translation problem between high and low noise domains.
Our results show that the mutual information between the context states and the states of interest can be an effective ingredient for overcoming challenges in robotic manipulation tasks with sparse rewards.
This objective encourages the agent to maximize the expected return, as well as to achieve more diverse goals.
We argue that depth maps can additionally provide valuable information on object relations, e.g., helping to detect not only spatial relations, such as standing behind, but also non-spatial relations, such as holding.
To this end, we present the first approach to unsupervised text generation from KGs and show simultaneously how it can be used for unsupervised semantic parsing.
In Reinforcement Learning (RL), an agent explores the environment and collects trajectories into the memory buffer for later learning.
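The memory buffer mentioned here is usually a fixed-capacity replay buffer; a minimal stdlib sketch (capacity and the transition format are illustrative):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience memory: when full, the oldest
    transitions are evicted FIFO, and learning updates sample
    uniformly from what remains."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buf.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)

# Capacity 3: after 5 inserts only the last 3 transitions remain.
rb = ReplayBuffer(capacity=3)
for i in range(5):
    rb.add(i, 0, 0.0, i + 1, False)
batch = rb.sample(2)
```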
In this work, we propose the first quantum Ansätze for statistical relational learning on knowledge graphs using parametric quantum circuits.
This paper is concerned with the training of recurrent neural networks as goal-oriented dialog agents using reinforcement learning.
In this paper we consider scene descriptions which are represented as a set of triples (subject, predicate, object), where each triple consists of a pair of visual objects, which appear in the image, and the relationship between them (e.g., man-riding-elephant, man-wearing-hat).
Many applications require an understanding of an image that goes beyond the simple detection and classification of its objects.
Learning goal-oriented dialogues by means of deep reinforcement learning has recently become a popular research topic.
In the adversarial training process of CorGAN, the Generator is supposed to generate outlier samples for the negative class, and the Discriminator, as a one-class classifier, is trained to distinguish data from the training dataset (i.e., the positive class) from data generated by the Generator (i.e., the negative class).
With the rising number of interconnected devices and sensors, modeling distributed sensor networks is of increasing interest.
The decomposition of sparse tensors has successfully been used in relational learning, e.g., the modeling of large knowledge graphs.
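As one common instance of this idea (a CP/DistMult-style trilinear factorization, named here as an illustration rather than the specific model of the paper), a triple's plausibility is the corresponding entry of a low-rank tensor; sizes and embeddings below are toy values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ent, n_rel, dim = 100, 7, 16
E = rng.normal(size=(n_ent, dim))   # entity embeddings
R = rng.normal(size=(n_rel, dim))   # relation embeddings

def score(s, r, o):
    """Trilinear score of triple (s, r, o): the (s, r, o) entry of the
    low-rank tensor with entries sum_d E[s, d] * R[r, d] * E[o, d]."""
    return float(np.sum(E[s] * R[r] * E[o]))

def rank_objects(s, r):
    """Score all candidate objects for a (subject, relation) query at
    once and return them sorted from most to least plausible."""
    return np.argsort(-(E[s] * R[r]) @ E.T)
```

Sparsity of the observed tensor is what makes this tractable at knowledge-graph scale: only observed (and sampled negative) entries are ever touched during training.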
We show how episodic memory and semantic memory can be realized and discuss how new memory traces can be generated from sensory input: Existing memories are the basis for perception and new memories are generated via perception.
Recurrent Neural Networks and their variants have shown promising performance in sequence modeling tasks such as Natural Language Processing.
We also address the problem of correlation in target features: often a physician is required to make multiple (sub-)decisions in a block, and these decisions are mutually dependent.
In this work we present an approach based on RNNs, specifically designed for the clinical domain, that combines static and dynamic information in order to predict future events.
By predicting future events, we also predict likely changes in the knowledge graph and thus obtain a model for the evolution of the knowledge graph as well.
We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models.
Latent variable models have increasingly gained attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning.
In this paper, we provide a review of how such statistical models can be "trained" on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph).
no code implementations • 17 Nov 2013 • Volker Tresp, Sonja Zillner, Maria J. Costa, Yi Huang, Alexander Cavallaro, Peter A. Fasching, Andre Reis, Martin Sedlmayr, Thomas Ganslandt, Klemens Budde, Carl Hinrichs, Danilo Schmidt, Philipp Daumke, Daniel Sonntag, Thomas Wittenberg, Patricia G. Oppelt, Denis Krompass
We argue that a science of a Clinical Data Intelligence is sensible in the context of a Big Data analysis, i.e., with data from many patients and with complete patient information.