Search Results for author: Thomas Gschwind

Found 3 papers, 0 papers with code

Attention-based Interpretability with Concept Transformers

no code implementations • ICLR 2022 • Mattia Rigotti, Christoph Miksovic, Ioana Giurgiu, Thomas Gschwind, Paolo Scotton

In particular, we design the Concept Transformer, a deep learning module that, when embedded in a model, exposes explanations of that model's output in terms of attention over user-defined high-level concepts.
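No code accompanies this entry, but the mechanism described above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch module illustrating the general idea of attending over a learned bank of concept embeddings, with the attention weights doubling as a per-concept explanation of the prediction. The class name `ConceptCrossAttention`, the layer shapes, and the single-query formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptCrossAttention(nn.Module):
    """Hypothetical sketch: attention from input features to a bank of
    learned concept embeddings; the attention weights serve as an
    explanation of the class prediction in terms of those concepts."""

    def __init__(self, feat_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # One learnable embedding per user-defined high-level concept.
        self.concepts = nn.Parameter(torch.randn(n_concepts, feat_dim))
        self.query = nn.Linear(feat_dim, feat_dim)
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        # x: (batch, feat_dim) features from some backbone.
        q = self.query(x)                                   # (batch, feat_dim)
        scores = q @ self.concepts.t() / x.size(-1) ** 0.5  # (batch, n_concepts)
        attn = F.softmax(scores, dim=-1)    # per-concept explanation weights
        logits = self.classifier(attn)      # prediction driven by concepts
        return logits, attn
```

Returning `attn` alongside the logits is what makes the module interpretable: each prediction can be traced back to the concepts it attended to.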

Knowledge Graph Embedding using Graph Convolutional Networks with Relation-Aware Attention

no code implementations • 14 Feb 2021 • Nasrullah Sheikh, Xiao Qin, Berthold Reinwald, Christoph Miksovic, Thomas Gschwind, Paolo Scotton

Knowledge graph embedding methods learn embeddings of entities and relations in a low-dimensional space, which can then be used for various downstream machine learning tasks such as link prediction and entity matching.

Graph Attention • Knowledge Graph Embedding +2
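Since no code accompanies this entry either, here is a minimal sketch of the basic setup the abstract describes: entity and relation embeddings scored for link prediction. It uses the classic TransE scoring function as a stand-in, not the paper's relation-aware GCN method, and the class and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Minimal TransE-style model: score(h, r, t) = -||h + r - t||.
    A standard baseline for link prediction over knowledge graphs;
    shown only to illustrate the general embedding setup, not the
    paper's GCN-with-relation-aware-attention approach."""

    def __init__(self, n_entities: int, n_relations: int, dim: int = 64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h: torch.Tensor, r: torch.Tensor, t: torch.Tensor):
        # Higher score = more plausible triple (head, relation, tail).
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)

# Usage: rank candidate tails for a (head, relation) query.
model = TransE(n_entities=1000, n_relations=50)
heads = torch.tensor([0]); rels = torch.tensor([3])
tails = torch.arange(1000)
scores = model.score(heads, rels, tails)  # (1000,) plausibility scores
```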
