1 code implementation • 25 May 2022 • Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, Dominique Beaini
We propose a recipe for building a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks.
Ranked #1 on Graph Regression on ZINC-500k
no code implementations • 16 May 2022 • Zhaocheng Zhu, Mikhail Galkin, Zuobai Zhang, Jian Tang
Answering complex first-order logic (FOL) queries on knowledge graphs is a fundamental task for multi-hop reasoning.
2 code implementations • 14 Mar 2022 • Charles Tapley Hoyt, Max Berrendorf, Mikhail Galkin, Volker Tresp, Benjamin M. Gyori
The link prediction task on knowledge graphs without explicit negative triples in the training data motivates the use of rank-based metrics.
1 code implementation • 3 Mar 2022 • Mikhail Galkin, Max Berrendorf, Charles Tapley Hoyt
An emerging trend in representation learning over knowledge graphs (KGs) moves beyond transductive link prediction over a fixed set of known entities in favor of inductive tasks that involve training on one graph and performing inference over a new graph with unseen entities.
Ranked #1 on Inductive Link Prediction on ILPC22-Small
no code implementations • 21 Aug 2021 • Boris Shirokikh, Alexandra Dalechina, Alexey Shevtsov, Egor Krivov, Valery Kostjuchenko, Amayak Durgaryan, Mikhail Galkin, Andrey Golanov, Mikhail Belyaev
We show that the segmentation model reduces the ratio of detection disagreements from 0.162 to 0.085 (p < 0.05).
1 code implementation • 10 Jul 2021 • Mehdi Ali, Max Berrendorf, Mikhail Galkin, Veronika Thost, Tengfei Ma, Volker Tresp, Jens Lehmann
In this work, we classify different inductive settings and study the benefits of employing hyper-relational KGs on a wide range of semi- and fully inductive link prediction tasks powered by recent advancements in graph neural networks.
3 code implementations • ICLR 2022 • Mikhail Galkin, Etienne Denis, Jiapeng Wu, William L. Hamilton
To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary.
Ranked #8 on Link Property Prediction on ogbl-wikikg2
1 code implementation • ICLR 2022 • Dimitrios Alivanistos, Max Berrendorf, Michael Cochez, Mikhail Galkin
In addition, we propose a method to answer such queries and demonstrate in our experiments that qualifiers improve query answering on a diverse set of query patterns.
1 code implementation • EMNLP 2020 • Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, Jens Lehmann
We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws, and we therefore develop a new Wikidata-based dataset, WD50K.
Ranked #1 on Link Prediction on WD50K
1 code implementation • 3 Jul 2020 • Maria Khvalchik, Mikhail Galkin
Pre-training large-scale language models (LMs) requires huge amounts of text corpora.
2 code implementations • 23 Jun 2020 • Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Mikhail Galkin, Sahand Sharifzadeh, Asja Fischer, Volker Tresp, Jens Lehmann
The heterogeneity in recently published knowledge graph embedding models' implementations, training, and evaluation has made fair and thorough comparisons difficult.
no code implementations • 6 Sep 2019 • Boris Shirokikh, Alexandra Dalechina, Alexey Shevtsov, Egor Krivov, Valery Kostjuchenko, Amayak Durgaryan, Mikhail Galkin, Ivan Osinov, Andrey Golanov, Mikhail Belyaev
Stereotactic radiosurgery is a minimally-invasive treatment option for a large number of patients with intracranial tumors.