1 code implementation • 5 Feb 2025 • Daniel Levy, Siba Smarak Panigrahi, Sékou-Oumar Kaba, Qiang Zhu, Kin Long Kelvin Lee, Mikhail Galkin, Santiago Miret, Siamak Ravanbakhsh
Generating novel crystalline materials has the potential to lead to advances in fields such as electronics, energy storage, and catalysis.
1 code implementation • 10 Oct 2024 • Kin Long Kelvin Lee, Mikhail Galkin, Santiago Miret
In this work, we report on a set of experiments using a simple equivariant graph convolution model on the QM9 dataset, focusing on correlating quantitative performance with the resulting molecular graph embeddings.
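A hedged sketch of what such a correlation analysis could look like (the random placeholder data and the linear-probe setup below are illustrative assumptions, not the paper's actual pipeline): pool a trained GNN's per-molecule embeddings and check how much of a QM9-style target a frozen linear probe can recover.

# Illustrative linear probe over frozen molecular graph embeddings
# (placeholder data; not the paper's actual pipeline).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for mean-pooled molecular embeddings from a trained equivariant
# GNN (n_molecules x embedding_dim) and one QM9 regression target.
embeddings = rng.normal(size=(1000, 128))
target = rng.normal(size=1000)

# If a simple Ridge regressor recovers the target from frozen embeddings,
# the embedding space already encodes that property.
probe = Ridge(alpha=1.0)
r2 = cross_val_score(probe, embeddings, target, cv=5, scoring="r2")
print(f"linear-probe R^2: {r2.mean():.3f} +/- {r2.std():.3f}")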
1 code implementation • 21 Sep 2024 • Krzysztof Olejniczak, Xingyue Huang, İsmail İlkan Ceylan, Mikhail Galkin
Traditional query answering over knowledge graphs -- or broadly over relational data -- is one of the most fundamental problems in data management.
3 code implementations • 14 Jun 2024 • Julia Gastinger, Shenyang Huang, Mikhail Galkin, Erfan Loghmani, Ali Parviz, Farimah Poursafaei, Jacob Danovitch, Emanuele Rossi, Ioannis Koutis, Heiner Stuckenschmidt, Reihaneh Rabbany, Guillaume Rabusseau
To address these challenges, we introduce Temporal Graph Benchmark 2.0 (TGB 2.0), a novel benchmarking framework tailored for evaluating methods for predicting future links on Temporal Knowledge Graphs and Temporal Heterogeneous Graphs, with a focus on large-scale datasets, extending the Temporal Graph Benchmark.
1 code implementation • 30 May 2024 • Jianan Zhao, Zhaocheng Zhu, Mikhail Galkin, Hesham Mostafa, Michael Bronstein, Jian Tang
Many existing methods following the inductive setup can generalize to test graphs with new structures, but they assume that the feature and label spaces remain the same as those seen during training.
1 code implementation • 9 May 2024 • Uday Mallappa, Hesham Mostafa, Mikhail Galkin, Mariano Phielipp, Somdeb Majumdar
As novel machine learning (ML) approaches emerge to tackle such problems, there is a growing need for a modern benchmark comprising a large training dataset and performance metrics that reflect real-world constraints and objectives better than existing benchmarks do.
2 code implementations • 10 Apr 2024 • Mikhail Galkin, Jincheng Zhou, Bruno Ribeiro, Jian Tang, Zhaocheng Zhu
Complex logical query answering (CLQA) in knowledge graphs (KGs) goes beyond simple KG completion and aims to answer compositional queries composed of multiple projections and logical operations.
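For readers unfamiliar with the terminology, a toy, purely symbolic illustration of "projections and logical operations" (plain Python sets over made-up triples; not the neural query answering studied in the paper):

# Toy compositional query over a small KG with set semantics.
from collections import defaultdict

triples = [
    ("france", "borders", "germany"), ("belgium", "borders", "germany"),
    ("paris", "located_in", "france"), ("brussels", "located_in", "belgium"),
    ("berlin", "located_in", "germany"),
    ("paris", "speaks", "french"), ("brussels", "speaks", "french"),
]

index = defaultdict(set)  # (relation, tail) -> set of heads
for h, r, t in triples:
    index[(r, t)].add(h)

def project(relation, targets):
    """Relation projection: all heads linked by `relation` to any target."""
    return set().union(*(index[(relation, t)] for t in targets)) if targets else set()

# Query: cities located in a country bordering Germany AND speaking French --
# two chained projections combined with a set intersection.
countries = project("borders", {"germany"})      # {france, belgium}
cities = project("located_in", countries)        # {paris, brussels}
french = project("speaks", {"french"})           # {paris, brussels}
print(cities & french)                           # {'paris', 'brussels'}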
1 code implementation • 3 Feb 2024 • Haitao Mao, Zhikai Chen, Wenzhuo Tang, Jianan Zhao, Yao Ma, Tong Zhao, Neil Shah, Mikhail Galkin, Jiliang Tang
Graph Foundation Models (GFMs) are emerging as a significant research topic in the graph domain, aiming to develop graph models trained on extensive and diverse data to enhance their applicability across various tasks and domains.
1 code implementation • 6 Oct 2023 • Mikhail Galkin, Xinyu Yuan, Hesham Mostafa, Jian Tang, Zhaocheng Zhu
The key challenge of designing foundation models on KGs is to learn such transferable representations that enable inference on any graph with arbitrary entity and relation vocabularies.
Ranked #1 on Link Prediction on CoDEx Medium
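One ingredient behind such vocabulary-free transfer, sketched under the assumption that relations can be characterized purely by how they interact through shared entities (the four interaction types below are a simplified reading, not code from the paper):

# Build a "graph of relations" from raw triples with arbitrary vocabularies.
from collections import defaultdict
from itertools import combinations

triples = [
    ("a", "authored", "p1"), ("p1", "cites", "p2"), ("b", "authored", "p2"),
]

heads, tails = defaultdict(set), defaultdict(set)
for h, r, t in triples:
    heads[r].add(h)
    tails[r].add(t)

# Two relations interact if they share an entity in a given role, e.g.
# tail-to-head: the tail of one relation appears as the head of another.
relation_edges = set()
for r1, r2 in combinations(sorted(heads.keys() | tails.keys()), 2):
    if heads[r1] & heads[r2]:
        relation_edges.add((r1, r2, "head-to-head"))
    if tails[r1] & tails[r2]:
        relation_edges.add((r1, r2, "tail-to-tail"))
    if heads[r1] & tails[r2]:
        relation_edges.add((r1, r2, "head-to-tail"))
    if tails[r1] & heads[r2]:
        relation_edges.add((r1, r2, "tail-to-head"))

print(relation_edges)
# A GNN over this relation graph can produce relation representations for any
# graph, with no fixed entity or relation vocabulary.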
1 code implementation • 12 Sep 2023 • Kin Long Kelvin Lee, Carmelo Gonzales, Marcel Nassar, Matthew Spellings, Mikhail Galkin, Santiago Miret
We propose MatSci ML, a novel benchmark for modeling MATerials SCIence using Machine Learning (MatSci ML) methods focused on solid-state materials with periodic crystal structures.
no code implementations • 12 Aug 2023 • Michael Cochez, Dimitrios Alivanistos, Erik Arakelyan, Max Berrendorf, Daniel Daza, Mikhail Galkin, Pasquale Minervini, Mathias Niepert, Hongyu Ren
We will first provide an overview of the different query types that these methods can support and of the datasets typically used for evaluation, as well as insight into their limitations.
1 code implementation • 26 Mar 2023 • Hongyu Ren, Mikhail Galkin, Michael Cochez, Zhaocheng Zhu, Jure Leskovec
Extending the idea of graph databases (graph DBs), NGDB consists of a Neural Graph Storage and a Neural Graph Engine.
1 code implementation • 8 Feb 2023 • Luis Müller, Mikhail Galkin, Christopher Morris, Ladislav Rampášek
Recently, transformer architectures for graphs emerged as an alternative to established techniques for machine learning with graphs, such as (message-passing) graph neural networks.
1 code implementation • 30 Nov 2022 • Pablo Barcelo, Mikhail Galkin, Christopher Morris, Miguel Romero Orth
Namely, we investigate the limitations in the expressive power of the well-known Relational GCN and Compositional GCN architectures and shed some light on their practical learning performance.
1 code implementation • 13 Oct 2022 • Mikhail Galkin, Zhaocheng Zhu, Hongyu Ren, Jian Tang
Exploring the efficiency--effectiveness trade-off, we find that the inductive relational structure representation method generally achieves higher performance, while the inductive node representation method can answer complex queries in the inference-only regime, without any training on queries, and scales to graphs with millions of nodes.
1 code implementation • 30 Sep 2022 • Phillip Schneider, Tim Schopf, Juraj Vladika, Mikhail Galkin, Elena Simperl, Florian Matthes
Keeping pace with developments in artificial intelligence research, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry.
2 code implementations • 16 Jun 2022 • Vijay Prakash Dwivedi, Ladislav Rampášek, Mikhail Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, Dominique Beaini
Graph Neural Networks (GNNs) that are based on the message passing (MP) paradigm generally exchange information between 1-hop neighbors to build node representations at each layer.
Ranked #3 on Link Prediction on PCQM-Contact
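A bare-bones sketch of the 1-hop message-passing step described above (plain NumPy with sum aggregation; the layer sizes, random weights, and ReLU are arbitrary illustrative choices):

import numpy as np

def message_passing_layer(node_feats, adjacency, w_self, w_neigh):
    """One MP layer: each node mixes its own state with the sum of its
    1-hop neighbors' states, followed by a ReLU."""
    neighbor_sum = adjacency @ node_feats            # aggregate 1-hop messages
    return np.maximum(0.0, node_feats @ w_self + neighbor_sum @ w_neigh)

rng = np.random.default_rng(0)
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 1],
                      [0, 1, 0, 0],
                      [0, 1, 0, 0]], dtype=float)    # a tiny 4-node graph
x = rng.normal(size=(4, 8))
h = message_passing_layer(x, adjacency,
                          rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
print(h.shape)  # (4, 8); information has traveled exactly one hop

Stacking L such layers lets information travel at most L hops, which is the long-range limitation this benchmark is designed to probe.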
2 code implementations • NeurIPS 2023 • Zhaocheng Zhu, Xinyu Yuan, Mikhail Galkin, Sophie Xhonneux, Ming Zhang, Maxime Gazeau, Jian Tang
Experiments on both transductive and inductive knowledge graph reasoning benchmarks show that A*Net achieves competitive performance with existing state-of-the-art path-based methods while visiting merely 10% of the nodes and 10% of the edges at each iteration.
Ranked #10 on Link Property Prediction on ogbl-wikikg2
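A rough schematic of the selective-propagation idea (the priority scores below are random stand-ins; A*Net learns the priority function end-to-end, which is not shown here):

# Keep only the top fraction of frontier nodes by priority at each iteration.
import numpy as np

def select_top_fraction(scores, fraction=0.1):
    k = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[-k:]          # indices of the highest scores

rng = np.random.default_rng(0)
priority = rng.random(50)                   # stand-in for learned priorities
active = select_top_fraction(priority)      # only these nodes send messages
print(f"propagating from {len(active)}/50 nodes: {sorted(active.tolist())}")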
4 code implementations • 25 May 2022 • Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, Dominique Beaini
We propose a recipe for building a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks.
Ranked #1 on Graph Property Prediction on ogbg-ppa
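A compressed, shape-level sketch of the recipe's core layer (PyTorch; the local message-passing block is reduced to a dense adjacency product, and positional encodings and the feed-forward block are omitted, so this is a simplified reading rather than the reference implementation):

import torch
import torch.nn as nn

class GPSLikeLayer(nn.Module):
    """One hybrid layer: local message passing plus global self-attention,
    combined with a residual connection and layer norm (simplified)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.local = nn.Linear(dim, dim)                      # local MPNN stand-in
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj):
        # x: (batch, nodes, dim), adj: (batch, nodes, nodes)
        local = self.local(torch.bmm(adj, x))                 # 1-hop aggregation
        global_out, _ = self.attn(x, x, x)                    # all-pairs attention
        return self.norm(x + local + global_out)

x = torch.randn(2, 10, 64)
adj = (torch.rand(2, 10, 10) > 0.7).float()
print(GPSLikeLayer(64)(x, adj).shape)  # torch.Size([2, 10, 64])

The stated linear complexity comes from replacing full self-attention with a linear-attention mechanism, which this simplified sketch does not do.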
1 code implementation • ICML 2022 • Zhaocheng Zhu, Mikhail Galkin, Zuobai Zhang, Jian Tang
Answering complex first-order logic (FOL) queries on knowledge graphs is a fundamental task for multi-hop reasoning.
Ranked #3 on Complex Query Answering on FB15k-237
2 code implementations • 14 Mar 2022 • Charles Tapley Hoyt, Max Berrendorf, Mikhail Galkin, Volker Tresp, Benjamin M. Gyori
The link prediction task on knowledge graphs without explicit negative triples in the training data motivates the usage of rank-based metrics.
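For concreteness, the standard rank-based metrics (mean rank, mean reciprocal rank, Hits@k) can be computed from the ranks a model assigns to the true entities, as in the minimal sketch below; the paper itself is concerned with how to interpret and adjust such metrics across datasets, which the sketch does not cover.

import numpy as np

def rank_metrics(ranks, ks=(1, 3, 10)):
    """Mean rank, mean reciprocal rank, and Hits@k from the ranks assigned
    to the true head/tail entity of each test triple."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MR": ranks.mean(), "MRR": (1.0 / ranks).mean()}
    for k in ks:
        metrics[f"Hits@{k}"] = (ranks <= k).mean()
    return metrics

print(rank_metrics([1, 2, 5, 40, 3]))
# {'MR': 10.2, 'MRR': ~0.41, 'Hits@1': 0.2, 'Hits@3': 0.6, 'Hits@10': 0.8}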
1 code implementation • 3 Mar 2022 • Mikhail Galkin, Max Berrendorf, Charles Tapley Hoyt
An emerging trend in representation learning over knowledge graphs (KGs) moves beyond transductive link prediction over a fixed set of known entities in favor of inductive tasks, which involve training on one graph and performing inference over a new graph with unseen entities.
Ranked #1 on Inductive Link Prediction on ILPC22-Small
no code implementations • 21 Aug 2021 • Boris Shirokikh, Alexandra Dalechina, Alexey Shevtsov, Egor Krivov, Valery Kostjuchenko, Amayak Durgaryan, Mikhail Galkin, Andrey Golanov, Mikhail Belyaev
We show that the segmentation model reduces the ratio of detection disagreements from 0.162 to 0.085 (p < 0.05).
2 code implementations • 10 Jul 2021 • Mehdi Ali, Max Berrendorf, Mikhail Galkin, Veronika Thost, Tengfei Ma, Volker Tresp, Jens Lehmann
In this work, we classify different inductive settings and study the benefits of employing hyper-relational KGs on a wide range of semi- and fully inductive link prediction tasks powered by recent advancements in graph neural networks.
4 code implementations • ICLR 2022 • Mikhail Galkin, Etienne Denis, Jiapeng Wu, William L. Hamilton
To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary.
Ranked #18 on Link Property Prediction on ogbl-wikikg2
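A toy rendering of anchor-based tokenization (BFS distances over a hand-made graph; the real method also uses relational context and other details not shown): each entity is represented by its k nearest anchors, so the learned vocabulary is fixed by the number of anchors rather than the number of entities.

# Tokenize entities as their k nearest anchor nodes on a toy graph.
from collections import deque

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("b", "e")]
graph = {}
for u, v in edges:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def bfs_distances(source):
    dist, queue = {source: 0}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

anchors = ["a", "d"]                               # small fixed anchor set
anchor_dists = {a: bfs_distances(a) for a in anchors}

def tokenize(entity, k=2):
    """Represent an entity by its k nearest anchors."""
    return sorted(anchors,
                  key=lambda a: anchor_dists[a].get(entity, float("inf")))[:k]

print({e: tokenize(e) for e in ["a", "c", "e"]})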
1 code implementation • ICLR 2022 • Dimitrios Alivanistos, Max Berrendorf, Michael Cochez, Mikhail Galkin
Besides that, we propose a method to answer such queries and demonstrate in our experiments that qualifiers improve query answering on a diverse set of query patterns.
1 code implementation • EMNLP 2020 • Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, Jens Lehmann
We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws, and thus develop a new Wikidata-based dataset, WD50K.
Ranked #2 on Link Prediction on JF17K
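To make "hyper-relational" concrete: a Wikidata-style statement attaches qualifier key-value pairs to a base (head, relation, tail) triple, which is what distinguishes these KGs from plain triple stores. A minimal sketch of such a structure (field names and values are illustrative):

from dataclasses import dataclass, field

@dataclass
class HyperRelationalFact:
    """A base triple plus qualifier (relation, value) pairs, in the spirit
    of Wikidata statements; field names are illustrative."""
    head: str
    relation: str
    tail: str
    qualifiers: list[tuple[str, str]] = field(default_factory=list)

fact = HyperRelationalFact(
    head="Albert Einstein", relation="educated_at", tail="ETH Zurich",
    qualifiers=[("academic_degree", "diploma"), ("end_time", "1900")],
)
print(fact)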
1 code implementation • 3 Jul 2020 • Maria Khvalchik, Mikhail Galkin
Pre-training large-scale language models (LMs) requires huge amounts of text corpora.
2 code implementations • 23 Jun 2020 • Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Mikhail Galkin, Sahand Sharifzadeh, Asja Fischer, Volker Tresp, Jens Lehmann
The heterogeneity in recently published knowledge graph embedding models' implementations, training, and evaluation has made fair and thorough comparisons difficult.
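As a reference point for what such embedding models compute, the classic TransE scoring function treats a relation as a translation in embedding space; a minimal sketch with random, untrained embeddings is below (illustrative only; the paper benchmarks many such interaction functions under a unified setup).

import numpy as np

def transe_score(head_emb, rel_emb, tail_emb):
    """TransE plausibility score: higher (less negative) when h + r ≈ t."""
    return -np.linalg.norm(head_emb + rel_emb - tail_emb)

rng = np.random.default_rng(0)
dim = 50
entities = {n: rng.normal(size=dim) for n in ["berlin", "germany", "paris"]}
relations = {"capital_of": rng.normal(size=dim)}

# With untrained embeddings both scores are arbitrary; training pushes the
# score of true triples above that of corrupted ones.
for tail in ["germany", "paris"]:
    s = transe_score(entities["berlin"], relations["capital_of"], entities[tail])
    print(f"score(berlin, capital_of, {tail}) = {s:.2f}")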
no code implementations • 6 Sep 2019 • Boris Shirokikh, Alexandra Dalechina, Alexey Shevtsov, Egor Krivov, Valery Kostjuchenko, Amayak Durgaryan, Mikhail Galkin, Ivan Osinov, Andrey Golanov, Mikhail Belyaev
Stereotactic radiosurgery is a minimally-invasive treatment option for a large number of patients with intracranial tumors.