Graph Property Prediction

23 papers with code • 4 benchmarks • 2 datasets

Graph property prediction is the task of predicting properties of entire graphs — classification or regression targets for graph-structured data such as molecules, networks, or knowledge bases.

Most implemented papers

How Attentive are Graph Attention Networks?

tech-srl/how_attentive_are_gats ICLR 2022

Because GATs use a static attention mechanism, there are simple graph problems that they cannot express: in a controlled problem, we show that static attention hinders GATs from even fitting the training data.
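
The static/dynamic distinction can be made concrete: in GAT-style scoring the key's contribution to the attention logit is additive and query-independent, so every query ranks the keys identically, while a GATv2-style score (nonlinearity applied before the attention vector) can let each query select a different key. The weights below are hand-picked toy values for illustration, not anything from the paper's code.

```python
def leaky_relu(x, slope=0.2):
    return x if x >= 0 else slope * x

# GAT-style (static) scoring: e(q, k) = LeakyReLU(s_q(q) + s_k(k)).
# The key term s_k(k) is query-independent and LeakyReLU is monotone,
# so the ranking of keys is the same for every query.
def gat_score(q, k, a_q=(0.3, -0.7), a_k=(0.5, 0.9)):
    s_q = sum(a * x for a, x in zip(a_q, q))
    s_k = sum(a * x for a, x in zip(a_k, k))
    return leaky_relu(s_q + s_k)

# GATv2-style (dynamic) scoring: e(q, k) = a . LeakyReLU(W[q||k] + b).
# Hand-picked (hypothetical) weights that make each one-hot query
# attend most to the matching one-hot key.
def gatv2_score(q, k):
    bias = (-1.0, -1.0)
    hidden = (q[0] + k[0] + bias[0], q[1] + k[1] + bias[1])
    return sum(leaky_relu(h) for h in hidden)

keys = [(1.0, 0.0), (0.0, 1.0)]
queries = [(1.0, 0.0), (0.0, 1.0)]

def argmax_key(score, q):
    return max(range(len(keys)), key=lambda j: score(q, keys[j]))

print([argmax_key(gat_score, q) for q in queries])    # [1, 1]: same key for all queries
print([argmax_key(gatv2_score, q) for q in queries])  # [0, 1]: each query picks its own key
```

With any choice of `a_q`/`a_k` the first list stays constant across queries; no static weights can reproduce the second behavior, which is the paper's core expressiveness argument.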

Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification

Optimization-AI/LibAUC ICCV 2021

Our studies demonstrate that the proposed DAM method outperforms optimizing the cross-entropy loss by a large margin, and also achieves better performance than optimizing the existing AUC square loss on these medical image classification tasks.

Do Transformers Really Perform Bad for Graph Representation?

Microsoft/Graphormer 9 Jun 2021

Our key insight is that to utilize Transformers on graphs, the structural information of the graph must be effectively encoded into the model.
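
One way such structural encoding can work — a hedged sketch of Graphormer's spatial-encoding idea, with hypothetical placeholder values rather than trained parameters — is to add a bias, indexed by shortest-path distance, to every pairwise attention logit before the softmax:

```python
from collections import deque
import math

def shortest_path_dists(adj):
    """All-pairs shortest-path hop counts via BFS (adj: neighbor lists)."""
    n = len(adj)
    dist = [[math.inf] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] == math.inf:
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    return dist

def attention_weights(logits_row, dists_row, bias):
    """Softmax over one node's attention logits plus a distance-indexed bias."""
    fallback = min(bias.values())  # used for unreachable nodes
    biased = [l + bias.get(d, fallback) for l, d in zip(logits_row, dists_row)]
    m = max(biased)
    exps = [math.exp(b - m) for b in biased]
    z = sum(exps)
    return [e / z for e in exps]

# Path graph 0-1-2: with a bias that decays with distance, node 0
# attends more to nearby nodes even when the raw logits are uniform.
adj = [[1], [0, 2], [1]]
dist = shortest_path_dists(adj)
bias = {0: 1.0, 1: 0.5, 2: 0.0}  # hypothetical learned bias per distance
w = attention_weights([0.0, 0.0, 0.0], dist[0], bias)
```

In the actual model the per-distance bias values are learned parameters shared across the graph; this sketch only shows where they enter the attention computation.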

DeeperGCN: All You Need to Train Deeper GCNs

dmlc/dgl 13 Jun 2020

Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.

Recipe for a General, Powerful, Scalable Graph Transformer

rampasek/GraphGPS 25 May 2022

We propose a recipe for building a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks.

Global Self-Attention as a Replacement for Graph Convolution

shamim-hussain/egt_pytorch 7 Aug 2021

The resultant framework - which we call Edge-augmented Graph Transformer (EGT) - can directly accept, process and output structural information of arbitrary form, which is important for effective learning on graph-structured data.

Nested Graph Neural Networks

muhanzhang/nestedgnn NeurIPS 2021

The key is to make each node representation encode a rooted subgraph around it rather than a subtree.
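
The rooted-subgraph idea can be sketched concretely: for each node, extract its height-h neighborhood and let a base GNN embed that whole subgraph (instead of the subtree that ordinary message passing implicitly unfolds). This is a minimal illustration of the extraction step only, with assumed names:

```python
from collections import deque

def khop_subgraph(adj, root, h):
    """Return the nodes of the height-h subgraph rooted at `root`
    (BFS to depth h over neighbor lists `adj`)."""
    seen = {root}
    q = deque([(root, 0)])
    while q:
        u, d = q.popleft()
        if d == h:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append((v, d + 1))
    return sorted(seen)

# 4-cycle 0-1-2-3-0: the height-1 subgraph rooted at node 0
adj = [[1, 3], [0, 2], [1, 3], [2, 0]]
print(khop_subgraph(adj, 0, 1))  # [0, 1, 3]
print(khop_subgraph(adj, 0, 2))  # [0, 1, 2, 3]
```

Two nodes whose message-passing subtrees look identical can still have different rooted subgraphs (e.g. differing cycle structure), which is what gives the nested approach its extra distinguishing power.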

Wasserstein Embedding for Graph Learning

navid-naderi/WEGL ICLR 2021

We present Wasserstein Embedding for Graph Learning (WEGL), a novel and fast framework for embedding entire graphs in a vector space, in which various machine learning models are applicable for graph-level prediction tasks.

Improving Graph Property Prediction with Generalized Readout Functions

EricAlcaide/generalized-readout-phase 21 Sep 2020

Graph property prediction has drawn increasing attention in recent years: graphs are among the most general data structures, since they can contain an arbitrary number of nodes and connections between them, and they underpin classification and regression tasks on many kinds of data (networks, molecules, knowledge bases, ...).
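
For context, the standard readout (pooling) functions that this line of work generalizes are simple permutation-invariant aggregators mapping a set of node embeddings to one graph-level vector. The sketch below uses illustrative names, not the paper's API:

```python
def readout(node_embs, mode="sum"):
    """Pool node embeddings (equal-length tuples, one per node) into a
    single graph-level vector with a permutation-invariant aggregator."""
    cols = list(zip(*node_embs))  # one tuple per feature dimension
    if mode == "sum":
        return tuple(sum(c) for c in cols)
    if mode == "mean":
        return tuple(sum(c) / len(c) for c in cols)
    if mode == "max":
        return tuple(max(c) for c in cols)
    raise ValueError(f"unknown readout mode: {mode}")

embs = [(1.0, 2.0), (3.0, 4.0)]
print(readout(embs, "sum"))   # (4.0, 6.0)
print(readout(embs, "mean"))  # (2.0, 3.0)
print(readout(embs, "max"))   # (3.0, 4.0)
```

Generalized readouts interpolate between or learn such aggregators, since no single fixed choice is best across all graph-level tasks.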

Graph convolutions that can finally model local structure

RBrossard/GINEPLUS 30 Nov 2020

Despite quick progress in the last few years, recent studies have shown that modern graph neural networks can still fail at very simple tasks, like detecting small cycles.