Graph Regression
88 papers with code • 12 benchmarks • 17 datasets
Graph regression is similar to graph classification, but the model predicts a continuous graph-level target instead of a class label, so it uses a different loss function (e.g., mean squared error) and evaluation metric.
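A minimal stdlib sketch of what that difference looks like in practice (illustrative names only, no particular framework assumed): node embeddings are pooled into one graph-level vector, a regression head maps it to a scalar, and the loss is squared error rather than cross-entropy.

```python
# Plain-Python sketch of a graph-regression readout. The pooling, head, and
# loss names are illustrative; only the loss/head differ from classification.

def mean_pool(node_embeddings):
    """Average node embeddings into a single graph-level vector."""
    dim = len(node_embeddings[0])
    n = len(node_embeddings)
    return [sum(v[i] for v in node_embeddings) / n for i in range(dim)]

def linear_head(graph_vec, weights, bias):
    """Map the pooled vector to a scalar target (e.g., a molecular property)."""
    return sum(w * x for w, x in zip(weights, graph_vec)) + bias

def mse(preds, targets):
    """Mean squared error, the typical graph-regression loss."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# Toy graph with three 2-d node embeddings.
nodes = [[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]]
g = mean_pool(nodes)                    # [2.0, 1.0]
pred = linear_head(g, [0.5, 0.5], 0.0)  # 1.5
loss = mse([pred], [2.0])               # 0.25
```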
Libraries
Use these libraries to find Graph Regression models and implementations.
Latest papers
Substructure Aware Graph Neural Networks
Despite the great achievements of Graph Neural Networks (GNNs) in graph learning, conventional GNNs struggle to exceed the expressive power of the first-order Weisfeiler-Leman graph isomorphism test (1-WL), because the GNN propagation paradigm is consistent with 1-WL. Based on the fact that graphs are easier to distinguish through their subgraphs, we propose a novel neural network framework called Substructure Aware Graph Neural Networks (SAGNN) to address these issues.
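A toy illustration (not the SAGNN architecture itself) of why substructure awareness helps: the 6-cycle and the disjoint union of two triangles are both 2-regular, so 1-WL assigns them identical colorings, yet a simple triangle count distinguishes them.

```python
# Stdlib sketch: counting a substructure (triangles) separates two graphs
# that 1-WL cannot tell apart, because both are 2-regular.

from itertools import combinations

def triangle_count(edges, n):
    """Count triangles in an undirected graph on nodes 0..n-1."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1 for a, b, c in combinations(range(n), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

cycle6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
two_triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]

print(triangle_count(cycle6, 6))         # 0
print(triangle_count(two_triangles, 6))  # 2
```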
Path Neural Networks: Expressive and Accurate Graph Neural Networks
We derive three different variants of the PathNN model that aggregate single shortest paths, all shortest paths and all simple paths of length up to K. We prove that two of these variants are strictly more powerful than the 1-WL algorithm, and we experimentally validate our theoretical results.
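A sketch of the path-extraction step only (the aggregation and the exact PathNN update are in the paper): enumerating all simple paths of length up to K from each node via depth-first search.

```python
# Stdlib sketch: all simple paths of length 1..k (in edges) starting at a
# node, found by DFS. The graph and k are illustrative.

def simple_paths_from(adj, start, k):
    """Return every simple path of edge-length 1..k starting at `start`."""
    paths = []
    def dfs(path):
        if len(path) - 1 == k:      # reached maximum length
            return
        for nxt in adj[path[-1]]:
            if nxt not in path:     # simple path: no repeated nodes
                paths.append(path + [nxt])
                dfs(path + [nxt])
    dfs([start])
    return paths

# Toy graph: a triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
paths = simple_paths_from(adj, 0, 2)
# 5 paths: [0,1], [0,1,2], [0,2], [0,2,1], [0,2,3]
```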
CIN++: Enhancing Topological Message Passing
Our message passing scheme accounts for the aforementioned limitations by also letting cells receive lower messages within each layer.
Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance
In contrast to equivariant architectures, we use an arbitrary base model such as an MLP or a transformer and symmetrize it to be equivariant to the given group by employing a small equivariant network that parameterizes the probabilistic distribution underlying the symmetrization.
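A toy illustration of the symmetrization idea (the paper learns the distribution with a small equivariant network; here we average uniformly over the full permutation group, which is only feasible for tiny inputs): wrapping any order-sensitive base function in an average over group actions yields a permutation-invariant function.

```python
# Stdlib sketch: exact symmetrization over the permutation group S_n.
# `base_model` is an arbitrary stand-in for an MLP; all names illustrative.

from itertools import permutations

def base_model(x):
    """An arbitrary, order-sensitive function of a sequence."""
    return sum((i + 1) * v for i, v in enumerate(x))

def symmetrized(f, x):
    """Average f over all reorderings of x -> permutation-invariant output."""
    perms = list(permutations(x))
    return sum(f(p) for p in perms) / len(perms)

a = [1.0, 2.0, 3.0]
b = [3.0, 1.0, 2.0]   # same multiset, different order
# base_model(a) != base_model(b), but the symmetrized values agree.
```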
Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman
We theoretically prove that even if we fix the space complexity to $O(n^k)$ (for any $k\geq 2$) in $(k, t)$-FWL, we can construct an expressiveness hierarchy up to solving the graph isomorphism problem.
The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles
However, the dynamic (i.e., input-dependent) nature of these pathways makes it difficult to prune dense self-attention during training.
Graph Inductive Biases in Transformers without Message Passing
Graph inductive biases are crucial for Graph Transformers, and previous works incorporate them using message-passing modules and/or positional encodings.
Semi-Supervised Graph Imbalanced Regression
The training data balance is achieved by (1) pseudo-labeling more graphs for under-represented labels with a novel regression confidence measurement and (2) augmenting graph examples in latent space for remaining rare labels after data balancing with pseudo-labels.
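A hedged sketch of step (1) only: keep a pseudo-labeled graph when its predicted label falls in an under-represented region and the model's confidence clears a threshold. The confidence measurement itself is the paper's contribution; here it is just a number in [0, 1] attached to each prediction, and all names are illustrative.

```python
# Stdlib sketch of confidence-thresholded pseudo-label selection for rare
# label ranges. The rare range and threshold are illustrative assumptions.

def select_pseudo_labels(candidates, rare_range, min_conf):
    """candidates: list of (graph_id, predicted_label, confidence)."""
    lo, hi = rare_range
    return [gid for gid, label, conf in candidates
            if lo <= label <= hi and conf >= min_conf]

candidates = [
    ("g1", 0.90, 0.95),  # rare label, confident   -> keep
    ("g2", 0.85, 0.40),  # rare label, low conf.   -> drop
    ("g3", 0.20, 0.99),  # common label            -> drop
]
kept = select_pseudo_labels(candidates, rare_range=(0.8, 1.0), min_conf=0.9)
# kept == ["g1"]
```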
Graph Propagation Transformer for Graph Representation Learning
The core insight of our method is to fully consider the information propagation among nodes and edges in a graph when building the attention module in the transformer blocks.
DRew: Dynamically Rewired Message Passing with Delay
Message passing neural networks (MPNNs) have been shown to suffer from the phenomenon of over-squashing that causes poor performance for tasks relying on long-range interactions.