Laplacian Positional Encodings

Introduced by Dwivedi et al. in Benchmarking Graph Neural Networks

Laplacian eigenvectors are a natural generalization of Transformer positional encodings (PE) to graphs: the Laplacian eigenvectors of a discrete line graph (the graph underlying an NLP token sequence) are precisely the cosine and sinusoidal functions used in the original PE. They encode distance-aware information, i.e., nearby nodes receive similar positional features and distant nodes receive dissimilar ones.
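
To make the analogy concrete, here is a minimal numerical check (a NumPy sketch of our own, not from the paper): the Laplacian eigenvectors of a path graph on n nodes are exactly sampled cosines (the DCT-II basis), the same family of sinusoids a Transformer PE uses.

```python
import numpy as np

# Path graph on n nodes: the "NLP graph" of a token sequence.
n = 8
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # adjacency
L = np.diag(A.sum(axis=1)) - A                                # unnormalized Laplacian

eigvals, eigvecs = np.linalg.eigh(L)  # columns sorted by ascending eigenvalue

# Known closed form: the k-th eigenvector samples cos(pi*k*(i+0.5)/n),
# i.e. the DCT-II basis -- the same sinusoids as Transformer PEs.
for k in range(1, 4):
    cosine = np.cos(np.pi * k * (np.arange(n) + 0.5) / n)
    cosine /= np.linalg.norm(cosine)
    # Eigenvectors are defined only up to sign, so compare |<u, v>|.
    assert abs(eigvecs[:, k] @ cosine) > 1 - 1e-8
```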

Hence, Laplacian Positional Encoding (PE) is a general method for encoding node positions in a graph. For each node, its Laplacian PE is given by its entries in the eigenvectors associated with the k smallest non-trivial eigenvalues of the graph Laplacian.
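
As a sketch of that recipe (dense NumPy adjacency; the helper name laplacian_pe is our own), the following computes k-dimensional Laplacian PEs from the symmetric normalized Laplacian used in Dwivedi et al., including the random per-eigenvector sign flip the paper applies during training:

```python
import numpy as np

def laplacian_pe(A, k):
    """Sketch: k-dim Laplacian PE for a graph with symmetric adjacency
    matrix A (assumes no isolated nodes and k < number of nodes)."""
    n = A.shape[0]
    d_inv_sqrt = A.sum(axis=1) ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues
    pe = eigvecs[:, 1:k + 1]               # drop the trivial (constant) eigenvector
    # Eigenvector signs are arbitrary; flipping them at random during
    # training pushes the model toward sign invariance.
    pe = pe * np.random.choice([-1.0, 1.0], size=k)
    return pe                              # shape (n, k): one PE per node

# Usage: a 6-node cycle graph (hypothetical example).
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
print(laplacian_pe(A, k=3).shape)  # (6, 3)
```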

Source: Benchmarking Graph Neural Networks

Tasks


Task Papers Share
Node Classification 34 5.71%
Graph Learning 34 5.71%
Graph Representation Learning 24 4.03%
Graph Neural Network 22 3.70%
Prediction 18 3.03%
Link Prediction 17 2.86%
Graph Regression 17 2.86%
Graph Classification 17 2.86%
Graph Generation 12 2.02%

Components


Component   Type
LapEigen    Graph Embeddings

Categories

Graph Embeddings