Hyperparameter-free and Explainable Whole Graph Embedding

4 Aug 2021  ·  Hao Wang, Yue Deng, Linyuan Lü, Guanrong Chen

Graphs can be used to describe complex systems. Recently, whole graph embedding (graph representation learning), which compresses a graph into a compact lower-dimensional vector while preserving its intrinsic properties, has attracted much attention. However, most whole graph embedding methods suffer from drawbacks such as tedious hyperparameter tuning or poor explainability. This paper presents a simple and hyperparameter-free whole graph embedding method based on the DHC (Degree, H-index, and Coreness) theorem and Shannon Entropy (E), abbreviated as DHC-E. DHC-E offers a trade-off between simplicity and quality on supervised classification tasks involving molecular, social, and brain networks, and it also performs well in lower-dimensional graph visualization. Overall, DHC-E is a simple, hyperparameter-free, and explainable approach to whole graph embedding, with promising potential for graph classification and lower-dimensional graph visualization.
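To make the construction concrete, the sketch below shows one plausible reading of the DHC-E pipeline in Python: starting from node degrees, the H-index operator is iterated until it converges to coreness (as guaranteed by the DHC theorem), and the Shannon entropy of the normalized node values is recorded at each iteration; the resulting entropy sequence serves as the whole-graph embedding. This is a minimal illustration inferred from the abstract, not the authors' reference implementation; the helper names (`h_index`, `shannon_entropy`, `dhc_embedding`) and the use of `networkx` are assumptions of this sketch.

```python
# Sketch of the DHC-E idea as inferred from the abstract (an assumption, not
# the authors' reference code): iterate the H-index operator from node degrees
# toward coreness (DHC theorem) and record the Shannon entropy of the
# normalized node values at each step. The entropy sequence is the
# whole-graph embedding; its length is set by convergence, so no
# hyperparameters are needed.
import math
import networkx as nx


def h_index(values):
    """Classic H-index of a list of non-negative integers."""
    values = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(values, start=1):
        if v >= i:
            h = i
        else:
            break
    return h


def shannon_entropy(values):
    """Shannon entropy of the values normalized to a probability distribution."""
    total = sum(values)
    if total == 0:
        return 0.0
    probs = [v / total for v in values if v > 0]
    return -sum(p * math.log(p) for p in probs)


def dhc_embedding(G):
    """Entropy sequence of the degree -> H-index -> ... -> coreness iteration."""
    h = {v: G.degree(v) for v in G}  # zeroth order: node degrees
    embedding = [shannon_entropy(h.values())]
    while True:
        # Next-order H-index of each node, computed from its neighbors' values.
        new_h = {v: h_index([h[u] for u in G.neighbors(v)]) for v in G}
        if new_h == h:  # fixed point reached: values equal coreness (DHC theorem)
            break
        h = new_h
        embedding.append(shannon_entropy(h.values()))
    return embedding


if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(dhc_embedding(G))
```

Because the number of iterations to convergence depends on the graph, embeddings of different graphs may have different lengths; how they are aligned for a downstream classifier (e.g., by padding) is not addressed in this sketch.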
