From Spectrum Wavelet to Vertex Propagation: Graph Convolutional Networks Based on Taylor Approximation

Graph convolutional networks (GCNs) have recently been used to extract the underlying structure of datasets with a small amount of labeled data and high-dimensional features. Most existing GCNs rely on a first-order Chebyshev approximation of the graph wavelet kernel. Such a generic propagation model does not always suit the diverse datasets and features encountered in practice. This work revisits the fundamentals of graph wavelets and explores the use of signal propagation in the vertex domain to approximate spectral wavelet kernels. We first derive the conditions under which graph wavelet kernels can be represented via vertex propagation. We then propose alternative propagation models for GCN layers based on Taylor expansion, leading to the Taylor-based GCN (TGCN). We further analyze the choice of graph representation for TGCN. Experiments on citation networks, multimedia datasets, and synthetic graphs demonstrate the advantage of TGCN over traditional GCN methods on node classification problems.
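To make the idea concrete, below is a minimal sketch of a graph-convolution layer that approximates a spectral wavelet kernel g(L) by a truncated Taylor polynomial in a graph operator, so that each additional term corresponds to one more hop of vertex-domain propagation. The choice of operator (e.g., a normalized Laplacian versus an adjacency matrix), the truncation order, and the learnable per-term coefficients are illustrative assumptions, not details taken from the abstract.

```python
import torch
import torch.nn as nn


class TaylorGraphConv(nn.Module):
    """Approximate a spectral wavelet kernel with a truncated Taylor polynomial:
        g(L) X W  ≈  sum_{k=0}^{K} theta_k L^k X W,
    realized entirely by vertex-domain propagation (repeated multiplication by L).
    The operator L, order K, and coefficient parameterization are assumptions.
    """

    def __init__(self, in_dim: int, out_dim: int, num_terms: int = 3):
        super().__init__()
        self.num_terms = num_terms
        # One learnable Taylor coefficient per polynomial term (k = 0..K).
        self.theta = nn.Parameter(torch.ones(num_terms + 1))
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, L: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # L: (N, N) graph operator; x: (N, in_dim) node features.
        out = self.theta[0] * x          # k = 0 term: no propagation
        term = x
        for k in range(1, self.num_terms + 1):
            term = L @ term              # one more hop of vertex propagation
            out = out + self.theta[k] * term
        return self.linear(out)
```

Setting `num_terms = 1` with fixed coefficients recovers a first-order propagation rule similar to the standard Chebyshev-based GCN layer, while larger orders let the layer mimic smoother or sharper wavelet kernels through multi-hop propagation.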
