Our framework avoids the "neighbor explosion" problem of GNNs by combining quantized representations with a low-rank approximation of the graph convolution matrix.
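The idea behind the low-rank trick can be sketched as follows: if the normalized adjacency is approximated by a rank-r factorization, propagation costs O(nrd) instead of touching every edge, and activations can be quantized to shrink memory further. This is an illustrative NumPy sketch, not the paper's implementation; the `quantize` scheme and the truncated-SVD factorization are assumptions for demonstration.

```python
import numpy as np

def low_rank_propagate(U, S, Vt, X):
    """Propagate features with a rank-r factorization A ≈ U @ diag(S) @ Vt,
    avoiding an explicit neighbor aggregation over every edge."""
    return U @ (S[:, None] * (Vt @ X))

def quantize(X, bits=8):
    """Uniform symmetric quantization of activations (hypothetical scheme)."""
    scale = (np.abs(X).max() + 1e-12) / (2 ** (bits - 1) - 1)
    q = np.round(X / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy graph: symmetrically normalized adjacency of a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(deg[:, None] * deg[None, :])

U, S, Vt = np.linalg.svd(A_hat)
r = 2                                   # truncation rank
X = np.random.default_rng(0).normal(size=(4, 3))   # node features
full = A_hat @ X                        # exact propagation
approx = low_rank_propagate(U[:, :r], S[:r], Vt[:r], X)
```

With the full rank kept, the factorized propagation reproduces `A_hat @ X` exactly; truncating to rank r trades accuracy for memory and compute.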
We observe that in most cases, we need both a suitable domain generalization algorithm and a strong GNN backbone model to optimize out-of-distribution test performance.
We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.
In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.
Data augmentation helps neural networks generalize better by enlarging the training set, but how to effectively augment graph data to enhance the performance of graph neural networks (GNNs) remains an open question.
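One simple family of graph augmentations perturbs the graph's structure rather than its node features. The sketch below shows random edge dropping (a DropEdge-style augmentation) as one concrete example; it is illustrative only and is not the specific augmentation method of the work above.

```python
import numpy as np

def drop_edges(edge_index, p=0.2, rng=None):
    """Randomly drop a fraction p of edges (DropEdge-style augmentation).

    edge_index: (2, E) integer array of source/target node ids.
    Returns a new (2, E') array with each edge kept independently
    with probability 1 - p.
    """
    rng = np.random.default_rng(rng)
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

# Toy edge list for a small directed graph.
edges = np.array([[0, 1, 2, 3, 0],
                  [1, 2, 3, 0, 2]])
aug = drop_edges(edges, p=0.4, rng=0)
```

At training time a fresh augmented graph is typically sampled each epoch, so the GNN sees many structural variants of the same underlying graph.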
GANs, however, are designed in a model-free fashion where no additional information about the underlying distribution is available.
It consists of two alternative transfer methods based on representation learning with auto-encoders: a passive approach using transductive principal component analysis and an active approach that uses a correlation alignment loss term.
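The correlation alignment term mentioned above can be illustrated with the standard CORAL loss, which penalizes the distance between the second-order statistics (feature covariances) of the source and target domains. This is a generic sketch of that loss, assuming the usual 1/(4d²) normalization; the auto-encoder context it plugs into is omitted.

```python
import numpy as np

def coral_loss(Xs, Xt):
    """CORAL loss: squared Frobenius distance between the source and
    target feature covariance matrices, normalized by 4*d^2."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)   # source covariance, (d, d)
    Ct = np.cov(Xt, rowvar=False)   # target covariance, (d, d)
    return np.sum((Cs - Ct) ** 2) / (4 * d * d)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5))                      # source features
Xt = rng.normal(loc=1.0, scale=2.0, size=(100, 5))  # shifted target features
loss = coral_loss(Xs, Xt)
```

Minimizing this term alongside the reconstruction loss encourages the encoder to produce representations whose covariance structure matches across domains.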
A major challenge in building such models is designing hand-crafted features that are effective for the prediction task at hand.
We design a method to optimize the global mean first-passage time (GMFPT) of multiple random walkers searching complex networks for a general target, without specifying the properties of the target node.
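For intuition, the single-walker mean first-passage time to a target node can be computed exactly from the fundamental matrix of the absorbing Markov chain; averaging over start nodes gives a global MFPT. This sketch covers only that single-walker baseline on a toy graph, not the multi-walker optimization described above.

```python
import numpy as np

def gmfpt(A, target):
    """Global mean first-passage time of a single random walker to `target`:
    the expected hitting time from each non-target node, obtained by solving
    (I - Q) t = 1 for the sub-chain Q that excludes the target, then averaged
    over a uniform start distribution."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]             # chain restricted to non-target nodes
    t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return t.mean()

# 4-node cycle; hitting times to node 0 from distance i are i*(n - i) = 3, 4, 3.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
T = gmfpt(A, target=0)   # (3 + 4 + 3) / 3 = 10/3
```

The multi-walker GMFPT generally decreases with the number of walkers, which is what makes the target-agnostic optimization in the paper non-trivial.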