no code implementations • 10 May 2021 • Keyulu Xu, Mozhi Zhang, Stefanie Jegelka, Kenji Kawaguchi
Our results show that the training of GNNs is implicitly accelerated by skip connections, greater depth, and a favorable label distribution.
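A minimal sketch of the skip-connection pattern this result refers to; the layer and the toy dense adjacency are illustrative assumptions, not the paper's code:

```python
# Illustrative sketch: a GCN-style layer where a skip connection adds the
# input features back to the aggregated output.
import torch
import torch.nn as nn

class SkipGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: [num_nodes, dim]; adj: [num_nodes, num_nodes], row-normalized
        h = torch.relu(self.lin(adj @ x))  # neighborhood aggregation
        return h + x                       # skip connection

x = torch.randn(5, 16)
adj = torch.full((5, 5), 0.2)              # toy dense, row-normalized adjacency
out = SkipGNNLayer(16)(x, adj)
```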
no code implementations • NeurIPS 2021 • Jingling Li, Mozhi Zhang, Keyulu Xu, John P. Dickerson, Jimmy Ba
Our framework measures a network's robustness via the predictive power of its representations: the test performance of a linear model trained on the learned representations using a small set of clean labels.
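A hedged sketch of this linear-probe measurement, with a stand-in backbone and random data in place of the paper's setup:

```python
# Freeze a trained network, take its representations, and fit a linear
# classifier on a small clean set; the probe's test accuracy is the
# representation's predictive power. `backbone` and the data are stand-ins.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # pretend pretrained
probe = nn.Linear(64, 10)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)

x_clean, y_clean = torch.randn(100, 32), torch.randint(0, 10, (100,))
with torch.no_grad():
    feats = backbone(x_clean)              # frozen representations
for _ in range(200):
    loss = nn.functional.cross_entropy(probe(feats), y_clean)
    opt.zero_grad(); loss.backward(); opt.step()
```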
1 code implementation • 28 Sep 2020 • Peiyuan Liao, Han Zhao, Keyulu Xu, Tommi Jaakkola, Geoffrey Gordon, Stefanie Jegelka, Ruslan Salakhutdinov
While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level information about sensitive attributes.
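A toy illustration of the neighborhood-aggregation scheme mentioned above and why it can expose neighbor information; the graph and features are made up for the example:

```python
# Each node's embedding is a function of its neighbors' features, so an
# adversary observing embeddings sees a mixture of potentially sensitive inputs.
import torch

x = torch.randn(4, 8)                       # node features (possibly sensitive)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float)
deg = adj.sum(dim=1, keepdim=True)
h = (adj @ x) / deg                         # mean aggregation over neighbors
# h[i] now encodes information about node i's neighborhood, not just node i.
```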
3 code implementations • ICLR 2021 • Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka
Second, in connection with analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features.
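As a hedged sketch of what "task-specific non-linearity" can mean here: for shortest paths, a min-aggregation update mirrors Bellman-Ford, so the learned parts only need to fit (near-)linear functions and can extrapolate to larger graphs. The toy graph below is an assumption for illustration:

```python
import torch

INF = 1e9
w = torch.tensor([[0., 1., INF],
                  [1., 0., 2.],
                  [INF, 2., 0.]])           # edge weights
d = torch.tensor([0., INF, INF])            # distances from source node 0
for _ in range(2):                          # Bellman-Ford-style min updates
    d = torch.min(d.unsqueeze(0) + w.T, dim=1).values
print(d)                                    # tensor([0., 1., 3.])
```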
1 code implementation • 7 Sep 2020 • Tianle Cai, Shengjie Luo, Keyulu Xu, Di He, Tie-Yan Liu, Li-Wei Wang
We provide an explanation by showing that InstanceNorm serves as a preconditioner for GNNs, but this preconditioning effect is weaker with BatchNorm due to the heavy batch noise in graph datasets.
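A simplified sketch of instance-style normalization over a single graph's nodes, the preconditioning view above; this omits GraphNorm's learnable mean-scaling and is not the paper's implementation:

```python
import torch

def graph_instance_norm(x, eps=1e-5):
    # x: [num_nodes, dim]; normalize each channel over this graph's nodes
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

h = torch.randn(7, 16)
h_norm = graph_instance_norm(h)
```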
1 code implementation • ACL 2019 • Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, Jordan Boyd-Graber
Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings.
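A sketch of the standard orthogonal-Procrustes step behind such transformations, assuming a toy seed dictionary of paired vectors; this shows the general alignment technique, not this paper's specific method:

```python
# Given paired word vectors X (source) and Y (target), find the orthogonal
# map W minimizing ||XW - Y||_F via an SVD of X^T Y.
import torch

X = torch.randn(50, 300)                    # source-language seed vectors
Y = torch.randn(50, 300)                    # target-language seed vectors
U, _, Vh = torch.linalg.svd(X.T @ Y)
W = U @ Vh                                  # orthogonal alignment matrix
aligned = X @ W                             # map source into target space
```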
1 code implementation • NeurIPS 2019 • Simon S. Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, Keyulu Xu
While graph kernels (GKs) are easy to train and enjoy provable theoretical guarantees, their practical performance is limited by their expressive power, as the kernel function often depends on hand-crafted combinatorial features of graphs.
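To make "hand-crafted combinatorial features" concrete, here is a toy one-round Weisfeiler-Lehman relabeling with a histogram kernel; this is an illustrative classic GK, not the graph NTK the paper proposes:

```python
from collections import Counter

def wl_round(labels, neighbors):
    # labels: dict node -> label; neighbors: dict node -> list of nodes
    return {v: hash((labels[v], tuple(sorted(labels[u] for u in neighbors[v]))))
            for v in labels}

def wl_kernel(l1, l2):
    c1, c2 = Counter(l1.values()), Counter(l2.values())
    return sum(c1[k] * c2[k] for k in c1)   # label-histogram dot product

g = {0: [1, 2], 1: [0], 2: [0]}             # toy star graph
labels = {v: 1 for v in g}
print(wl_kernel(wl_round(labels, g), wl_round(labels, g)))
```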
2 code implementations • ICLR 2020 • Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka
Neural networks have succeeded in many reasoning tasks.
19 code implementations • ICLR 2019 • Keyulu Xu, Weihua Hu, Jure Leskovec, Stefanie Jegelka
Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures.
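A minimal sketch of the GIN-style update studied in this framework: sum aggregation (injective on multisets) followed by an MLP. Dense adjacency is used for brevity; real implementations use sparse message passing:

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim, eps=0.0):
        super().__init__()
        self.eps = eps
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, x, adj):
        # h_v = MLP((1 + eps) * x_v + sum_{u in N(v)} x_u)
        return self.mlp((1 + self.eps) * x + adj @ x)

x = torch.randn(6, 32)
adj = (torch.rand(6, 6) < 0.3).float()
out = GINLayer(32)(x, adj)
```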
4 code implementations • ICML 2018 • Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, Stefanie Jegelka
Furthermore, combining the JK framework with models such as Graph Convolutional Networks, GraphSAGE, and Graph Attention Networks consistently improves their performance.
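A hedged sketch of the jumping-knowledge idea: keep every layer's node representations and combine them at the end (here by concatenation, one of several JK aggregators) so each node can draw on neighborhoods of different ranges. The toy model is an illustrative assumption:

```python
import torch
import torch.nn as nn

class JKNet(nn.Module):
    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        self.out = nn.Linear(dim * num_layers, dim)

    def forward(self, x, adj):
        hs, h = [], x
        for layer in self.layers:
            h = torch.relu(layer(adj @ h))   # one aggregation step per layer
            hs.append(h)
        return self.out(torch.cat(hs, dim=-1))  # "jump" across all layers

x = torch.randn(5, 8)
adj = torch.eye(5)
out = JKNet(8)(x, adj)
```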
1 code implementation • ICLR 2018 • Chengtao Li, David Alvarez-Melis, Keyulu Xu, Stefanie Jegelka, Suvrit Sra
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination.
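A sketch of the sample-level discrimination idea: the critic pools a whole batch (one sample from a distribution) into a single statistic before scoring it, rather than scoring points individually. The architecture and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SampleCritic(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(dim, 64), nn.ReLU())
        self.score = nn.Linear(64, 1)

    def forward(self, sample):
        # sample: [n_points, dim] drawn from one distribution
        pooled = self.encode(sample).mean(dim=0)  # permutation-invariant pooling
        return self.score(pooled)                 # one score per sample, not per point

real = torch.randn(128, 2)
fake = torch.randn(128, 2) + 1.0
critic = SampleCritic(2)
loss = critic(fake).mean() - critic(real).mean()
```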