November 03, 2021

Papers with Code Newsletter #19

👋 Welcome to the 19th issue of the Papers with Code newsletter. This week we cover:

  • a brief update on the latest developments in graph neural networks (GNNs),
  • a list of recent applications of GNNs,
  • top trending papers for October 2021 on Papers with Code,
  • ... and much more.

Progress in Graph Neural Networks 📄

Graph neural networks (GNNs) are being widely adopted across diverse applications and domains. This is due in part to their effectiveness on complex data structures, their improved performance and scalability, and the growing availability of methods and tooling. In this special edition of the newsletter we briefly review GNNs and their latest developments, progress, and applications.

Scaling up Graph Neural Networks 

In the VQ-GNN framework, each mini-batch message passing step (left) is approximated by a VQ codebook update (middle) and an approximated message passing step (right). Figure source: Ding et al. (2021)

Most state-of-the-art GNNs involve graph convolutions realized by message passing, between immediate neighbours or beyond. Scaling GNNs to large graph-structured data typically requires sampling techniques that consider only a small subset of the messages. Such sampling is difficult to apply to GNNs that utilize many-hops-away or global context in each layer, can make performance unstable across tasks and datasets, and does not speed up model inference. Ding et al. (2021) recently proposed VQ-GNN, a framework that scales up convolution-based GNNs using vector quantization.

In contrast to sampling techniques, VQ-GNN preserves all messages passed to a mini-batch of nodes. This is done by learning and updating a small number of quantized reference vectors of global node representations, using vector quantization within each GNN layer. The framework avoids the "neighbour explosion" problem without compromising performance. VQ-GNN demonstrates scalability and competitive performance on large-graph node classification and link prediction benchmarks.
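To make the mechanism concrete, below is a minimal PyTorch sketch of per-layer vector quantization with a straight-through gradient estimator. This is an illustrative toy under simplifying assumptions, not the authors' implementation; the class name VQLayer, the codebook size, and the nearest-codeword assignment rule are all hypothetical choices.

```python
import torch
import torch.nn as nn

class VQLayer(nn.Module):
    """Toy vector-quantization layer (hypothetical sketch, not VQ-GNN's code).

    A small codebook of reference vectors summarizes global node
    representations, so messages from nodes outside the mini-batch can be
    approximated by their assigned codewords instead of being dropped.
    """
    def __init__(self, dim: int, num_codes: int = 256):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assign every node feature to its nearest codeword.
        dists = torch.cdist(x, self.codebook)   # (N, K)
        codes = dists.argmin(dim=1)             # (N,)
        quantized = self.codebook[codes]        # (N, dim)
        # Straight-through estimator: the forward pass uses the codewords,
        # while gradients flow back to x as if no quantization happened.
        return x + (quantized - x).detach()
```

In VQ-GNN proper, the codebook is kept in sync with the evolving node representations during training, and the messages flowing into a mini-batch from the rest of the graph are reconstructed from these quantized vectors, which is what lets the method keep all messages where samplers drop them.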

🔗 Paper & Code | 📈 Results

Robustness of Graph Neural Networks

GPU memory consumption for a global attack using the proposed method versus a previous method. While both methods yield similar adversarial accuracy, the proposed method is considerably more memory efficient. Figure source: Geisler et al. (2021)

As covered in the previous section, graph neural networks still have room for improvement in terms of scalability; the same holds for their robustness to adversarial perturbations. The majority of relevant work studies the vulnerability of GNNs to adversarial attacks on small graphs. In contrast, Geisler et al. (2021) look at the problem of improving the robustness of GNNs at scale.

As GNNs are deployed in real-world applications, it becomes important to study their robustness at scale. To address this challenge, the work introduces attacks that remain practical and memory efficient even when the number of attack parameters is quadratic in the number of nodes. A scalable defense is also proposed: an aggregation function, Soft Median, that is provably robust and effective at all scales. The defense can reduce the attack's success rate from around 90% to 1%. The proposed attacks and defense are evaluated with standard GNNs on graphs of up to 111 million nodes. This is a step forward in assessing and improving the robustness of GNN systems for real-world applications.
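As a rough illustration of the robust aggregation idea, the sketch below computes a soft median of one node's incoming messages: a weighted mean whose weights decay with distance from the dimension-wise median, so outlying messages (e.g. from adversarially inserted edges) are down-weighted. The function name, the temperature parameter, and the exact weighting scheme are assumptions for illustration, not the paper's precise formulation.

```python
import torch

def soft_median(messages: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Illustrative soft-median aggregation (a hedged sketch, not the
    authors' exact code). `messages` has shape (n, d): the n incoming
    messages for a single node."""
    n, d = messages.shape
    # The dimension-wise median serves as a robust anchor point.
    anchor = messages.median(dim=0).values          # (d,)
    # Messages far from the anchor receive exponentially smaller weights.
    dist = torch.norm(messages - anchor, dim=1)     # (n,)
    weights = torch.softmax(-dist / (temperature * d ** 0.5), dim=0)
    # Weighted mean; the softmax weights already sum to one.
    return weights @ messages                       # (d,)
```

Unlike a hard median, this aggregation is differentiable and smoothly interpolates toward a plain mean as the temperature grows, which makes it usable as a drop-in aggregation inside standard message-passing layers.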

🔗 Paper & Code

Nested Graph Neural Networks

Implementation of the NGNN framework. Figure source: Zhang and Li (2021)

GNNs obtain node representations by iteratively aggregating neighbouring node features into a center node. Each node representation thus encodes a rooted subtree around its center node, and these representations are then pooled into a single whole-graph representation. The expressivity of such rooted subtrees is limited when representing a non-tree graph. To address this, Zhang and Li (2021) propose Nested Graph Neural Networks (NGNNs), a framework that represents a graph with rooted subgraphs instead of subtrees.

This approach improves expressivity by making each node representation encode a subgraph around the node rather than a subtree. As shown in the figure above, NGNN extracts a local subgraph around each node, then applies a base GNN with a pooling layer to each rooted subgraph to learn a subgraph representation, which is used as the root node's final representation. A graph pooling layer is then applied to the final node representations to obtain a whole-graph representation. NGNN can be combined with various well-established GNNs in a plug-and-play manner and achieves competitive results on several benchmark datasets.
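The following is a minimal sketch of that pipeline, assuming PyTorch Geometric's k_hop_subgraph utility and mean pooling throughout; the helper name ngnn_graph_rep, the hop count, and the pooling choices are illustrative assumptions rather than the paper's exact design.

```python
import torch
from torch_geometric.utils import k_hop_subgraph

def ngnn_graph_rep(x, edge_index, base_gnn, num_hops: int = 2) -> torch.Tensor:
    """Illustrative NGNN forward pass (a sketch, not the authors' code).

    x:          (N, d) node features
    edge_index: (2, E) edge list in COO format
    base_gnn:   any GNN mapping (features, edge_index) -> node embeddings
    """
    node_reps = []
    for v in range(x.size(0)):
        # 1) Extract the rooted subgraph around node v.
        subset, sub_edges, _, _ = k_hop_subgraph(
            v, num_hops, edge_index, relabel_nodes=True, num_nodes=x.size(0))
        # 2) Run the base GNN on the subgraph and pool its node embeddings
        #    into a single vector: node v's final representation.
        h = base_gnn(x[subset], sub_edges)          # (|subset|, d')
        node_reps.append(h.mean(dim=0))
    # 3) Graph-level pooling over all rooted-subgraph representations.
    return torch.stack(node_reps).mean(dim=0)       # (d',)
```

Because the base GNN only ever sees small rooted subgraphs, any well-established GNN can be dropped in unchanged, which is the plug-and-play property mentioned above.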

🔗 Paper & Code | 📈 Results

Recent Applications of Graph Neural Networks

The following papers cover other recent developments and applications of GNNs:

📄 GNNs for improving propositional satisfiability solving - Wang et al. (2021)

🟡 GNNs for imbalanced node classification - Wang et al. (2021)

🚗 GNNs for transportation scenario planning - Peregrino et al. (2021)

🏆 GNNs for collaborative filtering - Chen et al. (2020)

🧬 GNNs for protein interface prediction - Fout et al. (2017)

🚦 GNNs for traffic forecasting - Zhang et al. (2021)

💠 GNNs for learning graph cellular automata - Grattarola et al. (2021)

🦠 GNNs for multiwave COVID-19 prediction - Xue et al. (2021)

🧠 GNNs for multimodal neuroimaging fusion learning - Shi et al. (2021)

🔐 GNNs for vulnerability detection - Nguyen et al. (2021)


For a full collection of graph neural networks, associated papers, applications, datasets, and open-source code, check out the Graph Models section.


Trending on Papers with Code 📈

Top 10 Trending Papers of October 2021 🏆

Here are the top ten trending papers of October 2021 on Papers with Code:

📄 ResNet strikes back!

📄 Non-deep Networks

📄 8-bit Optimizers via Block-wise Quantization

📄 MobileViT

📄 CIPS-3D: A style-based, 3D-aware image generator

📄 FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling

📄 ADOP: Approximate Differentiable One-Pixel Point Rendering

📄 Parameter Prediction for Unseen Deep Architectures

📄 SaLinA: Sequential Learning of Agents

📄 A few more examples may be worth billions of parameters 


Trending Datasets

PubTables-1M - a large-scale dataset consisting of one million tables from PubMed Central Open Access scientific articles.

MedMNIST v2 - a large-scale MNIST-like benchmark dataset for biomedical image classification.

SCICAP - a new image captioning dataset that contains real-world scientific figures and captions.

Ego4D - a massive-scale egocentric video dataset and benchmark suite.

Trending Libraries & Tools

MiniHack - a powerful sandbox framework for easily designing novel RL environments. 

SCENIC - a JAX library for computer vision research, with the goal of facilitating rapid experimentation, prototyping, and research of new architectures and models.

Community Highlights ✍️

We would like to thank:

  • @JiahengWei for contributions to leaderboards including several benchmark results for the CIFAR-100N dataset.
  • @kevalmorabia97 for several contributions, including a new dataset for webpage information extraction.
  • @danielegrattarola for several contributions to Methods, including the addition of graph models like MinCutPool.
  • @dddmeng for indexing the VerSe dataset used for the vertebrae segmentation challenge.
  • @EvgeniiZh and @MinghuiChen for several contributions to leaderboards.

Special thanks to all our contributors for their ongoing contributions to Papers with Code.

---

See previous issues

Join us on Slack, LinkedIn, and Twitter