no code implementations • 4 Oct 2024 • Xinnan Dai, Haohao Qu, Yifen Shen, Bohang Zhang, Qihao Wen, Wenqi Fan, Dongsheng Li, Jiliang Tang, Caihua Shan
The benchmark encompasses both synthetic and real datasets, and a variety of models, with a total of 11 tasks and 7 models.
1 code implementation • 6 Jun 2024 • Bohang Zhang, Lingxiao Zhao, Haggai Maron
On the other hand, we prove that EPNN itself is bounded by a recently proposed class of Subgraph GNNs, implying that all these spectral invariant architectures are strictly less expressive than 3-WL.
1 code implementation • 29 May 2024 • Xixi Wu, Yifei Shen, Caihua Shan, Kaitao Song, Siwei Wang, Bohang Zhang, Jiarui Feng, Hong Cheng, Wei Chen, Yun Xiong, Dongsheng Li
This theoretical insight led us to integrate GNNs with LLMs to enhance overall performance.
no code implementations • 21 Feb 2024 • Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, LiWei Wang
Our results show that while these models are expressive enough to solve general DP tasks, contrary to expectations, they require a model size that scales with the problem size.
1 code implementation • 16 Jan 2024 • Bohang Zhang, Jingchu Gai, Yiheng Du, Qiwei Ye, Di He, LiWei Wang
Specifically, we identify a fundamental expressivity measure termed homomorphism expressivity, which quantifies the ability of GNN models to count graphs under homomorphism.
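To make the notion concrete, here is a minimal brute-force sketch (not the paper's method, just the underlying definition) of counting homomorphisms from a small pattern graph F into a graph G, i.e. maps that send every edge of F to an edge of G:

```python
from itertools import product

def count_homomorphisms(pattern_edges, pattern_n, graph_edges, graph_n):
    """Brute-force count of homomorphisms f: V(F) -> V(G), i.e. vertex
    maps under which every edge of the pattern F lands on an edge of G."""
    adj = set()
    for u, v in graph_edges:
        adj.add((u, v))
        adj.add((v, u))  # treat G as undirected
    count = 0
    for f in product(range(graph_n), repeat=pattern_n):
        if all((f[u], f[v]) in adj for u, v in pattern_edges):
            count += 1
    return count

# Triangle into triangle: only injective maps preserve edges, so 3! = 6.
triangle = [(0, 1), (1, 2), (2, 0)]
n_tri = count_homomorphisms(triangle, 3, triangle, 3)
print(n_tri)  # 6
```

Homomorphism expressivity then asks, for a given GNN family, which pattern graphs F the model can count in this sense.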
no code implementations • NeurIPS 2023 • Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, LiWei Wang
By using circuit complexity theory, we first give impossibility results showing that bounded-depth Transformers are unable to directly produce correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length.
1 code implementation • 14 Feb 2023 • Bohang Zhang, Guhao Feng, Yiheng Du, Di He, LiWei Wang
Recently, subgraph GNNs have emerged as an important direction for developing expressive graph neural networks (GNNs).
Ranked #1 on Subgraph Counting - C6 on Synthetic Graph
1 code implementation • 23 Jan 2023 • Bohang Zhang, Shengjie Luo, LiWei Wang, Di He
In this paper, we take a fundamentally different perspective to study the expressive power of GNNs beyond the WL test.
1 code implementation • 4 Oct 2022 • Bohang Zhang, Du Jiang, Di He, LiWei Wang
Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples.
no code implementations • NeurIPS 2021 • Jikai Jin, Bohang Zhang, Haiyang Wang, LiWei Wang
Distributionally robust optimization (DRO) is a widely used approach to learning models that are robust to distribution shift.
2 code implementations • ICLR 2022 • Bohang Zhang, Du Jiang, Di He, LiWei Wang
Recently, Zhang et al. (2021) developed a new neural network architecture based on $\ell_\infty$-distance functions, which possesses certified $\ell_\infty$ robustness by construction.
2 code implementations • 10 Feb 2021 • Bohang Zhang, Tianle Cai, Zhou Lu, Di He, LiWei Wang
This directly provides a rigorous guarantee of the certified robustness based on the margin of prediction outputs.
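The margin-based certificate can be sketched in a few lines (a simplified illustration, assuming each logit is 1-Lipschitz w.r.t. the $\ell_\infty$ norm, as in the construction above): an $\ell_\infty$ perturbation of size $\epsilon$ moves every logit by at most $\epsilon$, so the prediction cannot flip while the top-two margin exceeds $2\epsilon$:

```python
import numpy as np

def certified_radius(logits):
    """If every logit is 1-Lipschitz w.r.t. l_inf, a perturbation of
    size eps shrinks the top-two gap by at most 2*eps, so the predicted
    class is certified for all eps < (top1 - top2) / 2."""
    top2 = np.sort(logits)[-2:]
    return (top2[1] - top2[0]) / 2.0

r = certified_radius(np.array([2.0, 0.5, -1.0]))
print(r)  # 0.75
```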
1 code implementation • NeurIPS 2020 • Bohang Zhang, Jikai Jin, Cong Fang, LiWei Wang
Gradient clipping is commonly used in training deep neural networks, partly because of its effectiveness in mitigating the exploding gradient problem.
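For reference, the standard clip-by-global-norm operation the paper analyzes looks like this (a generic sketch, not the paper's code):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint l2 norm does
    not exceed max_norm; gradients below the threshold pass unchanged."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads], total

grads = [np.array([3.0, 4.0])]          # global norm 5
clipped, norm = clip_by_global_norm(grads, 1.0)
print(norm)                              # 5.0
print(np.linalg.norm(clipped[0]))        # ~1.0 (clipped to the threshold)
```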