no code implementations • 26 Mar 2024 • Huizhe Zhang, Jintang Li, Liang Chen, Zibin Zheng
However, the outstanding performance of Graph Transformers (GTs) comes at the cost of higher energy consumption and computational overhead.
1 code implementation • 30 May 2023 • Jintang Li, Huizhe Zhang, Ruofan Wu, Zulun Zhu, Baokun Wang, Changhua Meng, Zibin Zheng, Liang Chen
While contrastive self-supervised learning has become the de facto learning paradigm for graph neural networks, the pursuit of higher task accuracy requires a larger hidden dimensionality to learn informative and discriminative full-precision representations, raising largely overlooked concerns about computation, memory footprint, and energy consumption in real-world applications.