2 code implementations • 2 Mar 2024 • Chenhui Deng, Zichao Yue, Zhiru Zhang
To make the base model permutation equivariant, we integrate it with graph topology and node features separately, resulting in local and global equivariant attention models (see the sketch below).
Ranked #1 on Node Classification on pokec
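The local/global split can be pictured as two attention branches over the same node embeddings. Below is a minimal PyTorch sketch, not the authors' implementation: the class name `LocalGlobalAttention`, the shared Q/K/V projections, and the additive combination of the two branches are illustrative assumptions. The local branch masks attention scores to graph edges (topology), while the global branch attends over all node features; both are permutation equivariant, since relabeling nodes permutes the output rows identically.

```python
# Minimal sketch: local (edge-masked) plus global (unmasked) attention.
# Names and the way the branches are combined are assumptions for illustration.
import torch
import torch.nn as nn

class LocalGlobalAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) adjacency with self-loops.
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.t() / x.size(-1) ** 0.5            # (N, N) scores
        # Local branch: restrict attention to existing edges (graph topology).
        local = torch.softmax(
            scores.masked_fill(adj == 0, float("-inf")), dim=-1) @ v
        # Global branch: unmasked attention over all node features.
        global_ = torch.softmax(scores, dim=-1) @ v
        return local + global_                             # combine both views

# Tiny usage example on a 4-node path graph with self-loops.
x = torch.randn(4, 8)
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.float)
out = LocalGlobalAttention(8)(x, adj)  # (4, 8), equivariant to relabeling
```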
1 code implementation • 2 Mar 2024 • Chenhui Deng, Zichao Yue, Cunxi Yu, Gokce Sarar, Ryan Carey, Rajeev Jain, Zhiru Zhang
In this work, we propose HOGA, a novel attention-based model for learning circuit representations in a scalable and generalizable manner.
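As a rough picture of hop-wise attention, here is a minimal PyTorch sketch; `hop_features`, `HopAttention`, and the mean-pooling over hops are illustrative assumptions, not HOGA's actual code. The idea it captures: hop-wise features are precomputed once from the graph, so each node is processed as a fixed-length sequence of hop vectors, independent of graph size, which is what makes the approach scalable.

```python
# Minimal sketch of hop-wise attention: precompute [x, Ax, A^2 x, ...] per
# node, then attend across hops. Names and pooling are assumptions.
import torch
import torch.nn as nn

def hop_features(x: torch.Tensor, adj: torch.Tensor, num_hops: int) -> torch.Tensor:
    # Stack [x, A x, A^2 x, ...] into a (N, num_hops + 1, dim) tensor.
    feats = [x]
    for _ in range(num_hops):
        feats.append(adj @ feats[-1])
    return torch.stack(feats, dim=1)

class HopAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 2):
        super().__init__()
        # Self-attention over each node's own hop sequence; cost is
        # independent of the number of nodes in the circuit graph.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, hop_x: torch.Tensor) -> torch.Tensor:
        # hop_x: (N, K+1, dim); each node attends across its K+1 hop vectors.
        h, _ = self.attn(hop_x, hop_x, hop_x)
        return self.out(h.mean(dim=1))  # pool hops into one embedding per node

# Usage: 5 nodes, 8-dim features, 3 hops.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.6).float()
emb = HopAttention(8)(hop_features(x, adj, num_hops=3))  # (5, 8)
```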
no code implementations • 23 Dec 2023 • Hongzheng Chen, Jiahao Zhang, Yixiao Du, Shaojie Xiang, Zichao Yue, Niansong Zhang, Yaohui Cai, Zhiru Zhang
Experimental results demonstrate that our approach can achieve up to 13.4x speedup over previous FPGA-based accelerators for the BERT model.