Search Results for author: Zichao Yue

Found 3 papers, 2 papers with code

Polynormer: Polynomial-Expressive Graph Transformer in Linear Time

2 code implementations • 2 Mar 2024 • Chenhui Deng, Zichao Yue, Zhiru Zhang

To make the base model permutation equivariant, we integrate it with graph topology and node features separately, resulting in local and global equivariant attention models. (An illustrative sketch of this local/global split appears after this entry.)

Node Classification
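The snippet above describes attention split into a topology-aware local branch and a feature-only global branch. Below is a minimal PyTorch sketch of that idea, assuming a COO edge list for the local branch and a kernelized linear-attention global branch; the class names, the sigmoid edge gating, and the softmax feature maps are illustrative assumptions, not the Polynormer implementation.

```python
import torch
import torch.nn as nn

class LocalAttn(nn.Module):
    """Attention restricted to graph edges (topology-aware branch)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                      # (2, E) COO edge list
        # one score per edge; sigmoid gate instead of softmax for simplicity
        score = (self.q(x)[dst] * self.k(x)[src]).sum(-1, keepdim=True)
        w = torch.sigmoid(score)                   # (E, 1) edge gates
        out = torch.zeros_like(x)
        out.index_add_(0, dst, w * x[src])         # aggregate gated neighbors
        return out

class GlobalAttn(nn.Module):
    """Linear-time attention over node features, ignoring topology."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.q(x), self.k(x), self.v(x)
        # feature-map trick: softmax over feature/node dims replaces the
        # full n x n attention matrix
        q, k = q.softmax(dim=-1), k.softmax(dim=0)
        return q @ (k.t() @ v)                     # (n,d) @ (d,d): O(n) in nodes
```

The design point the snippet hints at: the local branch only mixes features along existing edges, while the global branch attends across all nodes in linear time via the (kᵀv) factorization, and both operate on nodes in a permutation-equivariant way.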

Less is More: Hop-Wise Graph Attention for Scalable and Generalizable Learning on Circuits

1 code implementation • 2 Mar 2024 • Chenhui Deng, Zichao Yue, Cunxi Yu, Gokce Sarar, Ryan Carey, Rajeev Jain, Zhiru Zhang

In this work, we propose HOGA, a novel attention-based model for learning circuit representations in a scalable and generalizable manner. (An illustrative sketch of hop-wise attention appears after this entry.)

Graph Attention
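The title suggests attention computed over per-hop aggregated features rather than iterative message passing, which is what would make such a model scalable: hop features can be precomputed once, so training needs no neighbor sampling. The sketch below follows that reading under stated assumptions; precompute_hops, HopAttention, and the hop-0 readout are invented for illustration and are not the released HOGA code.

```python
import torch
import torch.nn as nn

def precompute_hops(x, adj, num_hops):
    """Stack features aggregated over 0..num_hops hops: (n, hops+1, d).

    adj is assumed to be a (possibly sparse) normalized adjacency matrix.
    """
    feats = [x]
    for _ in range(num_hops):
        x = adj @ x                                # one propagation step
        feats.append(x)
    return torch.stack(feats, dim=1)

class HopAttention(nn.Module):
    """Each node attends over its own stack of hop features."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, hop_feats):                  # (n, hops+1, d)
        h, _ = self.attn(hop_feats, hop_feats, hop_feats)
        return self.out(h[:, 0])                   # read out the hop-0 slot
```

Because precompute_hops runs once offline, training reduces to plain minibatching over nodes, each carrying its fixed-size hop stack.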

Understanding the Potential of FPGA-Based Spatial Acceleration for Large Language Model Inference

no code implementations • 23 Dec 2023 • Hongzheng Chen, Jiahao Zhang, Yixiao Du, Shaojie Xiang, Zichao Yue, Niansong Zhang, Yaohui Cai, Zhiru Zhang

Experimental results demonstrate that our approach can achieve up to 13.4x speedup compared to previous FPGA-based accelerators for the BERT model.

Language Modelling • Large Language Model
