no code implementations • 27 Mar 2024 • Yuxiang Zhao, Zhuomin Chai, Xun Jiang, Yibo Lin, Runsheng Wang, Ru Huang
To the best of our knowledge, this is the first work to apply graph structures to deep-learning-based dynamic IR drop prediction.
no code implementations • 7 Aug 2023 • Zhixiong Di, Runzhe Tao, Lin Chen, Qiang Wu, Yibo Lin
Given the imbalanced distribution of packed and unpacked logic elements, we further propose techniques such as graph oversampling and mini-batch training for this imbalanced learning task on large circuit graphs.
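The combination of oversampling and mini-batch training described above can be sketched in a minimal, framework-free form. This is only an illustration of the general recipe, not the paper's implementation: the function names, the `ratio` parameter, and the use of plain index lists are all assumptions for the sketch.

```python
import random

def oversample_indices(labels, minority_label, ratio=3):
    """Duplicate minority-class node indices so that mini-batches drawn from
    the index list see the rare class more often. `labels[i]` is the class of
    node i; `ratio` is the total number of copies each minority index gets."""
    indices = list(range(len(labels)))
    minority = [i for i in indices if labels[i] == minority_label]
    return indices + minority * (ratio - 1)

def mini_batches(indices, batch_size, seed=0):
    """Shuffle the (oversampled) index list and yield fixed-size mini-batches,
    so training never has to touch the whole circuit graph at once."""
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    for start in range(0, len(shuffled), batch_size):
        yield shuffled[start:start + batch_size]

# Toy example: 8 logic elements, label 1 (say, "packed") is the rare class.
labels = [0, 0, 0, 0, 0, 0, 1, 1]
idx = oversample_indices(labels, minority_label=1, ratio=3)
print(len(idx))  # 8 originals + 2 minority nodes * 2 extra copies = 12
```

In a real GNN pipeline each mini-batch of node indices would also pull in a sampled neighborhood subgraph; the sketch above only shows the index-level balancing.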
no code implementations • 7 May 2023 • Yuxiang Zhao, Zhuomin Chai, Yibo Lin, Runsheng Wang, Ru Huang
Accurate early congestion prediction can prevent unpleasant surprises at the routing stage, playing a crucial role in helping designers iterate faster in VLSI design cycles.
no code implementations • 1 Aug 2022 • Zhuomin Chai, Yuxiang Zhao, Yibo Lin, Wei Liu, Runsheng Wang, Ru Huang
The electronic design automation (EDA) community has been actively exploring machine learning (ML) for very large-scale integrated computer-aided design (VLSI CAD).
no code implementations • 24 Mar 2022 • Bowen Wang, Guibao Shen, Dong Li, Jianye Hao, Wulong Liu, Yu Huang, HongZhong Wu, Yibo Lin, Guangyong Chen, Pheng Ann Heng
Precise congestion prediction from a placement solution plays a crucial role in circuit placement.
no code implementations • 28 Feb 2022 • Junchi Yan, Xianglong Lyu, Ruoyu Cheng, Yibo Lin
Placement and routing are two indispensable and challenging (NP-hard) tasks in modern chip design flows.
3 code implementations • 23 Apr 2020 • Tsung-Wei Huang, Dian-Lun Lin, Chun-Xun Lin, Yibo Lin
Taskflow introduces an expressive task graph programming model to assist developers in the implementation of parallel and heterogeneous decomposition strategies on a heterogeneous computing platform.
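The task-graph model at the heart of Taskflow can be illustrated with a toy scheduler: tasks are nodes, "precedes" edges are dependencies, and execution follows a topological order. Taskflow itself is a C++ library with a much richer API (executors, work stealing, heterogeneous tasking); the Python class below, including the `emplace`/`precede`/`run` names, is only a hand-rolled sketch of the programming model.

```python
from collections import defaultdict, deque

class TaskGraph:
    """Toy task-graph runner: register callables, declare dependencies,
    then execute every task once all of its predecessors have finished."""

    def __init__(self):
        self.tasks = {}                  # name -> callable
        self.succ = defaultdict(list)    # name -> successor names
        self.indegree = defaultdict(int) # name -> unmet dependency count

    def emplace(self, name, fn):
        self.tasks[name] = fn
        self.indegree.setdefault(name, 0)
        return name

    def precede(self, before, after):
        self.succ[before].append(after)
        self.indegree[after] += 1

    def run(self):
        """Kahn's algorithm: repeatedly run a task with no unmet dependencies."""
        order = []
        ready = deque(n for n in self.tasks if self.indegree[n] == 0)
        while ready:
            name = ready.popleft()
            self.tasks[name]()
            order.append(name)
            for nxt in self.succ[name]:
                self.indegree[nxt] -= 1
                if self.indegree[nxt] == 0:
                    ready.append(nxt)
        return order

# Diamond dependency: A before B and C, both before D.
g = TaskGraph()
for n in ("A", "B", "C", "D"):
    g.emplace(n, lambda n=n: None)
g.precede("A", "B"); g.precede("A", "C")
g.precede("B", "D"); g.precede("C", "D")
print(g.run())  # A runs first, D runs last
```

A real runtime like Taskflow would dispatch independent tasks (here B and C) to worker threads in parallel; the sequential loop above only captures the dependency semantics.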
no code implementations • 26 Dec 2018 • Yibo Lin, Zhao Song, Lin F. Yang
In this paper, we provide provable guarantees on some hashing-based parameter reduction methods in neural nets.
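The hashing-based parameter-reduction scheme being analyzed can be sketched as follows: every entry of a large "virtual" weight matrix is mapped by a hash function into a small shared parameter vector, so many weights alias to the same trainable value. This sketch uses MD5 purely as a convenient deterministic hash; the paper's setting and hash family may differ, and the function names here are made up for illustration.

```python
import hashlib

def bucket(layer, i, j, num_buckets):
    """Deterministically hash a weight coordinate (layer, i, j) into one of
    `num_buckets` shared parameter slots."""
    key = f"{layer}:{i}:{j}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_buckets

def virtual_matmul(x, shared, layer, rows, cols, num_buckets):
    """Multiply vector x (length `cols`) by a virtual rows x cols weight
    matrix whose entries all live in the small `shared` parameter vector,
    so the layer stores num_buckets parameters instead of rows * cols."""
    out = []
    for i in range(rows):
        s = 0.0
        for j in range(cols):
            s += shared[bucket(layer, i, j, num_buckets)] * x[j]
        out.append(s)
    return out

# With all shared parameters equal to 1.0, every output is simply sum(x).
print(virtual_matmul([1.0, 2.0, 3.0], [1.0] * 4, "l0", 2, 3, 4))
```

The memory saving is the point: a 1000x1000 layer backed by, say, 10,000 buckets stores 100x fewer parameters, and the cited guarantees concern how much accuracy such aliasing can preserve.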
2 code implementations • ICML 2018 • Jiong Zhang, Yibo Lin, Zhao Song, Inderjit S. Dhillon
In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), which stabilizes the gradients arising in training while providing stronger expressive power.
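A heavily simplified sketch of the core idea: instead of propagating state through repeated learned matrix multiplications (where gradients can explode or vanish), per-step features are accumulated under a fixed cosine (Fourier) basis, so gradients flow through sums of bounded weights. The actual FRU includes learned phases, amplitudes, and a different update rule; the function below is an assumption-laden illustration, not the paper's cell.

```python
import math

def fourier_summary(seq, freqs, period):
    """Accumulate scalar per-step features `seq` into one state per frequency,
    weighting step t by cos(2*pi*f*t/period). Each cosine weight is bounded
    by 1, which is the intuition behind the stable gradients."""
    states = [0.0 for _ in freqs]
    for t, h in enumerate(seq, start=1):
        for k, f in enumerate(freqs):
            states[k] += math.cos(2 * math.pi * f * t / period) * h
    return states

# Frequency 0 reduces to a plain running sum; higher frequencies pick up
# oscillatory structure in the sequence.
print(fourier_summary([1.0, 1.0], freqs=[0.0], period=4))  # [2.0]
```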