no code implementations • 16 Mar 2024 • Andrew B. Kahng, Zhiang Wang
Global placement is a fundamental step in VLSI physical design.
no code implementations • 17 Dec 2023 • Andrew B. Kahng, Robert R. Nerem, Yusu Wang, Chien-Yi Yang
On the methodology front, we propose NN-Steiner, a novel mixed neural-algorithmic framework for computing rectilinear Steiner minimum trees (RSMTs) that leverages Arora's celebrated PTAS algorithmic framework to solve this problem (and other geometric optimization problems).
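For context, an RSMT connects a set of terminals using only horizontal and vertical segments of minimum total length, and by Hanan's theorem an optimal tree exists whose Steiner points lie on the Hanan grid (the crossings of the terminals' x- and y-coordinates). A brute-force sketch for tiny instances, assuming nothing about the paper's actual method (function names are illustrative):

```python
from itertools import combinations

def rect_mst_len(points):
    """Length of a rectilinear (Manhattan-distance) MST via Prim's algorithm."""
    pts = list(points)
    in_tree = {pts[0]}
    total = 0
    while len(in_tree) < len(pts):
        d, best = min(
            (abs(a[0] - b[0]) + abs(a[1] - b[1]), b)
            for a in in_tree for b in pts if b not in in_tree
        )
        total += d
        in_tree.add(best)
    return total

def rsmt_len(terminals):
    """Exact RSMT length for tiny instances: try every small subset of
    Hanan-grid candidates as Steiner points (at most n-2 are needed)."""
    xs = sorted({x for x, _ in terminals})
    ys = sorted({y for _, y in terminals})
    hanan = [(x, y) for x in xs for y in ys if (x, y) not in set(terminals)]
    best = rect_mst_len(terminals)
    for k in range(1, max(1, len(terminals) - 1)):
        for extra in combinations(hanan, k):
            best = min(best, rect_mst_len(list(terminals) + list(extra)))
    return best
```

For three terminals at (0,0), (2,0), (1,2), the rectilinear MST has length 5, but adding the Hanan point (1,0) as a Steiner point reduces the tree to length 4 — the gap that Steiner points (and scalable algorithms like Arora's PTAS) exist to close.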
1 code implementation • 23 Aug 2023 • Hadi Esmaeilzadeh, Soroush Ghodrati, Andrew B. Kahng, Joon Kyung Kim, Sean Kinzer, Sayak Kundu, Rohan Mahapatra, Susmita Dey Manasi, Sachin Sapatnekar, Zhiang Wang, Ziqing Zeng
Parameterizable machine learning (ML) accelerators are the product of recent breakthroughs in ML.
no code implementations • 29 Jun 2023 • Hadi Esmaeilzadeh, Soroush Ghodrati, Andrew B. Kahng, Sean Kinzer, Susmita Dey Manasi, Sachin S. Sapatnekar, Zhiang Wang
SimDIT's modeling comprehensively covers the convolution and non-convolution operations of both CNN inference and training on a highly parameterizable hardware substrate.
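Analytical models of this kind typically start from operation counts. A minimal first-order sketch, assuming an idealized systolic MAC array (the parameter names and the utilization model are illustrative assumptions, not SimDIT's actual formulation):

```python
def conv_macs(n, c, k, r, s, p, q):
    """Multiply-accumulates in one forward convolution layer:
    batch n, input channels c, output channels k,
    kernel r x s, output feature map p x q."""
    return n * c * k * r * s * p * q

def ideal_cycles(macs, rows=16, cols=16, utilization=1.0):
    """Ideal cycle count on a rows x cols MAC array at a given
    utilization (real models must also account for dataflow,
    memory bandwidth, and non-convolution operations)."""
    return macs / (rows * cols * utilization)
```

For example, a 3x3 convolution producing 64 channels of a 112x112 output from a 3-channel input requires about 21.7M MACs, or roughly 85K cycles on an ideal 16x16 array.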
no code implementations • 11 May 2023 • Vidya A. Chhabria, Wenjing Jiang, Andrew B. Kahng, Sachin S. Sapatnekar
Inaccurate timing prediction wastes design effort, hurts circuit performance, and may lead to design failure.
no code implementations • 7 May 2023 • Ismail Bustany, Andrew B. Kahng, Ioannis Koutis, Bodhisatta Pramanik, Zhiang Wang
State-of-the-art hypergraph partitioners follow the multilevel paradigm: they construct a hierarchy of progressively coarser hypergraphs, then use that hierarchy to drive cut refinement at each level as the solution is projected back up.
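The refinement half of that paradigm can be illustrated with a toy greedy pass (a much-simplified stand-in for Fiduccia–Mattheyses-style refinement; the hypergraph encoding and the balance tolerance are illustrative assumptions):

```python
def cutsize(nets, part):
    """A net (hyperedge) is cut if its vertices span both blocks."""
    return sum(1 for net in nets if len({part[v] for v in net}) > 1)

def refine(nets, part, tol=2):
    """Greedy pass-based refinement: sweep the vertices repeatedly,
    accepting any move to the other block that lowers the cut while
    keeping the two block sizes within `tol` of each other."""
    improved = True
    while improved:
        improved = False
        for v in list(part):
            before = cutsize(nets, part)
            part[v] ^= 1                      # tentative move to other block
            sizes = [list(part.values()).count(b) for b in (0, 1)]
            if cutsize(nets, part) < before and abs(sizes[0] - sizes[1]) <= tol:
                improved = True               # keep the move
            else:
                part[v] ^= 1                  # revert
    return part
```

On a four-vertex cycle of 2-pin nets with an alternating initial partition, this pass reduces the cut from 4 to 2. Production partitioners run such refinement at every level of the coarsening hierarchy, where moving one coarse vertex corresponds to moving a whole cluster of original vertices.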
no code implementations • 23 Apr 2023 • Andrew B. Kahng, Ravi Varadarajan, Zhiang Wang
In a typical RTL-to-GDSII flow, floorplanning (macro placement) is a critical step in achieving good quality of results (QoR).
1 code implementation • 21 Feb 2023 • Chung-Kuan Cheng, Andrew B. Kahng, Sayak Kundu, Yucheng Wang, Zhiang Wang
We provide an open, transparent implementation and assessment of Google Brain's deep reinforcement learning approach to macro placement, and of its Circuit Training (CT) implementation on GitHub.