no code implementations • 5 Feb 2024 • Matthew DeLorenzo, Animesh Basak Chowdhury, Vasudev Gohil, Shailja Thakur, Ramesh Karri, Siddharth Garg, Jeyavijayan Rajendran
Existing large language models (LLMs) for register transfer level (RTL) code generation face challenges such as compilation failures and suboptimal power, performance, and area (PPA) efficiency.
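To make the compilation-failure issue concrete, one common guardrail is to run generated RTL through a compiler before accepting it. A minimal sketch, assuming Icarus Verilog (`iverilog`) is installed; the `compiles` helper is illustrative and not part of the paper's method:

```python
import os
import subprocess
import tempfile

def compiles(verilog_source: str) -> bool:
    """Return True if the Verilog snippet passes compilation with Icarus Verilog."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "design.v")
        out = os.path.join(tmp, "design.out")
        with open(src, "w") as f:
            f.write(verilog_source)
        # iverilog exits nonzero on syntax or elaboration errors.
        result = subprocess.run(["iverilog", "-o", out, src], capture_output=True)
        return result.returncode == 0

# Example: a trivially valid module should pass the check.
print(compiles("module top(input a, output y); assign y = ~a; endmodule"))
```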
no code implementations • 22 Jan 2024 • Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, Siddharth Garg
Logic synthesis, a pivotal stage in chip design, entails optimizing chip specifications encoded in hardware description languages like Verilog into highly efficient implementations using Boolean logic gates.
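For readers unfamiliar with the flow: tools such as ABC chain optimization passes into a "recipe" applied to an and-inverter graph (AIG). A minimal sketch, assuming the open-source `abc` binary is on the PATH and an input AIG file exists; the recipe shown is a standard resyn-style sequence, not one produced by this work:

```python
import subprocess

# A classic ABC optimization sequence ("recipe"): each pass rewrites the
# and-inverter graph to reduce node count and depth.
RECIPE = "balance; rewrite; refactor; balance; rewrite; rewrite -z; balance"

def run_recipe(aig_in: str, aig_out: str) -> str:
    """Apply a synthesis recipe to an AIG with ABC and return its stats output."""
    cmd = f"read {aig_in}; {RECIPE}; print_stats; write {aig_out}"
    result = subprocess.run(["abc", "-c", cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Example (paths are placeholders):
# print(run_recipe("design.aig", "design_opt.aig"))
```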
no code implementations • 16 Oct 2023 • Animesh Basak Chowdhury, Shailja Thakur, Hammond Pearce, Ramesh Karri, Siddharth Garg
Here we describe our experience curating two large-scale, high-quality datasets for Verilog code generation and logic synthesis.
no code implementations • 22 May 2023 • Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, Siddharth Garg
Compared to prior work, INVICTUS is the first solution that combines RL and search methods with an online out-of-distribution detector to generate synthesis recipes over a wide range of benchmarks.
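To illustrate the architecture described above, here is a minimal sketch of an OOD-gated dispatcher that trusts a learned policy on familiar designs and falls back to search otherwise. The Mahalanobis-distance detector and the `policy`/`search` callables are placeholder choices, not INVICTUS's actual components:

```python
import numpy as np

def choose_recipe(features, policy, search, train_mean, train_cov_inv, threshold=3.0):
    """Route a design to the learned policy or to search, gated by an OOD score.

    The score is a Mahalanobis distance of the design's feature vector from
    the training distribution -- one common detector choice, used here purely
    for illustration.
    """
    delta = features - train_mean
    score = float(np.sqrt(delta @ train_cov_inv @ delta))
    if score > threshold:
        return search(features)   # unfamiliar design: fall back to search
    return policy(features)       # in-distribution: trust the RL policy
```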
no code implementations • 6 Mar 2023 • Animesh Basak Chowdhury, Lilas Alrahis, Luca Collini, Johann Knechtel, Ramesh Karri, Siddharth Garg, Ozgur Sinanoglu, Benjamin Tan
Oracle-less machine learning (ML) attacks have broken various logic locking schemes.
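As background on the target of these attacks: logic locking inserts key-controlled gates so a circuit computes the intended function only under a secret key. A toy sketch of XOR-based locking (illustrative only; real schemes operate on gate-level netlists, and oracle-less ML attacks exploit the structural traces such key gates leave behind):

```python
# XOR-based logic locking on a tiny circuit modeled as Python functions.
# With the correct key the circuit behaves as designed; a wrong key corrupts it.

def original(a: int, b: int) -> int:
    return a & b

def locked(a: int, b: int, k: int) -> int:
    # A key gate XORs the output with key bit k; k = 0 restores the original.
    return (a & b) ^ k

assert all(locked(a, b, 0) == original(a, b) for a in (0, 1) for b in (0, 1))
assert any(locked(a, b, 1) != original(a, b) for a in (0, 1) for b in (0, 1))
```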
1 code implementation • 5 Apr 2022 • Animesh Basak Chowdhury, Benjamin Tan, Ryan Carey, Tushit Jain, Ramesh Karri, Siddharth Garg
Generating high-quality synthesis transformation sequences ("synthesis recipes") is an important problem in logic synthesis.
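A baseline way to frame this problem is search over sequences of synthesis passes scored by a quality-of-results (QoR) metric. A minimal random-search sketch; the `evaluate` callback and pass list are placeholders, and the expense of that callback is precisely what motivates learning-guided approaches such as the one this paper proposes:

```python
import random

PASSES = ["balance", "rewrite", "refactor", "rewrite -z", "refactor -z"]

def random_search(evaluate, recipe_len=10, trials=50, seed=0):
    """Baseline recipe search: sample random pass sequences and keep the best.

    `evaluate` maps a recipe (a list of pass names) to a QoR cost such as
    post-synthesis area or delay -- an expensive call when it requires an
    actual synthesis run.
    """
    rng = random.Random(seed)
    best_recipe, best_cost = None, float("inf")
    for _ in range(trials):
        recipe = [rng.choice(PASSES) for _ in range(recipe_len)]
        cost = evaluate(recipe)
        if cost < best_cost:
            best_recipe, best_cost = recipe, cost
    return best_recipe, best_cost
```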
1 code implementation • 21 Oct 2021 • Animesh Basak Chowdhury, Benjamin Tan, Ramesh Karri, Siddharth Garg
Logic synthesis is a challenging and widely-researched combinatorial optimization problem during integrated circuit (IC) design.
no code implementations • ICML Workshop AML 2021 • Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde
In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.
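For context, the standard baseline in this space is PGD-based adversarial training, which solves an inner loss-maximization before each weight update; ATENT modifies this objective with an entropic regularizer, which the sketch below does not implement. A minimal PyTorch sketch of the generic baseline only:

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=10):
    """Standard PGD: maximize the loss within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball; image inputs would also be clamped
        # to their valid pixel range here.
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One step of adversarial training: update weights on perturbed inputs."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```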