1 code implementation • 5 Mar 2024 • Haotian Lu, Sanmitra Banerjee, Jiaqi Gu
While off-chip noise-aware training and on-chip training have been proposed to enhance the variation tolerance of optical neural accelerators under moderate, static noise, we observe notable performance degradation over time due to temporally drifting variations, which calls for a real-time, in-situ calibration mechanism.
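A minimal NumPy sketch of the underlying tension, assuming a simple Gaussian noise model and an additive drift term (both illustrative assumptions, not the paper's model): noise-aware training samples fresh static perturbations each step, but a temporal drift eventually falls outside that training distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(W, x, sigma=0.01, drift=0.0):
    """Linear layer whose weights see static Gaussian fabrication
    noise plus a slowly accumulating drift term (illustrative)."""
    return (W + rng.normal(0.0, sigma, W.shape) + drift) @ x

W = 0.1 * rng.normal(size=(8, 8))
x = rng.normal(size=8)

# Noise-aware training: a fresh static perturbation per step makes the
# learned weights tolerant to moderate, time-invariant noise.
y_train = noisy_forward(W, x)

# Deployment: drift grows over time, exceeding the noise range seen in
# training; accuracy degrades until the chip is recalibrated in situ.
y_drifted = noisy_forward(W, x, drift=0.05)
```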
no code implementations • 31 Oct 2023 • Mingjie Liu, Teodor-Dumitru Ene, Robert Kirby, Chris Cheng, Nathaniel Pinckney, Rongjian Liang, Jonah Alben, Himyanshu Anand, Sanmitra Banerjee, Ismet Bayraktaroglu, Bonita Bhaskaran, Bryan Catanzaro, Arjun Chaudhuri, Sharon Clay, Bill Dally, Laura Dang, Parikshit Deshpande, Siddhanth Dhodhi, Sameer Halepete, Eric Hill, Jiashang Hu, Sumit Jain, Ankit Jindal, Brucek Khailany, George Kokai, Kishor Kunal, Xiaowei Li, Charley Lind, Hao Liu, Stuart Oberman, Sujeet Omar, Ghasem Pasandi, Sreedhar Pratty, Jonathan Raiman, Ambar Sarkar, Zhengjiang Shao, Hanfei Sun, Pratik P Suthar, Varun Tej, Walker Turner, Kaizhe Xu, Haoxing Ren
ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design.
no code implementations • 7 Aug 2023 • Amin Shafiee, Sanmitra Banerjee, Krishnendu Chakrabarty, Sudeep Pasricha, Mahdi Nikdast
The proposed models can be applied to SP-NN architectures of any configuration to analyze the effects of loss and crosstalk.
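A hedged sketch of how such bottom-up device models compose, assuming a linear-in-dB loss model and first-order coherent crosstalk (the parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

def accumulated_loss_db(n_cascaded_mzis, il_db_per_mzi=0.3):
    """Insertion loss along an optical path grows linearly (in dB)
    with the number of cascaded MZIs."""
    return n_cascaded_mzis * il_db_per_mzi

def apply_crosstalk(fields, xt_db=-30.0):
    """First-order coherent crosstalk: a small fraction of each
    channel's field leaks into every other channel."""
    ratio = 10 ** (xt_db / 20.0)              # dB to field ratio
    leak = ratio * (np.sum(fields) - fields)  # leakage from the others
    return fields + leak

print(accumulated_loss_db(16))                     # 4.8 dB of path loss
print(apply_crosstalk(np.array([1.0, 0.2, 0.0])))
```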
no code implementations • 22 Jul 2022 • Sanmitra Banerjee, Mahdi Nikdast, Krishnendu Chakrabarty
Integrated photonic neural networks (IPNNs) are emerging as promising successors to conventional electronic AI accelerators as they offer substantial improvements in computing speed and energy efficiency.
no code implementations • 19 Apr 2022 • Asif Mirza, Amin Shafiee, Sanmitra Banerjee, Krishnendu Chakrabarty, Sudeep Pasricha, Mahdi Nikdast
Simulation results for two example SPNNs of different scales under realistic, correlated FPVs indicate that the optimized MZIs can improve inferencing accuracy by up to 93.95% on the MNIST handwritten-digit dataset -- used as an example in this paper -- corresponding to a <0.5% accuracy loss relative to the variation-free case.
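For context, a 2x2 MZI can be written as two 50:50 couplers interleaved with phase shifters, so fabrication-process variations appear as phase deviations. A hypothetical sketch (the decomposition convention and deviation magnitudes are assumptions, not the paper's optimization):

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # ideal 50:50 coupler

def mzi(theta, phi, d_theta=0.0, d_phi=0.0):
    """2x2 MZI transfer matrix; d_theta/d_phi model FPV-induced
    deviations on the internal and external phase shifters."""
    internal = np.diag([np.exp(1j * (theta + d_theta)), 1.0])
    external = np.diag([np.exp(1j * (phi + d_phi)), 1.0])
    return BS @ internal @ BS @ external

target = mzi(np.pi / 3, np.pi / 4)
perturbed = mzi(np.pi / 3, np.pi / 4, d_theta=0.05, d_phi=-0.03)
# Optimization would tune the nominal (theta, phi) of each MZI so this
# deviation stays small under correlated phase variations.
print(np.linalg.norm(target - perturbed))
```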
no code implementations • 8 Apr 2022 • Amin Shafiee, Sanmitra Banerjee, Krishnendu Chakrabarty, Sudeep Pasricha, Mahdi Nikdast
Compared to electronic accelerators, integrated silicon-photonic neural networks (SP-NNs) promise higher speed and energy efficiency for emerging artificial-intelligence applications.
no code implementations • 14 Dec 2021 • Sanmitra Banerjee, Mahdi Nikdast, Sudeep Pasricha, Krishnendu Chakrabarty
Singular-value-decomposition-based coherent integrated photonic neural networks (SC-IPNNs) have a large footprint, suffer from high static power consumption for training and inference, and cannot be pruned using conventional DNN pruning techniques.
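The pruning obstacle is visible directly in the SVD parametrization (a sketch of the standard factorization, not the paper's implementation): zeroing entries of the weight matrix W does not zero any individual device parameter in the photonic meshes realizing U and V†.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

# SC-IPNN parametrization: W = U @ diag(S) @ Vh, with U and Vh mapped
# onto MZI meshes and S onto on-chip attenuation/gain elements.
U, S, Vh = np.linalg.svd(W)
assert np.allclose(U @ np.diag(S) @ Vh, W)

# Conventional DNN pruning zeroes small entries of W directly ...
W_pruned = np.where(np.abs(W) > 0.5, W, 0.0)
# ... but the factors of the pruned matrix stay dense, so no MZI in
# the U or Vh mesh can actually be switched off.
U2, _, Vh2 = np.linalg.svd(W_pruned)
print(np.count_nonzero(np.abs(U2) > 1e-9), "of", U2.size, "mesh entries remain")
```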
no code implementations • 11 Dec 2021 • Sanmitra Banerjee, Mahdi Nikdast, Sudeep Pasricha, Krishnendu Chakrabarty
We propose a novel hardware-aware magnitude pruning technique for coherent photonic neural networks.
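A minimal sketch of the idea, assuming the pruning criterion acts on the phase-angle parameters themselves rather than on matrix weights (the paper's exact criterion and threshold selection may differ): small phases are clamped to zero so the corresponding phase shifter can be powered down.

```python
import numpy as np

def prune_phase_angles(phases, threshold):
    """Hardware-aware magnitude pruning (sketch): clamp small phase
    angles to zero so the corresponding MZI rests in a fixed state
    and its heater can be powered off, cutting static power."""
    mask = np.abs(phases) >= threshold
    return phases * mask, int((~mask).sum())

rng = np.random.default_rng(1)
phases = rng.uniform(-np.pi, np.pi, size=32)
pruned, n_off = prune_phase_angles(phases, threshold=0.3)
print(f"{n_off} of {phases.size} phase shifters powered down")
```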
no code implementations • 19 Dec 2020 • Sanmitra Banerjee, Mahdi Nikdast, Krishnendu Chakrabarty
Silicon-photonic neural networks (SPNNs) offer substantial improvements in computing speed and energy efficiency compared to their digital electronic counterparts.