no code implementations • 10 Jan 2024 • Farbin Fayza, Satyavolu Papa Rao, Darius Bunandar, Udit Gupta, Ajay Joshi
Our analysis shows that photonics can reduce both operational and embodied carbon footprints, owing to its high energy efficiency and a fabrication carbon cost per unit area at least 4$\times$ lower than that of 28 nm CMOS.
no code implementations • 29 Nov 2023 • Farbin Fayza, Cansu Demirkiran, Hanning Chen, Che-Kai Liu, Avi Mohan, Hamza Errahmouni, Sanggeon Yun, Mohsen Imani, David Zhang, Darius Bunandar, Ajay Joshi
Over the past few years, silicon photonics-based computing has emerged as a promising alternative to CMOS-based computing for Deep Neural Networks (DNNs).
no code implementations • 29 Nov 2023 • Cansu Demirkiran, Guowei Yang, Darius Bunandar, Ajay Joshi
Photonic computing is a compelling avenue for performing highly efficient matrix multiplication, a crucial operation in Deep Neural Networks (DNNs).
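As a rough illustration of why analog cores are attractive for this operation, the sketch below simulates a dot product the way a generic analog matrix-vector-multiply core might see it: both operands pass through finite-precision converters before an analog accumulation. This is a hedged toy model, not the architecture from the paper; the `quantize` helper and the bit width are illustrative assumptions.

```python
def quantize(v, bits):
    """Uniform symmetric quantization of a vector to a signed integer grid.
    Returns the integer codes and the scale needed to map them back."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(x) for x in v) / qmax
    return [round(x / scale) for x in v], scale

def analog_dot(w, x, bits=8):
    """Dot product as a generic analog MVM core might compute it:
    both operands quantized by DACs, the accumulated result rescaled
    as an ADC readout would be. Purely a didactic sketch."""
    qw, sw = quantize(w, bits)
    qx, sx = quantize(x, bits)
    acc = sum(a * b for a, b in zip(qw, qx))  # analog-domain accumulation
    return acc * sw * sx  # rescale back to real-valued units
```

With 8-bit converters the quantized result already tracks the exact dot product closely; shrinking `bits` makes the converter-precision/accuracy trade-off discussed in these papers directly visible.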
no code implementations • 28 Sep 2023 • Lakshmi Nair, David Widemann, Brad Turcott, Nick Moore, Alexandra Wleklinski, Darius Bunandar, Ioannis Papavasileiou, Shihu Wang, Eric Logan
Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware.
no code implementations • 19 Sep 2023 • Cansu Demirkiran, Lakshmi Nair, Darius Bunandar, Ajay Joshi
Our study demonstrates that analog accelerators utilizing the RNS-based approach can achieve ${\geq}99\%$ of FP32 accuracy for state-of-the-art DNN inference using data converters with only $6$-bit precision, whereas a conventional analog core requires more than $8$-bit precision to reach the same accuracy on the same DNNs.
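The key idea behind a Residue Number System (RNS) is that a large integer can be represented by several small residues modulo pairwise-coprime moduli, and multiplication can be carried out independently on each small residue. The sketch below shows standard RNS encoding and Chinese-Remainder-Theorem reconstruction; the specific moduli (each fitting in 6 bits) are illustrative assumptions, not the paper's configuration.

```python
from math import prod

def to_rns(x, moduli):
    """Encode integer x as its residue modulo each modulus."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Reconstruct x from its residues via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(., -1, m) is the modular inverse
    return x % M

# Pairwise-coprime moduli, each representable with 6 bits (illustrative).
moduli = (63, 62, 61)  # dynamic range: 63 * 62 * 61 = 238266
```

Because each residue channel only ever holds values below its modulus, the data converters in each channel need just enough precision for that modulus, even though the reconstructed product spans the full dynamic range.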
1 code implementation • 7 Jul 2023 • Lakshmi Nair, Mikhail Bernadskiy, Arulselvan Madhavan, Craig Chan, Ayon Basumallik, Darius Bunandar
To supplement this ongoing effort, we propose INT-FP-QSim: an open-source simulator that enables flexible evaluation of LLMs and vision transformers at various numerical precisions and formats.
no code implementations • 15 Jun 2023 • Cansu Demirkiran, Rashmi Agrawal, Vijay Janapa Reddi, Darius Bunandar, Ajay Joshi
In addition, we show that RNS can reduce the energy consumption of the data converters within an analog accelerator by several orders of magnitude compared to a regular fixed-point approach.
no code implementations • 5 Jun 2023 • Lakshmi Nair, Darius Bunandar
Existing methods to recover model accuracy on analog-digital hardware in the presence of quantization and analog noise include noise-injection training.
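A minimal sketch of the noise-injection idea, under stated assumptions: during training, additive Gaussian perturbations are applied to the weights on each forward pass so the model learns parameters that remain accurate when deployed on noisy analog hardware. The function name, noise model, and `sigma` value here are hypothetical illustrations, not the paper's method.

```python
import random

def noisy_forward(weights, x, sigma=0.02):
    """Forward pass of a single linear neuron with analog-style
    additive weight noise. Drawing fresh noise on every use mimics
    the per-operation perturbations of analog hardware."""
    y = 0.0
    for w, xi in zip(weights, x):
        w_noisy = w + random.gauss(0.0, sigma)  # per-use perturbation
        y += w_noisy * xi
    return y
```

Training against `noisy_forward` instead of an exact dot product pushes the optimizer toward weight settings whose outputs are insensitive to these perturbations; setting `sigma=0.0` recovers the ordinary noiseless layer.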
no code implementations • 12 May 2022 • Ayon Basumallik, Darius Bunandar, Nicholas Dronen, Nicholas Harris, Ludmila Levkova, Calvin Mccarter, Lakshmi Nair, David Walter, David Widemann
Analog mixed-signal (AMS) devices promise faster, more energy-efficient deep neural network (DNN) inference than their digital counterparts.