no code implementations • 19 Feb 2024 • Souvik Kundu, Anthony Sarah, Vinay Joshi, Om J Omer, Sreenivas Subramoney
With the recent growth in demand for large-scale deep neural networks, compute-in-memory (CiM) has emerged as a prominent solution to alleviate the bandwidth and on-chip interconnect bottlenecks that constrain von Neumann architectures.
no code implementations • 19 Dec 2023 • Sharath Nittur Sridhar, Maciej Szankin, Fang Chen, Sairam Sundaresan, Anthony Sarah
In this paper, we demonstrate that by using multi-objective search algorithms paired with lightly trained predictors, we can efficiently search for both the sub-network architecture and the corresponding quantization policy and outperform their respective baselines across different performance objectives such as accuracy, model size, and latency.
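A minimal sketch of this kind of predictor-guided multi-objective search, assuming hypothetical `acc_predictor` and `latency_predictor` stand-ins for the lightly trained regressors, and an illustrative encoding of the sub-network architecture plus quantization policy (none of these names come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_config():
    # One candidate = sub-network architecture + quantization policy,
    # encoded here as (depth, width option, weight bit-width) -- illustrative only.
    return (int(rng.integers(2, 5)), int(rng.integers(0, 3)), int(rng.choice([4, 8])))

def acc_predictor(cfg):
    # Toy stand-in for a lightly trained accuracy regressor.
    depth, width, bits = cfg
    return 0.70 + 0.05 * depth + 0.02 * width - 0.01 * (8 - bits)

def latency_predictor(cfg):
    # Toy stand-in for a lightly trained latency regressor.
    depth, width, bits = cfg
    return depth * (1.0 + 0.5 * width) * (bits / 8.0)

def pareto_front(points):
    # Indices of points not strictly dominated (maximize accuracy, minimize latency).
    front = []
    for i, (a_i, l_i) in enumerate(points):
        dominated = any(
            a_j >= a_i and l_j <= l_i and (a_j > a_i or l_j < l_i)
            for j, (a_j, l_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

configs = [sample_config() for _ in range(200)]
scores = [(acc_predictor(c), latency_predictor(c)) for c in configs]
for i in pareto_front(scores):
    print(configs[i], "acc=%.3f latency=%.2f" % scores[i])
```

Because the predictors are cheap to evaluate, the search can score hundreds of candidate (architecture, quantization) pairs and return the Pareto front instead of a single point, which is what lets it trade off accuracy, model size, and latency simultaneously.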
no code implementations • 29 Aug 2023 • Sharath Nittur Sridhar, Souvik Kundu, Sairam Sundaresan, Maciej Szankin, Anthony Sarah
However, training super-networks from scratch can be extremely time-consuming and compute-intensive, especially for large models that rely on a two-stage training process of pre-training and fine-tuning.
no code implementations • 19 May 2022 • Daniel Cummings, Anthony Sarah, Sharath Nittur Sridhar, Maciej Szankin, Juan Pablo Munoz, Sairam Sundaresan
Recent advances in Neural Architecture Search (NAS) such as one-shot NAS offer the ability to extract specialized hardware-aware sub-network configurations from a task-specific super-network.
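As a rough illustration of that extraction step (a sketch under assumed conventions, not the paper's code): a super-network layer holds weights at maximum width, and a sub-network is "extracted" by slicing out a smaller portion of those shared weights without retraining. Here with PyTorch and a hypothetical elastic linear layer:

```python
import torch
import torch.nn as nn

class ElasticLinear(nn.Module):
    """Linear layer trained at maximum width; sub-networks slice its weights."""
    def __init__(self, in_features, max_out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(max_out_features))

    def forward(self, x, active_out):
        # Use only the first `active_out` output channels of the shared weights.
        w = self.weight[:active_out]
        b = self.bias[:active_out]
        return torch.nn.functional.linear(x, w, b)

layer = ElasticLinear(in_features=64, max_out_features=256)
x = torch.randn(8, 64)

# A hardware-aware search would choose `active_out` per layer to meet a
# latency target on the deployment device; 128 here is arbitrary.
y = layer(x, active_out=128)
print(y.shape)  # torch.Size([8, 128])
```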
no code implementations • 25 Feb 2022 • Anthony Sarah, Daniel Cummings, Sharath Nittur Sridhar, Sairam Sundaresan, Maciej Szankin, Tristan Webb, J. Pablo Munoz
These methods decouple the super-network training from the sub-network search and thus decrease the computational burden of specializing to different hardware platforms.
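A schematic of that decoupling, with toy stub functions in place of the real super-network, latency measurement, and accuracy evaluation (all placeholders, not an actual API): the expensive training is paid once, and each hardware platform then runs only a cheap search loop against its own latency budget.

```python
import random

random.seed(0)

class Platform:
    def __init__(self, name, budget_ms):
        self.name, self.budget_ms = name, budget_ms

def sample_subnet():
    # Cheap: sampling a configuration requires no training.
    return {"depth": random.randint(2, 6), "width": random.choice([0.5, 0.75, 1.0])}

def measure_latency(cfg, platform):
    return 3.0 * cfg["depth"] * cfg["width"]  # toy latency model

def predict_accuracy(cfg):
    # Stand-in for evaluating weights inherited from the super-network.
    return 0.6 + 0.03 * cfg["depth"] + 0.1 * cfg["width"]

def specialize(platforms, n_trials=500):
    # The super-network is trained once (not shown); each platform only
    # pays for this search loop under its own latency constraint.
    results = {}
    for p in platforms:
        best, best_acc = None, -1.0
        for _ in range(n_trials):
            cfg = sample_subnet()
            if measure_latency(cfg, p) > p.budget_ms:
                continue  # reject over-budget configurations
            acc = predict_accuracy(cfg)
            if acc > best_acc:
                best, best_acc = cfg, acc
        results[p.name] = (best, best_acc)
    return results

print(specialize([Platform("cpu", 12.0), Platform("mobile", 6.0)]))
```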
no code implementations • 25 Feb 2022 • Daniel Cummings, Sharath Nittur Sridhar, Anthony Sarah, Maciej Szankin
Neural architecture search (NAS), which automates the discovery of optimal deep neural network architectures for tasks in domains such as computer vision and natural language processing, has seen rapid growth in the machine learning research community.
no code implementations • 24 Feb 2022 • Sharath Nittur Sridhar, Anthony Sarah, Sairam Sundaresan
Models based on BERT have been extremely successful in solving a variety of natural language processing (NLP) tasks.
no code implementations • 22 Dec 2020 • Sharath Nittur Sridhar, Anthony Sarah
In recent times, BERT-based models have been extremely successful in solving a variety of natural language processing (NLP) tasks such as reading comprehension, natural language inference, and sentiment analysis.