no code implementations • 7 Apr 2024 • Jordan Dotzel, Yash Akhauri, Ahmed S. AbouElhamayed, Carly Jiang, Mohamed Abdelfattah, Zhiru Zhang
In this work, we explore the practicality of layer sparsity by profiling residual connections and establishing the relationship between model depth and layer sparsity.
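The idea of profiling residual connections can be illustrated with a minimal sketch: for a block computing `x_out = x_in + f(x_in)`, a small relative contribution `||f(x_in)|| / ||x_in||` suggests the layer barely changes the residual stream and is a candidate for skipping. The threshold and the synthetic "network" below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def residual_contribution(x_in, f_out):
    """Relative contribution of a residual block to the stream.

    For x_out = x_in + f(x_in), a small ratio ||f(x_in)|| / ||x_in||
    means the block changes the residual stream little, making it a
    candidate for layer sparsity (skipping the block at inference).
    """
    return np.linalg.norm(f_out) / (np.linalg.norm(x_in) + 1e-12)

# Toy profile of an 8-block residual stack; the shrinking update scale
# mimics the common observation that deeper blocks contribute less.
rng = np.random.default_rng(0)
x = rng.normal(size=256)
contributions = []
for depth in range(8):
    f_out = rng.normal(scale=1.0 / (depth + 1), size=256)
    contributions.append(residual_contribution(x, f_out))
    x = x + f_out

# Layers below an (arbitrary) contribution threshold are skip candidates.
skippable = [i for i, c in enumerate(contributions) if c < 0.2]
print(skippable)
```

In a real profile the per-layer contributions would come from forward passes over held-out data rather than synthetic noise.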
1 code implementation • 4 Mar 2024 • Yash Akhauri, Mohamed S. Abdelfattah
Building on our study, we present our predictor FLAN: Flow Attention for NAS.
1 code implementation • 4 Mar 2024 • Yash Akhauri, Mohamed S. Abdelfattah
We then design a general latency predictor to comprehensively study (1) the predictor architecture, (2) NN sample selection methods, (3) hardware device representations, and (4) NN operation encoding schemes.
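The prediction setup behind points (3) and (4) can be sketched in miniature: encode each architecture as a vector (here, simple counts per operation type) and fit a regressor from encodings to measured latency. The operation names, per-op costs, and linear model below are all illustrative assumptions; real predictors use richer encodings and learned models.

```python
import numpy as np

# Hypothetical op vocabulary and per-op latencies (ms); synthetic, not measured.
OPS = ["conv3x3", "conv1x1", "maxpool", "skip"]
TRUE_COST = np.array([5.0, 2.0, 1.0, 0.1])

def encode(arch):
    """One simple NN operation encoding: counts of each op type."""
    return np.array([arch.count(op) for op in OPS], dtype=float)

# Generate synthetic (architecture, latency) training pairs.
rng = np.random.default_rng(1)
archs = [list(rng.choice(OPS, size=10)) for _ in range(200)]
X = np.stack([encode(a) for a in archs])
y = X @ TRUE_COST + rng.normal(scale=0.1, size=len(archs))  # noisy latency

# Least-squares fit as the "predictor"; learned predictors (MLPs, GNNs)
# follow the same recipe with more expressive encodings and models.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

test_arch = encode(["conv3x3"] * 3 + ["skip"] * 7)
pred = float(test_arch @ w)
print(round(pred, 1))  # close to the true cost 3*5.0 + 7*0.1 = 15.7
```

The choice of encoding (counts, adjacency matrices, learned embeddings) is exactly the kind of design axis the study above compares.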
no code implementations • 4 Jun 2023 • Yash Akhauri, Mohamed S. Abdelfattah
Many hardware-aware neural architecture search (NAS) methods have been developed to optimize the topology of neural networks (NNs) with the joint objectives of higher accuracy and lower latency.
no code implementations • 15 Sep 2022 • Yash Akhauri, J. Pablo Munoz, Nilesh Jain, Ravi Iyer
Our methodology efficiently discovers an interpretable and generalizable zero-cost proxy that gives state-of-the-art score-accuracy correlation on all datasets and search spaces of NASBench-201 and Network Design Spaces (NDS).
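Score-accuracy correlation for a zero-cost proxy is typically measured with Spearman rank correlation: the proxy only needs to rank networks in the same order as their final trained accuracies. A minimal sketch, with made-up scores and accuracies for six hypothetical networks:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Hypothetical proxy scores vs. final trained accuracies (illustrative).
proxy_scores = np.array([0.12, 0.80, 0.45, 0.95, 0.30, 0.60])
accuracies   = np.array([70.1, 92.3, 85.0, 93.8, 80.5, 88.2])

rho = spearman(proxy_scores, accuracies)
print(round(rho, 3))  # 1.0 -- the two rankings agree perfectly here
```

A high rank correlation is what lets such a proxy replace full training when scoring candidates during search.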
no code implementations • NeurIPS Workshop AIPLANS 2021 • Yash Akhauri, Juan Pablo Munoz, Ravishankar Iyer, Nilesh Jain
Neural networks are becoming increasingly ubiquitous in a wide range of use cases.
no code implementations • 17 Jun 2021 • Yash Akhauri, Adithya Niranjan, J. Pablo Muñoz, Suvadeep Banerjee, Abhijit Davare, Pasquale Cocchini, Anton A. Sorokin, Ravi Iyer, Nilesh Jain
The rapidly evolving field of Artificial Intelligence necessitates automated approaches to co-design neural network architecture and neural accelerators to maximize system efficiency and address productivity challenges.
no code implementations • 10 Apr 2020 • Yash Akhauri
We focus on how to design topologies that complement such a view of neurons, how to automate such a strategy of neural network design, and inference of such networks on Xilinx FPGAs.
no code implementations • 6 Apr 2020 • Yaman Umuroglu, Yash Akhauri, Nicholas J. Fraser, Michaela Blott
Deployment of deep neural networks for applications that require very high throughput or extremely low latency is a severe computational challenge, further exacerbated by inefficiencies in mapping the computation to hardware.
no code implementations • 26 May 2019 • Yash Akhauri
On-board processing elements on UAVs are currently inadequate for training and inference of Deep Neural Networks.