Search Results for author: Yash Akhauri

Found 10 papers, 2 papers with code

Radial Networks: Dynamic Layer Routing for High-Performance Large Language Models

no code implementations · 7 Apr 2024 · Jordan Dotzel, Yash Akhauri, Ahmed S. AbouElhamayed, Carly Jiang, Mohamed Abdelfattah, Zhiru Zhang

In this work, we explore the practicality of layer sparsity by profiling residual connections and establish the relationship between model depth and layer sparsity.

Encodings for Prediction-based Neural Architecture Search

1 code implementation · 4 Mar 2024 · Yash Akhauri, Mohamed S. Abdelfattah

Building on our study, we present our predictor FLAN: Flow Attention for NAS.

Neural Architecture Search · Transfer Learning

On Latency Predictors for Neural Architecture Search

1 code implementation · 4 Mar 2024 · Yash Akhauri, Mohamed S. Abdelfattah

We then design a general latency predictor to comprehensively study (1) the predictor architecture, (2) NN sample selection methods, (3) hardware device representations, and (4) NN operation encoding schemes.

Hardware Aware Neural Architecture Search · Meta-Learning +2

Multi-Predict: Few Shot Predictors For Efficient Neural Architecture Search

no code implementations · 4 Jun 2023 · Yash Akhauri, Mohamed S. Abdelfattah

Many hardware-aware neural architecture search (NAS) methods have been developed to optimize the topology of neural networks (NNs) with the joint objectives of higher accuracy and lower latency.

Hardware Aware Neural Architecture Search · Meta-Learning +1

EZNAS: Evolving Zero Cost Proxies For Neural Architecture Scoring

no code implementations · 15 Sep 2022 · Yash Akhauri, J. Pablo Munoz, Nilesh Jain, Ravi Iyer

Our methodology efficiently discovers an interpretable and generalizable zero-cost proxy that achieves state-of-the-art score-accuracy correlation on all datasets and search spaces of NASBench-201 and Network Design Spaces (NDS).

Neural Architecture Search

RHNAS: Realizable Hardware and Neural Architecture Search

no code implementations · 17 Jun 2021 · Yash Akhauri, Adithya Niranjan, J. Pablo Muñoz, Suvadeep Banerjee, Abhijit Davare, Pasquale Cocchini, Anton A. Sorokin, Ravi Iyer, Nilesh Jain

The rapidly evolving field of Artificial Intelligence necessitates automated approaches to co-design neural network architecture and neural accelerators to maximize system efficiency and address productivity challenges.

Neural Architecture Search

Exposing Hardware Building Blocks to Machine Learning Frameworks

no code implementations · 10 Apr 2020 · Yash Akhauri

We focus on how to design topologies that complement such a view of neurons, how to automate this design strategy, and how to perform inference of such networks on Xilinx FPGAs.

BIG-bench Machine Learning · Quantization

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications

no code implementations · 6 Apr 2020 · Yaman Umuroglu, Yash Akhauri, Nicholas J. Fraser, Michaela Blott

Deployment of deep neural networks for applications that require very high throughput or extremely low latency is a severe computational challenge, further exacerbated by inefficiencies in mapping the computation to hardware.

Network Intrusion Detection · Quantization

HadaNets: Flexible Quantization Strategies for Neural Networks

no code implementations · 26 May 2019 · Yash Akhauri

On-board processing elements on UAVs are currently inadequate for training and inference of Deep Neural Networks.

Model Compression · Quantization
