Search Results for author: Sairam Sundaresan

Found 14 papers, 1 paper with code

SimQ-NAS: Simultaneous Quantization Policy and Neural Architecture Search

no code implementations • 19 Dec 2023 • Sharath Nittur Sridhar, Maciej Szankin, Fang Chen, Sairam Sundaresan, Anthony Sarah

In this paper, we demonstrate that multi-objective search algorithms paired with lightly trained predictors can efficiently search for both the sub-network architecture and the corresponding quantization policy, outperforming the respective baselines across performance objectives such as accuracy, model size, and latency.

Neural Architecture Search • Quantization
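
The joint search described above lends itself to a compact sketch: sample (architecture, quantization) candidates, score them with cheap predictors, and keep the Pareto front. The search space, predictor formulas, and objectives below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a multi-objective search over architectures and
# quantization policies, scored by stand-in "lightly trained" predictors.
import random

SEARCH_SPACE = {
    "depth": [2, 3, 4],        # hypothetical elastic dimensions
    "width": [128, 256, 512],
    "quant_bits": [4, 8, 16],  # hypothetical per-network quantization policy
}

def sample_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def predict_objectives(cand):
    """Stand-in for lightly trained predictors: returns (accuracy, model size)."""
    acc = 0.6 + 0.05 * cand["depth"] + 0.0002 * cand["width"] \
          - 0.01 * (16 - cand["quant_bits"]) / 4
    size = cand["depth"] * cand["width"] * cand["quant_bits"] / 8  # toy byte count
    return acc, size

def dominates(a, b):
    """a dominates b if accuracy is >= and size is <=, and a differs from b."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

pareto = []
for _ in range(200):
    cand = sample_candidate()
    obj = predict_objectives(cand)
    if any(dominates(p_obj, obj) for _, p_obj in pareto):
        continue  # dominated by an existing point
    pareto = [(c, o) for c, o in pareto if not dominates(obj, o)]
    pareto.append((cand, obj))

for cand, (acc, size) in sorted(pareto, key=lambda p: p[1][1]):
    print(f"{cand} -> predicted acc {acc:.3f}, size {size:.0f} B")
```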

InstaTune: Instantaneous Neural Architecture Search During Fine-Tuning

no code implementations • 29 Aug 2023 • Sharath Nittur Sridhar, Souvik Kundu, Sairam Sundaresan, Maciej Szankin, Anthony Sarah

However, training super-networks from scratch can be extremely time-consuming and compute-intensive, especially for large models that rely on a two-stage training process of pre-training and fine-tuning.

Neural Architecture Search

Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT

no code implementations • 14 Jul 2023 • Souvik Kundu, Sharath Nittur Sridhar, Maciej Szankin, Sairam Sundaresan

In this paper, we present Sensi-BERT, a sensitivity-driven, efficient fine-tuning approach for BERT models that takes an off-the-shelf pre-trained BERT model and yields highly parameter-efficient models for downstream tasks.

QNLI • QQP • +4
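
As a rough illustration of sensitivity-driven fine-tuning, the sketch below scores each parameter tensor by its gradient magnitude on a calibration batch and freezes everything outside the most sensitive half. The tiny stand-in model and the scoring rule are assumptions for illustration, not Sensi-BERT's exact recipe.

```python
# Hedged sketch: rank parameter tensors by gradient-based sensitivity,
# then fine-tune only the top fraction for parameter efficiency.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))

# One calibration pass to populate gradients.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Sensitivity score: mean absolute gradient per parameter tensor.
scores = {name: p.grad.abs().mean().item() for name, p in model.named_parameters()}
keep = set(sorted(scores, key=scores.get, reverse=True)[: len(scores) // 2])

# Fine-tune only the sensitive half; freeze the rest.
for name, p in model.named_parameters():
    p.requires_grad = name in keep

print("trainable tensors:", sorted(keep))
```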

Sparse Mixture Once-for-all Adversarial Training for Efficient In-Situ Trade-Off Between Accuracy and Robustness of DNNs

no code implementations • 27 Dec 2022 • Souvik Kundu, Sairam Sundaresan, Sharath Nittur Sridhar, Shunlin Lu, Han Tang, Peter A. Beerel

Existing deep neural networks (DNNs) that achieve state-of-the-art (SOTA) performance on both clean and adversarially perturbed images rely on either activation- or weight-conditioned convolution operations.

Image Classification

A Hardware-Aware Framework for Accelerating Neural Architecture Search Across Modalities

no code implementations • 19 May 2022 • Daniel Cummings, Anthony Sarah, Sharath Nittur Sridhar, Maciej Szankin, Juan Pablo Munoz, Sairam Sundaresan

Recent advances in Neural Architecture Search (NAS) such as one-shot NAS offer the ability to extract specialized hardware-aware sub-network configurations from a task-specific super-network.

Evolutionary Algorithms • Image Classification • +2
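
One-shot NAS of the kind described above trains shared weights in a super-network and carves out sub-networks by slicing. The elastic layer below is a minimal, hypothetical example of that extraction step, not the framework's actual API.

```python
# Hedged sketch of sub-network extraction from a one-shot super-network:
# sub-networks reuse a slice of the shared weight tensor.
import torch
import torch.nn as nn

class ElasticLinear(nn.Module):
    """Linear layer whose active output width is chosen at extraction time."""
    def __init__(self, in_features, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, active_out):
        # A sub-network reuses the first `active_out` rows of the shared weights.
        return nn.functional.linear(x, self.weight[:active_out], self.bias[:active_out])

super_layer = ElasticLinear(in_features=16, max_out=64)
x = torch.randn(4, 16)

# Hardware-aware choice: a narrow sub-network for a latency-constrained target.
print(super_layer(x, active_out=32).shape)  # torch.Size([4, 32])
```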

A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness

no code implementations • 28 Mar 2022 • Souvik Kundu, Sairam Sundaresan, Massoud Pedram, Peter A. Beerel

In this paper, we present a fast learnable once-for-all adversarial training (FLOAT) algorithm which, instead of the existing FiLM-based conditioning, uses a unique weight-conditioned learning scheme that requires no additional layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training.

Image Classification
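
The weight-conditioned idea behind FLOAT can be sketched as a layer whose existing weight tensor is transformed by a learnable scale and a conditioning scalar, with no extra layers added. The additive-noise form below is one plausible reading, offered as an assumption rather than the paper's exact transformation.

```python
# Hedged sketch of weight-conditioned inference: a scalar `lam` tunes the
# accuracy/robustness operating point in situ, transforming weights directly.
import torch
import torch.nn as nn

class ConditionedLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.02)
        self.alpha = nn.Parameter(torch.ones(out_f, 1))  # learnable scale
        self.register_buffer("noise", torch.randn(out_f, in_f) * 0.02)

    def forward(self, x, lam):
        # lam in [0, 1]: 0 -> plain weights (favor clean accuracy),
        # 1 -> fully conditioned weights (favor robustness). No extra layers.
        w = self.weight + lam * self.alpha * self.noise
        return nn.functional.linear(x, w)

layer = ConditionedLinear(8, 4)
x = torch.randn(2, 8)
clean_out = layer(x, lam=0.0)   # clean-accuracy mode
robust_out = layer(x, lam=1.0)  # robust mode
print(clean_out.shape, robust_out.shape)
```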

A Hardware-Aware System for Accelerating Deep Neural Network Optimization

no code implementations • 25 Feb 2022 • Anthony Sarah, Daniel Cummings, Sharath Nittur Sridhar, Sairam Sundaresan, Maciej Szankin, Tristan Webb, J. Pablo Munoz

These methods decouple the super-network training from the sub-network search and thus decrease the computational burden of specializing to different hardware platforms.

Bayesian Optimization • Evolutionary Algorithms • +1

TrimBERT: Tailoring BERT for Trade-offs

no code implementations • 24 Feb 2022 • Sharath Nittur Sridhar, Anthony Sarah, Sairam Sundaresan

Models based on BERT have been extremely successful in solving a variety of natural language processing (NLP) tasks.

FLOAT: Fast Learnable Once-for-all Adversarial Training for Tunable Trade-off Between Accuracy and Robustness

no code implementations • 29 Sep 2021 • Souvik Kundu, Peter Anthony Beerel, Sairam Sundaresan

In this paper, we present Fast Learnable Once-for-all Adversarial Training (FLOAT), which transforms the weight tensors without using extra layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training.

Image Classification

AttentionLite: Towards Efficient Self-Attention Models for Vision

no code implementations • 21 Dec 2020 • Souvik Kundu, Sairam Sundaresan

We propose a novel framework for producing a class of parameter- and compute-efficient models called AttentionLite, suitable for resource-constrained applications.

Knowledge Distillation
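
Given the Knowledge Distillation tag, a generic distillation objective is worth sketching: blend a KL term against the teacher's softened outputs with ordinary cross-entropy. This is textbook distillation shown as an illustration, not AttentionLite's specific training loss.

```python
# Standard knowledge-distillation loss: soft teacher targets + hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend KL to the teacher's softened distribution with cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10, requires_grad=True)  # student logits
t = torch.randn(8, 10)                      # teacher logits
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())
```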

Attention-based Image Upsampling

no code implementations • 17 Dec 2020 • Souvik Kundu, Hesham Mostafa, Sharath Nittur Sridhar, Sairam Sundaresan

Convolutional layers are an integral part of many deep neural network solutions in computer vision.

Image Classification • Image Super-Resolution • +2
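
The title suggests replacing transposed convolutions with attention for upsampling. Below is a hedged sketch in which high-resolution query positions attend globally over low-resolution features; the paper's actual formulation (likely localized) may differ.

```python
# Hedged sketch of attention-based upsampling: queries at high-res positions
# attend over low-res keys/values instead of using a transposed convolution.
import torch
import torch.nn.functional as F

def attention_upsample(x, scale=2):
    """x: (B, C, H, W) -> (B, C, H*scale, W*scale) via query/key attention."""
    b, c, h, w = x.shape
    kv = x.flatten(2).transpose(1, 2)              # (B, H*W, C) keys/values
    q = F.interpolate(x, scale_factor=scale, mode="nearest")
    q = q.flatten(2).transpose(1, 2)               # (B, sH*sW, C) queries
    attn = torch.softmax(q @ kv.transpose(1, 2) / c ** 0.5, dim=-1)
    out = attn @ kv                                # weighted sum of low-res features
    return out.transpose(1, 2).reshape(b, c, h * scale, w * scale)

x = torch.randn(1, 8, 4, 4)
print(attention_upsample(x).shape)  # torch.Size([1, 8, 8, 8])
```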

Compact Scene Graphs for Layout Composition and Patch Retrieval

no code implementations • 19 Apr 2019 • Subarna Tripathi, Sharath Nittur Sridhar, Sairam Sundaresan, Hanlin Tang

Structured representations such as scene graphs serve as an efficient and compact representation that can be used for downstream rendering or retrieval tasks.

Image Generation • Retrieval
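
As a minimal illustration of the scene-graph representation mentioned above: objects become nodes with layout boxes, and pairwise relationships become labeled edges. The field names below are hypothetical, not the paper's schema.

```python
# Toy scene-graph data structure: object nodes plus labeled relation edges,
# compact enough to drive downstream layout composition or patch retrieval.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    bbox: tuple  # (x, y, w, h) layout coordinates

@dataclass
class Relation:
    subject: str
    predicate: str
    object: str

scene_graph = {
    "objects": [
        SceneObject("person", (10, 20, 40, 80)),
        SceneObject("bike", (60, 50, 50, 40)),
    ],
    "relations": [Relation("person", "riding", "bike")],
}
print(scene_graph["relations"][0])
```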
