Search Results for author: Sairam Sundaresan

Found 10 papers, 1 paper with code

A Hardware-Aware Framework for Accelerating Neural Architecture Search Across Modalities

no code implementations19 May 2022 Daniel Cummings, Anthony Sarah, Sharath Nittur Sridhar, Maciej Szankin, Juan Pablo Munoz, Sairam Sundaresan

Recent advances in Neural Architecture Search (NAS) such as one-shot NAS offer the ability to extract specialized hardware-aware sub-network configurations from a task-specific super-network.
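The idea behind extracting hardware-aware sub-networks can be illustrated with a minimal sketch. Everything here is hypothetical: the search space, the `estimated_latency` cost proxy, and the random-search loop are illustrative stand-ins, not the paper's actual system (which searches a trained super-network and ranks candidates by accuracy).

```python
import random

# Hypothetical elastic search space: each sub-network picks one value per dimension.
SEARCH_SPACE = {
    "depth": [2, 3, 4],
    "width": [32, 64, 128],
    "kernel": [3, 5],
}

def sample_subnetwork(space):
    """Sample one sub-network configuration from the super-network's space."""
    return {dim: random.choice(choices) for dim, choices in space.items()}

def estimated_latency(config):
    """Toy hardware cost proxy; real systems use measured or predicted latency."""
    return config["depth"] * config["width"] * config["kernel"] ** 2

def search(space, latency_budget, trials=1000):
    """Random search for a sub-network that fits a hardware latency budget."""
    best = None
    for _ in range(trials):
        cfg = sample_subnetwork(space)
        if estimated_latency(cfg) <= latency_budget:
            # A real NAS ranks feasible configs by validation accuracy;
            # here we keep the largest feasible model as a stand-in.
            if best is None or estimated_latency(cfg) > estimated_latency(best):
                best = cfg
    return best

config = search(SEARCH_SPACE, latency_budget=50_000)
```

Because the super-network is trained once, this search step can be repeated cheaply for each target hardware platform, which is the decoupling the papers below emphasize.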

Image Classification Machine Translation +1

A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness

no code implementations28 Mar 2022 Souvik Kundu, Sairam Sundaresan, Massoud Pedram, Peter A. Beerel

In this paper, we present a fast learnable once-for-all adversarial training (FLOAT) algorithm which, instead of the existing FiLM-based conditioning, uses a unique weight-conditioned learning scheme that requires no additional layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training.
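The contrast with FiLM-style conditioning can be sketched in a few lines. This is a loose illustration only: the multiplicative form `w * (1 + lam * alpha)`, the fixed `alpha` values, and all shapes are assumptions for exposition, not FLOAT's actual transformation, which is learned during training.

```python
import random

random.seed(0)

# Toy weight vector of one layer; shapes are illustrative only.
W = [random.gauss(0, 1) for _ in range(8)]

# Hypothetical conditioning parameters (learned in the real method, fixed here):
# they reparameterize the weights directly, avoiding the extra FiLM layers
# that prior conditional-training approaches insert into the network.
alpha = [random.gauss(0, 0.1) for _ in range(8)]

def conditioned_weights(w, a, lam):
    """Blend standard and robust behavior with a scalar knob lam in [0, 1].

    lam = 0 recovers the unmodified weights (accuracy mode); lam = 1 applies
    the full conditioning (robustness mode). No layers are added, so the
    parameter count and inference latency are essentially unchanged.
    """
    return [wi * (1.0 + lam * ai) for wi, ai in zip(w, a)]

W_clean = conditioned_weights(W, alpha, lam=0.0)   # identical to W
W_robust = conditioned_weights(W, alpha, lam=1.0)  # robustness-oriented weights
```

The tunable trade-off in the title corresponds to sweeping `lam` at inference time with a single set of weights.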

Image Classification

A Hardware-Aware System for Accelerating Deep Neural Network Optimization

no code implementations25 Feb 2022 Anthony Sarah, Daniel Cummings, Sharath Nittur Sridhar, Sairam Sundaresan, Maciej Szankin, Tristan Webb, J. Pablo Munoz

These methods decouple the super-network training from the sub-network search and thus decrease the computational burden of specializing to different hardware platforms.

Neural Architecture Search

TrimBERT: Tailoring BERT for Trade-offs

no code implementations24 Feb 2022 Sharath Nittur Sridhar, Anthony Sarah, Sairam Sundaresan

Models based on BERT have been extremely successful in solving a variety of natural language processing (NLP) tasks.

FLOAT: Fast Learnable Once-for-all Adversarial Training for Tunable Trade-off Between Accuracy and Robustness

no code implementations29 Sep 2021 Souvik Kundu, Peter Anthony Beerel, Sairam Sundaresan

In this paper, we present Fast Learnable Once-for-all Adversarial Training (FLOAT), which transforms the weight tensors without using extra layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training.

Image Classification

AttentionLite: Towards Efficient Self-Attention Models for Vision

no code implementations21 Dec 2020 Souvik Kundu, Sairam Sundaresan

We propose a novel framework for producing a class of parameter- and compute-efficient models called AttentionLite, suitable for resource-constrained applications.

Knowledge Distillation

Attention-based Image Upsampling

no code implementations17 Dec 2020 Souvik Kundu, Hesham Mostafa, Sharath Nittur Sridhar, Sairam Sundaresan

Convolutional layers are an integral part of many deep neural network solutions in computer vision.

Image Classification Image Super-Resolution +3

Compact Scene Graphs for Layout Composition and Patch Retrieval

no code implementations19 Apr 2019 Subarna Tripathi, Sharath Nittur Sridhar, Sairam Sundaresan, Hanlin Tang

Structured representations such as scene graphs serve as an efficient and compact representation that can be used for downstream rendering or retrieval tasks.

Image Generation
