Search Results for author: Angshuman Parashar

Found 9 papers, 3 papers with code

HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity

no code implementations • 22 May 2023 • Yannan Nellie Wu, Po-An Tsai, Saurav Muralidharan, Angshuman Parashar, Vivienne Sze, Joel S. Emer

Due to complex interactions among various deep neural network (DNN) optimization techniques, modern DNNs can have weights and activations that are dense or sparse with diverse sparsity degrees.
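For intuition only: hierarchical structured sparsity composes simple per-level sparsity patterns. The sketch below (with invented block sizes and ratios, not the paper's) checks a two-level N:M constraint on a weight vector.

```python
import numpy as np

def satisfies_n_m(block, n):
    """True if at most n entries in this block are nonzero."""
    return np.count_nonzero(block) <= n

def satisfies_hierarchical(vec, inner=(2, 4), outer=(2, 4)):
    """Check a two-level structured-sparsity constraint (illustrative sizes).

    Level 1: within each group of inner[1] elements, at most inner[0]
    are nonzero. Level 2: within each group of outer[1] level-1 blocks,
    at most outer[0] blocks contain any nonzero at all.
    """
    n_in, m_in = inner
    n_out, m_out = outer
    blocks = vec.reshape(-1, m_in)
    if not all(satisfies_n_m(b, n_in) for b in blocks):
        return False
    block_active = np.count_nonzero(blocks, axis=1) > 0
    groups = block_active.reshape(-1, m_out)
    return all(g.sum() <= n_out for g in groups)

# 16 weights = 4 inner blocks of 4 = 1 outer group of 4 blocks.
w = np.array([1, 0, 2, 0,  0, 0, 0, 0,  0, 3, 0, 4,  0, 0, 0, 0], float)
print(satisfies_hierarchical(w))  # True: 2 of 4 blocks active, each 2:4
```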

Demystifying Map Space Exploration for NPUs

1 code implementation • 7 Oct 2022 • Sheng-Chun Kao, Angshuman Parashar, Po-An Tsai, Tushar Krishna

Map Space Exploration is the problem of finding optimized mappings of a Deep Neural Network (DNN) model on an accelerator.

Tasks: Navigate, Neural Architecture Search
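As a toy illustration of what a map space contains, the sketch below brute-forces tile sizes for a single GEMM loop nest under a made-up buffer capacity and traffic proxy; it is not any of the paper's mappers.

```python
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def explore(M=64, N=64, K=64, buffer_words=4096):
    """Enumerate tile sizes (one axis of a real map space) and keep the
    best mapping under a toy cost: DRAM traffic for a tiled GEMM."""
    best = None
    for tm, tn, tk in product(divisors(M), divisors(N), divisors(K)):
        footprint = tm * tk + tk * tn + tm * tn   # tiles of A, B, C
        if footprint > buffer_words:
            continue  # mapping invalid: tiles do not fit on-chip
        # Toy traffic model: A is refetched per N-tile, B per M-tile.
        traffic = M * K * (N // tn) + K * N * (M // tm) + 2 * M * N
        if best is None or traffic < best[0]:
            best = (traffic, (tm, tn, tk))
    return best

print(explore())  # (toy traffic, (tile_M, tile_N, tile_K))
```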

Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling

no code implementations • 12 May 2022 • Yannan Nellie Wu, Po-An Tsai, Angshuman Parashar, Vivienne Sze, Joel S. Emer

This paper first presents a unified taxonomy to systematically describe the diverse sparse tensor accelerator design space.
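One way to picture a taxonomy-described design space: each design point fixes, per tensor, a representation format and a sparse acceleration action. The field names below are illustrative, only loosely echoing the paper's vocabulary.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TensorSparsityChoice:
    """One tensor's treatment in a sparse accelerator design point."""
    fmt: str                # e.g. "uncompressed", "bitmask", "CSR"
    action: Optional[str]   # e.g. None, "gate", "skip" ineffectual work

@dataclass
class DesignPoint:
    weights: TensorSparsityChoice
    activations: TensorSparsityChoice

# A hypothetical point: compressed weights with skipping, dense activations.
point = DesignPoint(
    weights=TensorSparsityChoice(fmt="bitmask", action="skip"),
    activations=TensorSparsityChoice(fmt="uncompressed", action=None),
)
print(point)
```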

DiGamma: Domain-aware Genetic Algorithm for HW-Mapping Co-optimization for DNN Accelerators

2 code implementations • 26 Jan 2022 • Sheng-Chun Kao, Michael Pellauer, Angshuman Parashar, Tushar Krishna

The design of DNN accelerators includes two key parts: HW resource configuration and mapping strategy.
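As a generic sketch of GA-based co-optimization (without DiGamma's domain-aware operators), the genome below carries one hardware gene and one mapping gene, evolved against a toy fitness.

```python
import random

random.seed(0)

def fitness(genome):
    """Toy objective: genome = (num_PEs, tile). Reward balance, tax HW size."""
    pes, tile = genome
    utilization = min(tile, pes) / max(tile, pes)
    return utilization / pes  # pretend bigger hardware costs more

def mutate(genome):
    pes, tile = genome
    if random.random() < 0.5:
        pes = max(1, pes + random.choice([-8, 8]))    # HW resource gene
    else:
        tile = max(1, tile + random.choice([-4, 4]))  # mapping gene
    return (pes, tile)

def crossover(a, b):
    return (a[0], b[1])  # HW from one parent, mapping from the other

pop = [(random.randint(8, 128), random.randint(4, 64)) for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children
print(max(pop, key=fitness))
```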

Union: A Unified HW-SW Co-Design Ecosystem in MLIR for Evaluating Tensor Operations on Spatial Accelerators

no code implementations • 15 Sep 2021 • Geonhwa Jeong, Gokcen Kestor, Prasanth Chatarasi, Angshuman Parashar, Po-An Tsai, Sivasankaran Rajamanickam, Roberto Gioiosa, Tushar Krishna

The algorithms and accelerator cost models are connected via a novel mapping abstraction that captures the map space of spatial accelerators which can be systematically pruned based on constraints from the hardware, workload, and mapper.
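A minimal sketch of constraint-based map-space pruning, assuming an invented mapping encoding and constraint format; the actual abstraction in the paper is far richer.

```python
from itertools import product

def candidate_mappings(loop_bounds):
    """Hypothetical map space: choose which two loops run spatially."""
    for x, y in product(loop_bounds, loop_bounds):
        if x != y:
            yield {"spatial_x": x, "spatial_y": y}

def prune(mappings, loop_bounds, pe_dims, mapper_constraints):
    """Drop mappings that violate hardware or mapper constraints."""
    px, py = pe_dims
    for m in mappings:
        if loop_bounds[m["spatial_x"]] > px:   # hardware: PE array width
            continue
        if loop_bounds[m["spatial_y"]] > py:   # hardware: PE array height
            continue
        if m["spatial_x"] in mapper_constraints.get("temporal_only", ()):
            continue                           # mapper-imposed restriction
        yield m

bounds = {"K": 64, "C": 16, "X": 14, "Y": 14}
survivors = prune(candidate_mappings(bounds), bounds, (16, 16),
                  {"temporal_only": {"K"}})
for m in survivors:
    print(m)
```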

Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search

1 code implementation • 2 Mar 2021 • Kartik Hegde, Po-An Tsai, Sitao Huang, Vikas Chandra, Angshuman Parashar, Christopher W. Fletcher

The key idea is to derive a smooth, differentiable approximation to the otherwise non-smooth, non-convex search space.
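The snippet below illustrates that idea generically: sample the non-smooth cost, fit a smooth quadratic surrogate, then run gradient descent on the surrogate. It is a toy stand-in, not the paper's learned surrogate model.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_cost(x):
    # Non-smooth, non-convex stand-in for a mapping cost (invented).
    return abs(round(x[0]) - 3) + 0.5 * abs(round(x[1]) + 1)

# 1) Sample the search space and fit a smooth quadratic surrogate.
X = rng.uniform(-8, 8, size=(500, 2))
y = np.array([true_cost(x) for x in X])
feats = np.column_stack([X[:, 0]**2, X[:, 1]**2, X[:, 0], X[:, 1],
                         np.ones(len(X))])
w, *_ = np.linalg.lstsq(feats, y, rcond=None)

def surrogate_grad(x):
    # Analytic gradient of the fitted quadratic surrogate.
    return np.array([2 * w[0] * x[0] + w[2], 2 * w[1] * x[1] + w[3]])

# 2) Gradient descent on the differentiable approximation.
x = np.array([7.0, 7.0])
for _ in range(300):
    x -= 0.3 * surrogate_grad(x)
print(np.round(x))  # should land near the true optimum, (3, -1)
```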

Marvel: A Data-centric Compiler for DNN Operators on Spatial Accelerators

no code implementations • 18 Feb 2020 • Prasanth Chatarasi, Hyoukjun Kwon, Natesh Raina, Saurabh Malik, Vaisakh Haridas, Angshuman Parashar, Michael Pellauer, Tushar Krishna, Vivek Sarkar

Searching for the optimal mappings is challenging because of the large space of mappings, and this challenge gets exacerbated with new operators and diverse accelerator configurations. To address this challenge, we propose a decoupled off-chip/on-chip approach that decomposes the mapping space into off-chip and on-chip subspaces, and first optimizes the off-chip subspace followed by the on-chip subspace.
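A schematic of the decoupled two-phase search, with invented cost proxies: phase 1 picks an off-chip tiling, phase 2 searches on-chip loop orders only for that winner.

```python
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def offchip_cost(tile):
    """Toy proxy for DRAM traffic of a 64x64x64 GEMM under this tiling."""
    tm, tn = tile
    return 64 * 64 * (64 // tn + 64 // tm)

def onchip_cost(tile, order):
    """Toy proxy for on-chip cost given the tiling and loop order."""
    tm, tn = tile
    return tm * tn + (10 if order == "mn" else 12)

# Phase 1: optimize the off-chip subspace, ignoring on-chip details.
tiles = [t for t in product(divisors(64), divisors(64))
         if t[0] * t[1] <= 1024]
best_tile = min(tiles, key=offchip_cost)

# Phase 2: optimize the on-chip subspace only for the winning tiling.
best_order = min(["mn", "nm"], key=lambda o: onchip_cost(best_tile, o))
print(best_tile, best_order)
```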

Understanding Reuse, Performance, and Hardware Cost of DNN Dataflows: A Data-Centric Approach Using MAESTRO

no code implementations • 4 May 2018 • Hyoukjun Kwon, Prasanth Chatarasi, Michael Pellauer, Angshuman Parashar, Vivek Sarkar, Tushar Krishna

The data partitioning and scheduling strategies used by DNN accelerators to leverage reuse and perform staging are known as dataflow, and they directly impact the performance and energy efficiency of DNN accelerator designs.

Tasks: Scheduling, valid
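To make "dataflow" concrete, here is a back-of-the-envelope count of weight fetches for a small GEMM under two classic dataflows; the counting is standard loop-nest reasoning, not MAESTRO's cost model.

```python
def weight_fetches(M, N, K, dataflow):
    """Count weight-matrix (K x N) fetches from the outer memory level
    for C[M,N] = A[M,K] @ B[K,N] under two classic dataflows."""
    if dataflow == "weight_stationary":
        # Each weight is fetched once and reused across all M rows.
        return K * N
    if dataflow == "output_stationary":
        # Each output element streams its K weights, so the weights
        # are refetched for every one of the M rows.
        return M * K * N
    raise ValueError(dataflow)

for df in ("weight_stationary", "output_stationary"):
    print(df, weight_fetches(M=128, N=64, K=64, dataflow=df))
```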
