Search Results for author: Po-An Tsai

Found 6 papers, 2 papers with code

Abstracting Sparse DNN Acceleration via Structured Sparse Tensor Decomposition

no code implementations • 12 Mar 2024 • Geonhwa Jeong, Po-An Tsai, Abhimanyu R. Bambhaniya, Stephen W. Keckler, Tushar Krishna

Next, we develop a software framework, TASDER, to accelerate DNNs by searching for layer-wise, high-quality structured decompositions of both weight and activation tensors, so that they can be accelerated by any system with structured sparse hardware support.

Tensor Decomposition
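To make the structured-decomposition idea concrete, here is a minimal numpy sketch that greedily splits a dense weight matrix into a sum of 2:4 structured-sparse components, each executable on structured sparse hardware; the greedy top-magnitude scheme and all parameters are illustrative assumptions, not TASDER's actual search:

```python
import numpy as np

def nm_mask(x, n=2, m=4):
    """Boolean n:m mask: keep the n largest-|value| entries in each
    group of m consecutive elements along the flattened last axis."""
    groups = x.reshape(-1, m)
    top = np.argsort(-np.abs(groups), axis=1)[:, :n]
    mask = np.zeros(groups.shape, dtype=bool)
    np.put_along_axis(mask, top, True, axis=1)
    return mask.reshape(x.shape)

def structured_decompose(w, n=2, m=4, terms=2):
    """Greedily express w as a sum of `terms` n:m structured-sparse tensors."""
    residual = w.copy()
    components = []
    for _ in range(terms):
        c = np.where(nm_mask(residual, n, m), residual, 0.0)
        components.append(c)
        residual = residual - c
    return components, residual

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
comps, res = structured_decompose(w)
# With n=2, m=4 and two terms, every element is covered: error ~ 0.
print("relative error:", np.linalg.norm(res) / np.linalg.norm(w))
```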

HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity

no code implementations • 22 May 2023 • Yannan Nellie Wu, Po-An Tsai, Saurav Muralidharan, Angshuman Parashar, Vivienne Sze, Joel S. Emer

Due to complex interactions among various deep neural network (DNN) optimization techniques, modern DNNs can have weights and activations that are dense or sparse with diverse sparsity degrees.
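A rough sketch of the hierarchical-structured-sparsity idea behind HighLight: composing two simple fixed patterns (here, 1:2 over rows and 2:4 over elements) yields a sparsity degree that neither pattern expresses alone. The level choices and scoring are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def nm_mask(scores, n, m):
    """Keep the n largest scores in each consecutive group of m."""
    g = scores.reshape(-1, m)
    top = np.argsort(-g, axis=1)[:, :n]
    mask = np.zeros(g.shape, dtype=bool)
    np.put_along_axis(mask, top, True, axis=1)
    return mask.reshape(scores.shape)

rng = np.random.default_rng(1)
w = rng.normal(size=(16, 16))

# Level 1 (coarse): 1:2 over whole rows, scored by row norm.
row_mask = nm_mask(np.linalg.norm(w, axis=1), n=1, m=2)
# Level 2 (fine): 2:4 over elements, scored by magnitude.
elem_mask = nm_mask(np.abs(w), n=2, m=4)

# Composing the levels multiplies the densities: (1/2) * (2/4) = 0.25,
# a degree neither fixed pattern expresses on its own.
hss_mask = row_mask[:, None] & elem_mask
w_sparse = np.where(hss_mask, w, 0.0)
print("overall density:", hss_mask.mean())
```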

Demystifying Map Space Exploration for NPUs

1 code implementation • 7 Oct 2022 • Sheng-Chun Kao, Angshuman Parashar, Po-An Tsai, Tushar Krishna

Map Space Exploration is the problem of finding optimized mappings of a Deep Neural Network (DNN) model on an accelerator.

Navigate • Neural Architecture Search
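Map space exploration in miniature: the sketch below enumerates tile sizes for a small matmul, prunes mappings that overflow an assumed on-chip buffer, and ranks the survivors with a textbook traffic model. The cost model and buffer budget are illustrative assumptions, not the paper's NPU model:

```python
import itertools

# Workload: C[M,N] += A[M,K] * B[K,N]; hardware: one on-chip buffer.
M, N, K = 128, 128, 128
BUFFER = 4096  # buffer capacity in words (an assumed budget)

def divisors(x):
    return [d for d in range(1, x + 1) if x % d == 0]

def dram_traffic(tm, tn):
    """Textbook-style traffic model: A is re-streamed once per N-tile,
    B once per M-tile, and C is read and written once."""
    return (M * K) * (N // tn) + (K * N) * (M // tm) + 2 * M * N

best = None
for tm, tn, tk in itertools.product(divisors(M), divisors(N), divisors(K)):
    # Hardware constraint: all three tiles must fit in the buffer at once.
    if tm * tk + tk * tn + tm * tn > BUFFER:
        continue
    cost = dram_traffic(tm, tn)  # tk only affects feasibility in this model
    if best is None or cost < best[0]:
        best = (cost, (tm, tn, tk))

print("best mapping (tm, tn, tk):", best[1], "with traffic:", best[0])
```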

Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling

no code implementations • 12 May 2022 • Yannan Nellie Wu, Po-An Tsai, Angshuman Parashar, Vivienne Sze, Joel S. Emer

This paper first presents a unified taxonomy to systematically describe the diverse sparse tensor accelerator design space.
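Loosely in the spirit of such a taxonomy, the sketch below encodes a sparse-accelerator design point as a per-tensor representation format and estimates its storage cost analytically; the fields and formulas are assumptions for illustration, not Sparseloop's actual specification:

```python
from dataclasses import dataclass

@dataclass
class TensorSpec:
    shape: tuple      # logical tensor dimensions
    density: float    # fraction of nonzero elements
    fmt: str          # "uncompressed" | "bitmask" | "coordinate"

def storage_bits(t: TensorSpec, value_bits: int = 8) -> float:
    """Analytical storage estimate for one tensor under its chosen format."""
    n = 1
    for d in t.shape:
        n *= d
    nnz = t.density * n
    if t.fmt == "uncompressed":
        return n * value_bits
    if t.fmt == "bitmask":
        return n + nnz * value_bits              # 1 flag bit per element
    if t.fmt == "coordinate":
        coord_bits = sum(max(d - 1, 1).bit_length() for d in t.shape)
        return nnz * (coord_bits + value_bits)   # coordinates + value per nonzero
    raise ValueError(t.fmt)

w = TensorSpec(shape=(256, 256), density=0.1, fmt="bitmask")
print("bitmask storage (bits):", storage_bits(w))
```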

Union: A Unified HW-SW Co-Design Ecosystem in MLIR for Evaluating Tensor Operations on Spatial Accelerators

no code implementations • 15 Sep 2021 • Geonhwa Jeong, Gokcen Kestor, Prasanth Chatarasi, Angshuman Parashar, Po-An Tsai, Sivasankaran Rajamanickam, Roberto Gioiosa, Tushar Krishna

The algorithms and accelerator cost models are connected via a novel mapping abstraction that captures the map space of spatial accelerators; this space can be systematically pruned based on constraints from the hardware, workload, and mapper.
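A toy version of that pruning step: generate candidate spatial mappings, then filter them through hardware, workload, and mapper predicates. The constraint names and limits are assumed for the sketch, not Union's actual abstraction:

```python
import itertools

M, K = 64, 64                # workload: a 64x64 weight tensor
PE_ROWS, PE_COLS = 16, 16    # hardware: spatial array dimensions
MAX_TILE = 32                # mapper heuristic: cap on temporal tiling

def candidates():
    for sm, sk in itertools.product(range(1, M + 1), range(1, K + 1)):
        yield {"spatial_m": sm, "spatial_k": sk}

def hw_ok(m):        # spatial factors must fit the PE array
    return m["spatial_m"] <= PE_ROWS and m["spatial_k"] <= PE_COLS

def workload_ok(m):  # factors must divide the problem dimensions
    return M % m["spatial_m"] == 0 and K % m["spatial_k"] == 0

def mapper_ok(m):    # heuristic: keep the temporal tile count bounded
    return (M // m["spatial_m"]) * (K // m["spatial_k"]) <= MAX_TILE * MAX_TILE

space = [m for m in candidates() if hw_ok(m) and workload_ok(m) and mapper_ok(m)]
print(f"{len(space)} mappings survive pruning out of {M * K}")
```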

Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search

1 code implementation • 2 Mar 2021 • Kartik Hegde, Po-An Tsai, Sitao Huang, Vikas Chandra, Angshuman Parashar, Christopher W. Fletcher

The key idea is to derive a smooth, differentiable approximation to the otherwise non-smooth, non-convex search space.
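The sketch below mimics that idea on a toy cost function: fit a smooth surrogate to samples of a non-smooth mapping cost, then run gradient descent on the surrogate rather than the true cost. Mind Mappings learns a neural surrogate; a least-squares quadratic stands in here to keep the example dependency-free, and the cost function itself is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def true_cost(t):
    """Toy non-smooth, non-convex mapping cost over two tile sizes:
    ceil() models partial tiles, the product term models buffer pressure."""
    t1, t2 = np.maximum(t, 1)
    return float(np.ceil(64 / t1) * np.ceil(64 / t2) + 0.02 * t1 * t2)

# 1) Sample mappings and fit a smooth quadratic surrogate by least squares.
pts = rng.uniform(1, 64, size=(200, 2))
y = np.array([true_cost(p) for p in pts])
feats = lambda t: np.array([1.0, t[0], t[1], t[0]**2, t[1]**2, t[0]*t[1]])
X = np.stack([feats(p) for p in pts])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2) Descend the surrogate's analytic gradient instead of the true cost.
def surrogate_grad(t):
    t1, t2 = t
    return np.array([w[1] + 2 * w[3] * t1 + w[5] * t2,
                     w[2] + 2 * w[4] * t2 + w[5] * t1])

t = np.array([4.0, 4.0])
for _ in range(1000):
    t = np.clip(t - 0.1 * surrogate_grad(t), 1, 64)

t_round = np.round(t)
print("surrogate suggests tiles:", t_round, "true cost:", true_cost(t_round))
```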
