ZigZag: A Memory-Centric Rapid DNN Accelerator Design Space Exploration Framework

22 Jul 2020  ·  Linyan Mei, Pouya Houshmand, Vikram Jain, Sebastian Giraldo, Marian Verhelst

Building efficient embedded deep learning systems requires a tight co-design between DNN algorithms, memory hierarchy, and dataflow. However, owing to the large degrees of freedom in the design space, finding an optimal solution by implementing individual design points is infeasible. Recently, several estimation frameworks for fast design space exploration (DSE) have emerged, yet they suffer from either long runtimes or a limited exploration space. This work introduces ZigZag, a memory-centric rapid DNN accelerator DSE framework that extends the design space with uneven mapping opportunities, in which operands at shared memory levels are no longer bound to use the same memory level for each loop index. To this end, ZigZag uses a memory-centric nested-for-loop format as a uniform representation integrating algorithm, accelerator, and algorithm-to-accelerator mapping, and it consists of three key components: 1) a latency-enhanced analytical Hardware Cost Estimator, 2) a Temporal Mapping Generator that supports even/uneven scheduling on any type of memory hierarchy, and 3) an Architecture Generator that explores the whole memory hierarchy design space. Benchmarking experiments against existing frameworks, together with three case studies at different design abstraction levels, show the strength of ZigZag: introducing its uneven scheduling opportunities yields solutions that are up to 33% more energy efficient.
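The memory-centric nested-for-loop idea is easiest to see in code. Below is a minimal, illustrative Python sketch, not the ZigZag API; the `Loop` class, the operand names, and the cost function are all hypothetical. It shows how a layer's temporal loops can each be annotated, per operand (weights W, inputs I, outputs O), with the memory level that serves them, so the three operands may be mapped unevenly across the hierarchy.

```python
# Illustrative sketch of a memory-centric loop-nest representation
# (not the ZigZag API; all names here are made up for this example).
from dataclasses import dataclass

@dataclass
class Loop:
    dim: str        # loop dimension, e.g. "K" (output channels)
    size: int       # loop bound
    mem_level: dict # operand -> index of the memory level serving this loop

# A toy schedule for one conv layer. Note the uneven assignment in the
# middle loop: weights are still served from the register file (level 0)
# while inputs and outputs are already served from SRAM (level 1).
schedule = [
    Loop("K",   8, {"W": 0, "I": 0, "O": 0}),  # innermost
    Loop("C",  16, {"W": 0, "I": 1, "O": 1}),  # uneven: W lags one level
    Loop("OX", 28, {"W": 1, "I": 1, "O": 2}),  # outermost
]

def accesses_per_level(schedule, operand):
    """Sum the cumulative iteration counts attributed to each memory
    level for one operand -- a crude stand-in for a real cost model."""
    counts, total = {}, 1
    for loop in schedule:
        total *= loop.size
        level = loop.mem_level[operand]
        counts[level] = counts.get(level, 0) + total
    return counts

for op in ("W", "I", "O"):
    print(op, accesses_per_level(schedule, op))
```

In an even mapping, every loop's `mem_level` entry would be identical across the three operands; ZigZag's key extension is letting these assignments diverge per operand, which the abstract reports can yield up to 33% more energy-efficient solutions.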


Categories

Distributed, Parallel, and Cluster Computing · ACM classes: C.1.4; C.3; C.4
