PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives

6 Feb 2020 · Sanket Tavarageri, Alexander Heinecke, Sasikanth Avancha, Gagandeep Goyal, Ramakrishna Upadrasta, Bharat Kaul

At the heart of deep learning training and inference are computationally intensive primitives such as convolutions, which form the building blocks of deep neural networks. Researchers have taken two distinct approaches to creating high-performance implementations of deep learning kernels: 1) library development, exemplified by Intel MKL-DNN for CPUs, and 2) automatic compilation, represented by the TensorFlow XLA compiler…
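To make the idea in the title concrete, below is a minimal, hypothetical C sketch of the pattern it describes: an outer convolution loop nest (the part amenable to automatic loop transformations such as reordering and tiling) wrapped around a small fixed-size GEMM microkernel (the part a library would hand-tune or JIT-generate). The blocking factors `CB` and `KB`, the function `gemm_microkernel`, and the blocked data layouts are illustrative assumptions, not code or parameters from the paper.

```c
#include <stdio.h>
#include <stdlib.h>

#define CB 16  /* input-channel block size  (illustrative) */
#define KB 16  /* output-channel block size (illustrative) */

/* Stand-in GEMM microkernel: out[Q][KB] += in[Q][CB] * wt[CB][KB].
 * In a real system this would be an expert-tuned, vectorized kernel;
 * here it is plain C so the example is self-contained. */
static void gemm_microkernel(int Q, const float *in, const float *wt,
                             float *out)
{
    for (int q = 0; q < Q; ++q)
        for (int c = 0; c < CB; ++c)
            for (int k = 0; k < KB; ++k)
                out[q * KB + k] += in[q * CB + c] * wt[c * KB + k];
}

int main(void)
{
    /* Small forward convolution, stride 1, no padding.
     * Blocked layouts (illustrative): input  [N][C/CB][H][W][CB]
     *                                 weight [K/KB][C/CB][R][S][CB][KB]
     *                                 output [N][K/KB][P][Q][KB]       */
    const int N = 1, C = 32, K = 32, H = 18, W = 18, R = 3, S = 3;
    const int P = H - R + 1, Q = W - S + 1;

    float *in  = calloc((size_t)N * C * H * W, sizeof *in);
    float *wt  = calloc((size_t)K * C * R * S, sizeof *wt);
    float *out = calloc((size_t)N * K * P * Q, sizeof *out);
    if (!in || !wt || !out) return 1;

    for (size_t i = 0; i < (size_t)N * C * H * W; ++i) in[i] = 1.0f;
    for (size_t i = 0; i < (size_t)K * C * R * S; ++i) wt[i] = 0.5f;

    /* Outer loop nest around the microkernel: each call computes one
     * row of Q output pixels for one (r, s) filter tap and one pair
     * of channel blocks. */
    for (int n = 0; n < N; ++n)
      for (int kb = 0; kb < K / KB; ++kb)
        for (int cb = 0; cb < C / CB; ++cb)
          for (int p = 0; p < P; ++p)
            for (int r = 0; r < R; ++r)
              for (int s = 0; s < S; ++s)
                gemm_microkernel(Q,
                    &in[((((size_t)n * (C / CB) + cb) * H + p + r) * W + s) * CB],
                    &wt[((((size_t)kb * (C / CB) + cb) * R + r) * S + s) * CB * KB],
                    &out[(((size_t)n * (K / KB) + kb) * P + p) * (size_t)Q * KB]);

    /* With all-ones inputs and 0.5 weights, every output is C*R*S*0.5. */
    printf("out[0] = %.1f (expected %.1f)\n", out[0], C * R * S * 0.5f);
    free(in); free(wt); free(out);
    return 0;
}
```

The split mirrors the division of labor the two approaches in the abstract imply: the microkernel captures architecture-specific expertise (the library approach), while the loop order, tiling, and layout around it form the search space a compiler can explore automatically.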
