Search Results for author: Sasikanth Avancha

Found 17 papers, 5 papers with code

DistGNN-MB: Distributed Large-Scale Graph Neural Network Training on x86 via Minibatch Sampling

no code implementations • 11 Nov 2022 • Md Vasimuddin, Ramanarayan Mohanty, Sanchit Misra, Sasikanth Avancha

DistGNN-MB trains GraphSAGE and GAT 10x and 17.2x faster, respectively, as compute nodes scale from 2 to 32.
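The minibatch sampling named in the title can be illustrated with DGL's stock neighbor-sampling utilities. The snippet below is a minimal single-node sketch assuming a recent DGL (>= 0.8) with the PyTorch backend; the toy graph, feature sizes, fan-outs, and layer widths are placeholders, not the paper's distributed x86 implementation.

```python
import torch
import dgl
from dgl.nn import SAGEConv

# Toy graph and features (placeholders, not the paper's datasets).
g = dgl.rand_graph(1000, 5000)
g.ndata["feat"] = torch.randn(1000, 16)
train_nids = torch.arange(1000)

# Sample 2-hop neighborhoods; fan-outs are arbitrary here.
sampler = dgl.dataloading.NeighborSampler([10, 10])
loader = dgl.dataloading.DataLoader(
    g, train_nids, sampler, batch_size=64, shuffle=True)

conv1 = SAGEConv(16, 16, "mean")
conv2 = SAGEConv(16, 2, "mean")

for input_nodes, output_nodes, blocks in loader:
    # Each minibatch is a pair of bipartite "blocks", one per hop.
    h = g.ndata["feat"][input_nodes]
    h = torch.relu(conv1(blocks[0], h))
    logits = conv2(blocks[1], h)
    break  # one step is enough for the sketch
```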

DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks

no code implementations • 14 Apr 2021 • Vasimuddin Md, Sanchit Misra, Guixiang Ma, Ramanarayan Mohanty, Evangelos Georganas, Alexander Heinecke, Dhiraj Kalamkar, Nesreen K. Ahmed, Sasikanth Avancha

Full-batch training on Graph Neural Networks (GNN) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible.

graph partitioning
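The "graph partitioning" tag points at the key ingredient for distributing full-batch training across compute nodes. Below is a minimal sketch of METIS-based node partitioning with DGL, assuming a DGL build with METIS support; the toy graph and number of parts are placeholders, and only the partition assignment is shown, not DistGNN's communication scheme.

```python
import dgl

# Toy graph (placeholder for a large real-world graph).
g = dgl.rand_graph(1000, 5000)

# Assign each node to one of 4 parts with METIS, so each compute node
# can own one part during distributed full-batch training.
parts = dgl.metis_partition_assignment(g, 4)
print(parts.shape, parts.unique())
```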

Deep Graph Library Optimizations for Intel(R) x86 Architecture

1 code implementation • 13 Jul 2020 • Sasikanth Avancha, Vasimuddin Md, Sanchit Misra, Ramanarayan Mohanty

The Deep Graph Library (DGL) was designed as a tool to enable structure learning from graphs by supporting a core abstraction for graphs, including the popular Graph Neural Networks (GNNs).
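As a rough illustration of the graph abstraction and GNN layers these optimizations target, here is a minimal DGL usage sketch (PyTorch backend assumed; the tiny graph and feature sizes are invented for the example).

```python
import torch
import dgl
from dgl.nn import GraphConv

# A tiny 4-node graph given as (src, dst) edge lists.
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])))
g = dgl.add_self_loop(g)             # GraphConv rejects 0-in-degree nodes
g.ndata["feat"] = torch.randn(4, 8)  # 8-dimensional node features

# One graph-convolution layer: the aggregate-and-update pattern whose
# CPU kernels the paper optimizes for x86.
conv = GraphConv(8, 4)
h = conv(g, g.ndata["feat"])
print(h.shape)  # (4, 4)
```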

PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives

no code implementations • 6 Feb 2020 • Sanket Tavarageri, Alexander Heinecke, Sasikanth Avancha, Gagandeep Goyal, Ramakrishna Upadrasta, Bharat Kaul

In this paper, we develop a hybrid solution to the development of deep learning kernels that achieves the best of both worlds: expert-coded microkernels are used for the innermost loops, while advanced polyhedral technology automatically tunes the outer loops for performance.
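The split described in the abstract, expert microkernels inside and automatically tuned loops outside, can be sketched in a few lines. The toy Python illustration below uses numpy.dot as a stand-in for an expert-coded microkernel, with the blocked outer loops playing the role a polyhedral tool would tune; the block sizes are arbitrary.

```python
import numpy as np

def blocked_matmul(A, B, mb=32, nb=32, kb=32):
    """Outer loops over blocks (the part a polyhedral tool would tune),
    with an 'expert microkernel' -- here just np.dot -- doing the
    innermost block-times-block work."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, mb):          # outer loops: tiling and order
        for j in range(0, N, nb):      # chosen automatically for locality
            for k in range(0, K, kb):
                C[i:i+mb, j:j+nb] += np.dot(
                    A[i:i+mb, k:k+kb],  # microkernel: expert-coded in
                    B[k:k+kb, j:j+nb])  # practice, np.dot in this sketch
    return C

A = np.random.rand(128, 96).astype(np.float32)
B = np.random.rand(96, 64).astype(np.float32)
assert np.allclose(blocked_matmul(A, B), A @ B, atol=1e-4)
```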

SEERL: Sample Efficient Ensemble Reinforcement Learning

no code implementations • 15 Jan 2020 • Rohan Saphal, Balaraman Ravindran, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul

However, ensemble methods are relatively less popular in reinforcement learning owing to the high sample complexity and computational expense involved in obtaining a diverse ensemble.

Continuous Control · Ensemble Learning +3

Anatomy Of High-Performance Deep Learning Convolutions On SIMD Architectures

2 code implementations • 16 Aug 2018 • Evangelos Georganas, Sasikanth Avancha, Kunal Banerjee, Dhiraj Kalamkar, Greg Henry, Hans Pabst, Alexander Heinecke

Convolution layers are prevalent in many classes of deep neural networks, including Convolutional Neural Networks (CNNs), which provide state-of-the-art results for tasks like image recognition, neural machine translation, and speech recognition.

Distributed, Parallel, and Cluster Computing
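For reference, the convolution primitive the paper above optimizes can be written as a naive loop nest. The sketch below (NumPy, NCHW layout, unit stride, no padding, made-up sizes) only shows the loop structure that SIMD-friendly implementations block and vectorize.

```python
import numpy as np

def conv2d_naive(x, w):
    """Direct convolution: x is (N, C, H, W), w is (K, C, R, S).
    Unit stride, no padding -- the baseline loop nest that
    high-performance SIMD kernels reorganize and vectorize."""
    N, C, H, W = x.shape
    K, _, R, S = w.shape
    out = np.zeros((N, K, H - R + 1, W - S + 1), dtype=x.dtype)
    for n in range(N):
        for k in range(K):
            for h in range(H - R + 1):
                for wo in range(W - S + 1):
                    out[n, k, h, wo] = np.sum(
                        x[n, :, h:h+R, wo:wo+S] * w[k])
    return out

x = np.random.rand(1, 3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
print(conv2d_naive(x, w).shape)  # (1, 4, 6, 6)
```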

Hierarchical Block Sparse Neural Networks

no code implementations • 10 Aug 2018 • Dharma Teja Vooturi, Dheevatsa Mudigere, Sasikanth Avancha

In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of sparse neural networks called HBsNN (Hierarchical Block sparse Neural Networks).
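To give a rough sense of what "block sparsity" means in this setting, whole blocks of a weight matrix are zeroed rather than individual entries, keeping the nonzero structure hardware-friendly. The NumPy sketch below is a generic block-sparse mask, not the HBsNN hierarchy itself; the matrix size, block size, and density are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense weight matrix (sizes are arbitrary for the example).
W = rng.standard_normal((64, 64)).astype(np.float32)

# Keep only ~25% of 8x8 blocks: block-level rather than element-level sparsity.
bs = 8
blocks_per_dim = W.shape[0] // bs
keep = rng.random((blocks_per_dim, blocks_per_dim)) < 0.25
mask = np.kron(keep.astype(np.uint8), np.ones((bs, bs), dtype=np.uint8)).astype(bool)
W_sparse = np.where(mask, W, 0.0)

print("fraction of nonzero entries:", (W_sparse != 0).mean())
```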

On Scale-out Deep Learning Training for Cloud and HPC

no code implementations • 24 Jan 2018 • Srinivas Sridharan, Karthikeyan Vaidyanathan, Dhiraj Kalamkar, Dipankar Das, Mikhail E. Smorkalov, Mikhail Shiryaev, Dheevatsa Mudigere, Naveen Mellempudi, Sasikanth Avancha, Bharat Kaul, Pradeep Dubey

The exponential growth in use of large deep neural networks has accelerated the need for training these deep neural networks in hours or even minutes.

Philosophy

RAIL: Risk-Averse Imitation Learning

1 code implementation • 20 Jul 2017 • Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul

Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories.

Autonomous Driving · Continuous Control +1
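GAIL's core mechanism, referenced in the snippet above, is a discriminator trained to tell expert (state, action) pairs from the policy's. The PyTorch sketch below shows only that discriminator update; the dimensions, data, and network are invented, and GAIL's policy-update step as well as RAIL's risk-averse modification are omitted.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2  # placeholder dimensions

# Discriminator D(s, a): high logits for expert-like pairs, low for policy pairs.
disc = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-ins for a batch of expert and policy (state, action) pairs.
expert_sa = torch.randn(32, state_dim + action_dim)
policy_sa = torch.randn(32, state_dim + action_dim)

# One discriminator step: expert pairs labeled 1, policy pairs labeled 0.
loss = bce(disc(expert_sa), torch.ones(32, 1)) + \
       bce(disc(policy_sa), torch.zeros(32, 1))
opt.zero_grad()
loss.backward()
opt.step()

# The discriminator score then serves as a surrogate reward for the
# policy-gradient step (not shown here).
```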

Distributed Deep Learning Using Synchronous Stochastic Gradient Descent

no code implementations • 22 Feb 2016 • Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidyanathan, Srinivas Sridharan, Dhiraj Kalamkar, Bharat Kaul, Pradeep Dubey

We design and implement a distributed multinode synchronous SGD algorithm, without altering hyperparameters, compressing data, or altering algorithmic behavior.
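The synchronous multi-node scheme described here boils down to averaging gradients across workers every step. A minimal sketch with torch.distributed follows; the model, data, and the gloo/env:// initialization are placeholders, and the paper's communication optimizations are omitted.

```python
import torch
import torch.distributed as dist

def sync_sgd_step(model, loss, optimizer):
    """One synchronous data-parallel SGD step: each worker computes
    gradients on its shard, then gradients are all-reduced and averaged
    so every worker applies the same update (no hyperparameter changes,
    no gradient compression)."""
    optimizer.zero_grad()
    loss.backward()
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size
    optimizer.step()

# Typical setup (placeholders): one process per node/rank.
# dist.init_process_group(backend="gloo", init_method="env://")
# model = ...; optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# loss = compute_loss(model, batch); sync_sgd_step(model, loss, optimizer)
```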
