Search Results for author: Sangkug Lym

Found 5 papers, 3 papers with code

Reducing Activation Recomputation in Large Transformer Models

3 code implementations • 10 May 2022 • Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, Bryan Catanzaro

In this paper, we show how to significantly accelerate training of large transformer models by reducing activation recomputation.
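For context on the term, the sketch below shows full activation checkpointing in PyTorch via torch.utils.checkpoint, the conventional recomputation scheme whose overhead the paper reduces. The paper's actual techniques (sequence parallelism and selective recomputation) are not reproduced here, and the model dimensions and block count are placeholders.

```python
# Minimal sketch of full activation recomputation (checkpointing):
# each checkpointed block discards its activations in the forward pass
# and recomputes them during backward, trading compute for memory.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class TransformerBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))

blocks = nn.ModuleList([TransformerBlock() for _ in range(4)])
x = torch.randn(2, 128, 512, requires_grad=True)
for block in blocks:
    # use_reentrant=False is the recommended checkpointing mode
    x = checkpoint(block, x, use_reentrant=False)
x.sum().backward()
```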

FlexSA: Flexible Systolic Array Architecture for Efficient Pruned DNN Model Training

no code implementations • 27 Apr 2020 • Sangkug Lym, Mattan Erez

Based on our evaluation, FlexSA with the proposed compilation heuristic improves compute-resource utilization when pruning and training modern CNN models by 37% compared to a conventional training accelerator with a large systolic array.
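As a rough illustration of the utilization problem FlexSA targets (this toy model is not from the paper, and the GEMM and array sizes are hypothetical): a fixed systolic array pads every GEMM up to whole tiles, so the shrinking layers produced by structured pruning leave a growing fraction of PEs idle.

```python
# Toy utilization model: an (M x K) @ (K x N) GEMM mapped onto a PE
# array of shape (rows, cols) is processed in ceil(M/rows)*ceil(N/cols)
# tiles; ragged edge tiles cycle through PEs that do no useful work.
import math

def systolic_utilization(M, K, N, rows, cols):
    """Fraction of PE cycles doing useful MACs for the GEMM."""
    tiles = math.ceil(M / rows) * math.ceil(N / cols)
    useful = M * N * K                # MACs the GEMM actually needs
    issued = tiles * rows * cols * K  # MAC slots the array cycles through
    return useful / issued

# A layer that fills the array vs. a pruned layer with ragged edges:
print(systolic_utilization(256, 256, 256, 128, 128))  # 1.0
print(systolic_utilization(160, 256, 96, 128, 128))   # ~0.47
```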

PruneTrain: Fast Neural Network Training by Dynamic Sparse Model Reconfiguration

1 code implementation • 26 Jan 2019 • Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Sujay Sanghavi, Mattan Erez

State-of-the-art convolutional neural networks (CNNs) used in vision applications have large models with numerous weights.
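PruneTrain drives whole channels toward zero with group lasso regularization and periodically removes them to shrink the model during training. The sketch below shows a channel-wise group lasso penalty in PyTorch under that reading; the coefficient lam is a hypothetical placeholder, and the paper's periodic reconfiguration step is not shown.

```python
# Channel-wise group lasso: one L2-norm group per conv output channel,
# so the regularizer pushes entire channels toward zero together.
import torch
import torch.nn as nn

def group_lasso_penalty(model, lam=1e-4):
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # flatten each output channel's (in, kH, kW) weights into a group
            groups = m.weight.flatten(start_dim=1)
            penalty = penalty + groups.norm(dim=1).sum()
    return lam * penalty

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
x = torch.randn(8, 3, 32, 32)
loss = model(x).mean() + group_lasso_penalty(model)
loss.backward()
```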

Mini-batch Serialization: CNN Training with Inter-layer Data Reuse

1 code implementation • 30 Sep 2018 • Sangkug Lym, Armand Behroozi, Wei Wen, Ge Li, Yongkee Kwon, Mattan Erez

Training convolutional neural networks (CNNs) requires intense computations and high memory bandwidth.
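Mini-batch serialization keeps inter-layer activations on chip by pushing small sub-batches through the memory-intensive early layers one at a time. The PyTorch sketch below only mimics that schedule at the framework level; the paper's technique is an accelerator scheduling method, and sub_batch=8 here is an arbitrary placeholder.

```python
# Mimic of the serialized schedule: run each sub-batch through the
# activation-heavy convolutional layers so its working set stays small,
# then run the later layers on the re-assembled mini-batch.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
classifier = nn.Linear(64, 10)

def forward_serialized(x, sub_batch=8):
    feats = [features(chunk) for chunk in x.split(sub_batch)]
    return classifier(torch.cat(feats))

out = forward_serialized(torch.randn(32, 3, 32, 32))  # shape (32, 10)
```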
