2 code implementations • 12 Sep 2022 • Paulius Micikevicius, Dusan Stosic, Neil Burgess, Marius Cornea, Pradeep Dubey, Richard Grisenthwaite, Sangwon Ha, Alexander Heinecke, Patrick Judd, John Kamalu, Naveen Mellempudi, Stuart Oberman, Mohammad Shoeybi, Michael Siu, Hao Wu
FP8 is a natural progression for accelerating deep learning training and inference beyond the 16-bit formats common in modern processors.
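The snippet above only names the FP8 formats; as a rough illustration of the precision/range trade-off involved, the sketch below rounds values onto a generic low-precision floating-point grid with a configurable exponent/mantissa split. The paper's actual E4M3/E5M2 encodings handle special values and range differently, and that is not modeled here.

```python
import numpy as np

def round_to_low_precision_float(x, exp_bits=4, man_bits=3):
    """Illustrative round-to-nearest onto a generic low-precision float grid.
    Subnormals, NaN/Inf handling and the exact FP8 encodings defined in the
    paper are deliberately omitted; this only shows the precision/range idea."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1
    max_normal = (2.0 - 2.0 ** -man_bits) * 2.0 ** bias  # largest IEEE-style normal
    mant, exp = np.frexp(x)            # x = mant * 2**exp, with mant in [0.5, 1)
    step = 2.0 ** (man_bits + 1)       # keep man_bits fractional significand bits
    mant = np.round(mant * step) / step
    y = np.ldexp(mant, exp)
    return np.clip(y, -max_normal, max_normal)  # saturate on overflow

print(round_to_low_precision_float([0.1, 1.337, 300.0, 1e6]))
```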
1 code implementation • NeurIPS Workshop ICBINB 2021 • Guoxuan Xia, Sangwon Ha, Tiago Azevedo, Partha Maji
We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and may be improved with overconfidence.
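The sentence above refers to the calibration behaviour of modern CNNs without defining how it is measured; expected calibration error (ECE) is one standard way to quantify it, sketched below on toy data. This is a generic illustration, not necessarily the metric or method used in the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Generic ECE: bin predictions by confidence and compare the average
    confidence with the empirical accuracy inside each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of samples in the bin
    return ece

# Toy example: a slightly overconfident model.
conf = np.array([0.9, 0.8, 0.95, 0.7, 0.85])
hit = np.array([1, 1, 0, 1, 1])
print(expected_calibration_error(conf, hit))
```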
no code implementations • 13 May 2021 • Lorena Qendro, Sangwon Ha, René de Jong, Partha Maji
Quantized neural networks (NNs) are the common standard for efficiently deploying deep learning models on tiny hardware platforms.
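As a concrete example of the kind of quantization the abstract refers to, the sketch below applies uniform symmetric int8 quantization to a weight tensor. This is a generic recipe for small hardware targets, not necessarily the scheme used in the paper.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric int8 quantization of a weight tensor (illustrative)."""
    scale = max(np.abs(w).max(), 1e-8) / 127.0   # guard against all-zero tensors
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - dequantize(q, s)).max())   # worst-case quantization error
```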
no code implementations • 12 Oct 2018 • Hyunsun Park, Jun Haeng Lee, Youngmin Oh, Sangwon Ha, Seungwon Lee
Energy- and resource-efficient training of DNNs will greatly extend the applications of deep learning.
no code implementations • ICLR 2019 • Jun Haeng Lee, Sangwon Ha, Saerom Choi, Won-Jo Lee, Seungwon Lee
This paper aims at rapid deployment of state-of-the-art deep neural networks (DNNs) to energy-efficient accelerators without time-consuming fine-tuning or access to the full datasets.
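The abstract emphasizes deployment without fine-tuning or the full training data; a common ingredient of such post-training quantization pipelines is estimating activation clipping ranges from a handful of unlabeled calibration samples, sketched below. The helper names are hypothetical and the paper's exact procedure may differ.

```python
import numpy as np

def calibrate_activation_range(samples, percentile=99.9):
    """Estimate an activation clipping range from a few calibration samples
    (illustrative; hypothetical helper, not the paper's procedure)."""
    flat = np.concatenate([np.abs(s).ravel() for s in samples])
    return np.percentile(flat, percentile)

def quantize_activations(x, max_abs, bits=8):
    """Fake-quantize activations to the estimated range with uniform steps."""
    levels = 2 ** (bits - 1) - 1
    scale = max_abs / levels
    return np.clip(np.round(x / scale), -levels, levels) * scale

calib = [np.random.randn(64, 128) for _ in range(8)]   # a few unlabeled samples
clip_val = calibrate_activation_range(calib)
print(quantize_activations(np.random.randn(4, 128) * 3, clip_val)[0, :8])
```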