no code implementations • ICLR 2019 • Hojung Lee, Jong-Seok Lee
This paper proposes a novel approach to train deep neural networks by unlocking the layer-wise dependency of backpropagation training.
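The decoupling described here can be sketched as local training with auxiliary heads: each block gets its own classifier and loss, and features are detached before being passed to the next block, so no gradient crosses block boundaries. This is a minimal illustration of the general idea under assumptions, not the paper's exact method; the block sizes, auxiliary-head design, and input shapes below are illustrative.

```python
# Minimal sketch: layer-wise training with local auxiliary classifiers.
# Assumed details: block/head architecture, optimizer, 10-class toy data.
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    """A conv block paired with an auxiliary head that provides a local loss."""
    def __init__(self, in_ch, out_ch, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(out_ch, num_classes)
        )

    def forward(self, x):
        h = self.body(x)
        return h, self.head(h)

blocks = nn.ModuleList([LocalBlock(3, 32), LocalBlock(32, 64)])
opts = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
for block, opt in zip(blocks, opts):
    h, logits = block(x)
    loss = criterion(logits, y)   # local loss: no gradient reaches earlier blocks
    opt.zero_grad()
    loss.backward()
    opt.step()
    x = h.detach()                # detach breaks the layer-wise dependency
```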
no code implementations • WS 2019 • Hyungtak Choi, Lohith Ravuru, Tomasz Dryjański, Sunghan Rye, Dong-Hyun Lee, Hojung Lee, Inchul Hwang
This paper describes our submission to the TL;DR challenge.
1 code implementation • 3 Feb 2021 • Hojung Lee, Cho-Jui Hsieh, Jong-Seok Lee
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
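One common way to decouple layer-group updates is to give the earlier group a small local "critic" head that stands in for the downstream loss, so both groups can update from a single forward pass without waiting for a full end-to-end backward. The sketch below follows that pattern as an assumption; the critic design, its mimicry loss, and the toy MLP groups are illustrative choices, not necessarily the paper's exact mechanism.

```python
# Hedged sketch: decoupled updates for two layer groups via a local critic.
# Assumed details: linear critic, MSE mimicry loss, toy 32-dim inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

group1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
group2 = nn.Sequential(nn.Linear(64, 10))
critic = nn.Linear(64, 10)          # stands in for group2's output at the cut point

opt1 = torch.optim.SGD(group1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(group2.parameters(), lr=0.1)
opt_c = torch.optim.SGD(critic.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

# group1 updates immediately, using the critic's estimate of the final loss
h = group1(x)
loss1 = ce(critic(h), y)
opt1.zero_grad(); loss1.backward(); opt1.step()

# group2 updates from the true loss on detached features, independently
logits = group2(h.detach())
loss2 = ce(logits, y)
opt2.zero_grad(); loss2.backward(); opt2.step()

# the critic learns to mimic the final logits so its gradient stays informative
loss_c = F.mse_loss(critic(h.detach()), logits.detach())
opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```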
1 code implementation • 1 Apr 2021 • Hojung Lee, Jong-Seok Lee
This paper proposes a novel knowledge distillation-based learning method, called exit-ensemble distillation, which improves the classification performance of convolutional neural networks (CNNs) without requiring a pre-trained teacher network.
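Reading the name literally, exit-ensemble distillation suggests a multi-exit network whose averaged exit predictions act as an on-the-fly teacher for every exit, removing the need for a separate pre-trained teacher. The sketch below implements that reading under stated assumptions: the two-stage architecture, exit placement, temperature `T`, and loss weighting `alpha` are all illustrative choices.

```python
# Hedged sketch: a two-exit CNN distilled against its own exit ensemble.
# Assumed details: architecture, T=3.0, alpha=0.5, 10-class toy data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, num_classes))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(64, num_classes))

    def forward(self, x):
        h1 = self.stage1(x)
        h2 = self.stage2(h1)
        return [self.exit1(h1), self.exit2(h2)]

def exit_ensemble_loss(logits_list, y, T=3.0, alpha=0.5):
    """Each exit learns from the labels and from the (detached) ensemble teacher."""
    teacher = torch.stack(logits_list).mean(0).detach()
    loss = 0.0
    for logits in logits_list:
        ce = F.cross_entropy(logits, y)
        kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                      F.softmax(teacher / T, dim=1),
                      reduction="batchmean") * T * T
        loss = loss + alpha * ce + (1 - alpha) * kd
    return loss

model = MultiExitNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = exit_ensemble_loss(model(x), y)
loss.backward()
```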