Search Results for author: SungHyun Baek

Found 2 papers, 1 paper with code

AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation

no code implementations · 20 Nov 2022 · Hyungmin Kim, Sungho Suh, SungHyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim

Our model not only distills deterministic and progressive knowledge, derived from the pre-trained network's and the previous epoch's predictive probabilities, but also transfers the knowledge of the deterministic predictive distributions using adversarial learning.

Self-Knowledge Distillation
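The abstract describes two distillation signals (deterministic knowledge from a pre-trained network and progressive knowledge from the previous epoch's predictions) plus an adversarial term. A minimal NumPy sketch of such a combined loss is below; the weighting coefficients, temperature, and the non-saturating discriminator term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch.
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def ai_kd_loss(student_logits, pretrained_logits, prev_epoch_logits,
               disc_score, alpha=0.5, beta=0.5, gamma=0.1, t=4.0):
    """Hypothetical combined loss: deterministic KD (pre-trained teacher),
    progressive KD (previous-epoch predictions), and an adversarial term
    rewarding the student for fooling a discriminator whose output on the
    student's distribution is disc_score in (0, 1].
    alpha/beta/gamma/t are illustrative, not taken from the paper."""
    p_s = softmax(student_logits, t)
    loss_det = kl(softmax(pretrained_logits, t), p_s)    # deterministic knowledge
    loss_prog = kl(softmax(prev_epoch_logits, t), p_s)   # progressive knowledge
    loss_adv = -float(np.mean(np.log(disc_score + 1e-12)))  # non-saturating GAN-style term
    return alpha * loss_det + beta * loss_prog + gamma * loss_adv
```

When the student already matches both teachers and fully fools the discriminator, every term vanishes; any mismatch makes the loss strictly positive.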

An empirical study of a pruning mechanism

1 code implementation · 1 Jan 2021 · Minju Jung, Hyounguk Shon, Eojindl Yi, SungHyun Baek, Junmo Kim

For the pruning and retraining phase, we examine whether the pruned-and-retrained network indeed benefits from the pretrained network.

Network Pruning
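The prune-then-retrain mechanism the abstract refers to can be sketched with global magnitude pruning followed by masked gradient updates; this is a generic illustration of the pipeline, not the paper's specific method, and the function names are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.
    Returns (pruned_weights, mask); the mask is reused during retraining so
    pruned connections stay at zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def retrain_step(weights, grad, mask, lr=0.1):
    # One masked SGD step: only surviving weights are updated;
    # multiplying by the mask keeps pruned entries pinned at zero.
    return (weights - lr * grad) * mask
```

Repeating `retrain_step` with the fixed mask is the "retraining" half of the phase; comparing the result against a pruned network retrained from random initialization is one way to test whether the pretrained weights actually help.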
