Search Results for author: Daeshik Kim

Found 5 papers, 0 papers with code

Maximizing Discrimination Capability of Knowledge Distillation with Energy Function

no code implementations · 24 Nov 2023 · Seonghak Kim, Gyeongdo Ham, SuIn Lee, Donggon Jang, Daeshik Kim

To distill optimal knowledge by adjusting non-target class predictions, we apply a higher temperature to low-energy samples, creating smoother distributions, and a lower temperature to high-energy samples, achieving sharper distributions.

Data Augmentation · Knowledge Distillation
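The abstract above describes temperature scaling driven by a sample's energy. The sketch below is a minimal numpy illustration of that idea, not the paper's implementation: the energy function is the standard free-energy score `-logsumexp(logits)`, while the threshold and the two temperature values are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def energy(logits):
    # Free energy of the logits: E = -logsumexp(logits).
    # Confident (high-logit) samples tend to have low energy.
    z = np.asarray(logits, dtype=float)
    return -(z.max() + np.log(np.exp(z - z.max()).sum()))

def energy_scaled_probs(logits, T_smooth=4.0, T_sharp=1.0, threshold=0.0):
    # Low-energy sample  -> higher temperature -> smoother distribution;
    # high-energy sample -> lower temperature  -> sharper distribution.
    # threshold, T_smooth, and T_sharp are assumed values for illustration.
    T = T_smooth if energy(logits) < threshold else T_sharp
    return softmax(np.asarray(logits, dtype=float) / T)
```

For a confident sample such as `[5.0, 1.0, 0.0]` the energy is negative, so the higher temperature is applied and the resulting distribution is smoother than the plain softmax.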

Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning

no code implementations · 23 Nov 2023 · Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim

Efficient and lightweight models (i.e., student models) are improved through knowledge distillation (KD), which transfers knowledge from more complex models (i.e., teacher models).

Data Augmentation · Knowledge Distillation · +1
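For readers unfamiliar with the teacher-to-student transfer the abstract mentions, a minimal numpy sketch of the standard Hinton-style distillation loss is shown below. This is the generic KD objective, not this paper's correlation-distance variant; the temperature `T=4.0` is an assumed value.

```python
import numpy as np

def softened_probs(logits, T):
    # Temperature-softened softmax over a logit vector.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Hinton-style distillation term: KL(teacher_T || student_T), scaled by
    # T^2 so gradient magnitudes stay comparable across temperatures.
    p_t = softened_probs(teacher_logits, T)
    p_s = softened_probs(student_logits, T)
    return float((T ** 2) * np.sum(p_t * (np.log(p_t) - np.log(p_s))))
```

The loss is zero when student and teacher logits match and positive otherwise, which is what drives the student toward the teacher's softened predictions.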

Stochastic Quantized Activation: To prevent Overfitting in Fast Adversarial Training

no code implementations · ICLR 2019 · Wonjun Yoon, Jisuk Park, Daeshik Kim

Existing neural networks are vulnerable to "adversarial examples": inputs with maliciously designed small perturbations that induce misclassification by the networks.
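As a concrete illustration of the perturbations the abstract describes, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy linear classifier. This is a generic example of crafting an adversarial input, not this paper's method; the weights, input, and epsilon are assumed values.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    # Fast Gradient Sign Method: add an eps-bounded perturbation in the
    # direction that increases the loss.
    return x + eps * np.sign(grad)

# Toy linear classifier with a hinge loss; label y is in {-1, +1}.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0

def hinge_loss(w, x, y):
    return max(0.0, 1.0 - y * float(w @ x))

# When the hinge is active, the loss gradient w.r.t. the input is -y * w.
x_adv = fgsm_perturb(x, -y * w, eps=0.1)
```

Even though each coordinate of `x_adv` moves by at most `eps`, the hinge loss on the perturbed input is strictly larger than on the original, i.e., a small perturbation measurably degrades the classifier.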
