no code implementations • 24 Nov 2023 • Seonghak Kim, Gyeongdo Ham, SuIn Lee, Donggon Jang, Daeshik Kim
To distill optimal knowledge by adjusting non-target class predictions, we apply a higher temperature to low energy samples to create smoother distributions and a lower temperature to high energy samples to achieve sharper distributions.
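Below is a minimal sketch of what such energy-based temperature scaling could look like, assuming the energy score E(x) = -logsumexp(z) over the teacher logits and a simple linear mapping from batch-normalized energy to a temperature range; the mapping and the `t_high`/`t_low` bounds are illustrative assumptions, not the paper's exact schedule.

```python
import torch
import torch.nn.functional as F

def energy_scaled_softmax(teacher_logits, t_high=8.0, t_low=2.0):
    """Per-sample temperature from the energy score E(x) = -logsumexp(z).

    Low-energy samples get the higher temperature (smoother targets),
    high-energy samples the lower one (sharper targets). The linear
    mapping below is an assumed heuristic for illustration.
    """
    energy = -torch.logsumexp(teacher_logits, dim=1)            # (B,)
    # Normalize energies to [0, 1] within the batch (assumption).
    e = (energy - energy.min()) / (energy.max() - energy.min() + 1e-8)
    # Low energy (e ~ 0) -> t_high; high energy (e ~ 1) -> t_low.
    temps = t_high + e * (t_low - t_high)                       # (B,)
    return F.softmax(teacher_logits / temps.unsqueeze(1), dim=1)
```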
no code implementations • 24 Nov 2023 • Gyeongdo Ham, Seonghak Kim, SuIn Lee, Jae-Hyeok Lee, Daeshik Kim
Furthermore, we propose a method called cosine similarity weighted temperature (CSWT) to further improve distillation performance.
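A hypothetical sketch of a cosine-similarity weighted temperature is shown below, assuming the per-sample temperature is interpolated from the cosine similarity between student and teacher logit vectors; the direction and range of this mapping are assumptions inferred from the method's name, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def cswt_targets(student_logits, teacher_logits, t_min=2.0, t_max=6.0):
    """Assumed sketch: temperature weighted by student-teacher
    cosine similarity. The interpolation direction and the
    [t_min, t_max] range are illustrative choices."""
    cos = F.cosine_similarity(student_logits, teacher_logits, dim=1)  # (B,), in [-1, 1]
    w = (cos + 1) / 2                                                 # rescale to [0, 1]
    temps = t_min + w * (t_max - t_min)                               # (B,)
    return F.softmax(teacher_logits / temps.unsqueeze(1), dim=1)
```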
no code implementations • 23 Nov 2023 • Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim
The performance of efficient, lightweight models (i.e., the student model) is improved through knowledge distillation (KD), which transfers knowledge from more complex models (i.e., the teacher model).
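For reference, the standard Hinton-style distillation loss that this line of work builds on (not this paper's specific method) blends a temperature-softened KL term with the usual cross-entropy on ground-truth labels:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic KD objective: KL(teacher_soft || student_soft) at
    temperature T (scaled by T^2 to keep gradient magnitudes
    comparable) plus hard-label cross-entropy."""
    soft_t = F.softmax(teacher_logits / T, dim=1)
    log_soft_s = F.log_softmax(student_logits / T, dim=1)
    kl = F.kl_div(log_soft_s, soft_t, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1 - alpha) * ce
```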
no code implementations • 7 Jun 2021 • Goutam Bhat, Martin Danelljan, Radu Timofte, Kazutoshi Akita, Wooyeong Cho, Haoqiang Fan, Lanpeng Jia, Daeshik Kim, Bruno Lecouat, Youwei Li, Shuaicheng Liu, Ziluan Liu, Ziwei Luo, Takahiro Maeda, Julien Mairal, Christian Micheloni, Xuan Mo, Takeru Oba, Pavel Ostyakov, Jean Ponce, Sanghyeok Son, Jian Sun, Norimichi Ukita, Rao Muhammad Umer, Youliang Yan, Lei Yu, Magauiya Zhussip, Xueyi Zou
This paper reviews the NTIRE 2021 challenge on burst super-resolution.
no code implementations • ICLR 2019 • Wonjun Yoon, Jisuk Park, Daeshik Kim
Existing neural networks are vulnerable to "adversarial examples", created by adding maliciously designed small perturbations to inputs to induce a misclassification by the network.
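As a concrete illustration of such perturbations, here is the classic Fast Gradient Sign Method (Goodfellow et al., 2015), one standard way to craft adversarial examples; it is shown for context only and is not the defense studied in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One signed-gradient step of size eps in the direction that
    increases the classification loss, clamped to a valid image range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```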