1 code implementation • 6 Jan 2024 • Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saeroom Park
Diffusion models have shown their effectiveness in generation tasks by closely approximating the underlying probability distribution.
1 code implementation • 9 Jun 2023 • Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee
Training deep learning models with differential privacy (DP) results in a degradation of performance.
no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee
Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
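The SAM objective mentioned above can be illustrated with a minimal sketch: ascend to the approximate worst point within a radius rho of the current weights, then descend using the gradient taken there. The 1-D toy loss, rho, and learning rate below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of SAM's two-step update on a 1-D toy loss f(w) = (w - 3)^2.
# rho (perturbation radius) and lr are illustrative, not from the paper.

def loss_grad(w):
    """Gradient of the toy loss f(w) = (w - 3)**2."""
    return 2.0 * (w - 3.0)

def sam_step(w, rho=0.05, lr=0.1):
    g = loss_grad(w)
    # Step 1: perturb toward the (approximate) worst point within radius rho.
    eps = rho * (1.0 if g > 0 else -1.0)  # g / |g| in one dimension
    # Step 2: descend using the gradient evaluated at the perturbed point.
    g_sam = loss_grad(w + eps)
    return w - lr * g_sam

w = 0.0
for _ in range(100):
    w = sam_step(w)
# w settles within roughly rho of the minimum at 3.0
```

Because the descent direction is taken at the perturbed point, the iterate oscillates within a band of width about rho around the minimizer rather than converging exactly, which is the price of seeking flatness.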
no code implementations • 16 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Jaewook Lee
Utilizing the qualitative theory of dynamical systems, we explain how SAM becomes stuck at saddle points and then theoretically prove that a saddle point can become an attractor under SAM dynamics.
no code implementations • 18 Jun 2022 • Hoki Kim, Jinseong Park, Jaewook Lee
Adversarial attacks have demonstrated the vulnerability of neural networks.
no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee
Adversarial robustness is considered a required property of deep neural networks.
no code implementations • 6 Jul 2021 • Sungyoon Lee, Hoki Kim, Jaewook Lee
Our experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods.
1 code implementation • 5 Oct 2020 • Hoki Kim, Woojin Lee, Jaewook Lee
Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed.
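Fast adversarial training is built on the single-step FGSM perturbation: move the input one step of size eps in the sign of the input gradient. A minimal sketch on an assumed toy linear model (the model, values, and eps below are illustrative, not from the paper):

```python
# Sketch of the FGSM perturbation underlying fast adversarial training.
# The 1-feature linear model and eps are illustrative assumptions.
import math

def loss(x, w, y):
    """Squared error of a 1-feature linear model."""
    return (w * x - y) ** 2

def input_grad(x, w, y):
    """Gradient of the loss with respect to the input x."""
    return 2.0 * (w * x - y) * w

def fgsm(x, w, y, eps=0.1):
    # One signed step of size eps in the direction that increases the loss.
    g = input_grad(x, w, y)
    return x + eps * math.copysign(1.0, g)

x, w, y = 1.0, 2.0, 1.0
x_adv = fgsm(x, w, y)
# loss(x_adv, w, y) exceeds loss(x, w, y): the single step raises the loss
```

"Catastrophic overfitting" refers to models trained against this single-step attack suddenly losing robustness to stronger multi-step attacks, even while single-step robustness still looks fine.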
1 code implementation • 24 Sep 2020 • Hoki Kim
Torchattacks is a PyTorch library that contains adversarial attacks to generate adversarial examples and to verify the robustness of deep learning models.