Search Results for author: Hoki Kim

Found 9 papers, 4 papers with code

Fair Sampling in Diffusion Models through Switching Mechanism

1 code implementation • 6 Jan 2024 • Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saeroom Park

Diffusion models have shown their effectiveness in generation tasks by well-approximating the underlying probability distribution.

Attribute Fairness

Differentially Private Sharpness-Aware Training

1 code implementation • 9 Jun 2023 • Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee

Training deep learning models with differential privacy (DP) results in a degradation of performance.
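
For context, the performance drop comes from DP-SGD's mechanics: each per-example gradient is clipped to a fixed norm and Gaussian noise is added before the update. Below is a minimal sketch of that step in PyTorch, illustrative only and not the paper's sharpness-aware method; `clip_norm` and `noise_multiplier` are assumed hyperparameters.

```python
import torch

# Hedged sketch of one DP-SGD step: clip each per-example gradient,
# sum, add calibrated Gaussian noise, then average and update.
def dp_sgd_step(model, loss_fn, xs, ys, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    accum = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):  # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (float(total) + 1e-12))  # clip to clip_norm
        for a, g in zip(accum, grads):
            a.add_(g, alpha=scale)
    for p, a in zip(params, accum):  # noise scaled to the clip bound
        noise = torch.randn_like(a) * noise_multiplier * clip_norm
        p.grad = (a + noise) / len(xs)
    optimizer.step()
```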

Exploring the Effect of Multi-step Ascent in Sharpness-Aware Minimization

no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee

Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
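
For reference, SAM's standard inner loop takes one gradient-ascent step of radius rho before descending; this paper studies what happens with several ascent steps. A hedged PyTorch sketch follows; `rho` and `n_ascent` are assumed hyperparameters, and splitting `rho` evenly across ascent steps is one possible choice, not necessarily the paper's.

```python
import torch

# Hedged sketch of SAM with an optional multi-step ascent phase.
def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05, n_ascent=1):
    params = [p for p in model.parameters() if p.requires_grad]
    eps = [torch.zeros_like(p) for p in params]
    for _ in range(n_ascent):  # ascent: move toward a sharper nearby point
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
        with torch.no_grad():
            for p, e, g in zip(params, eps, grads):
                step = (rho / n_ascent) * g / norm
                p.add_(step)  # w <- w + step
                e.add_(step)  # remember the total perturbation
    base_optimizer.zero_grad()
    loss_fn(model(x), y).backward()  # gradient at the perturbed point
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                # restore the original weights
    base_optimizer.step()            # descend with the SAM gradient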

Stability Analysis of Sharpness-Aware Minimization

no code implementations • 16 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Jaewook Lee

Utilizing the qualitative theory of dynamical systems, we explain how SAM becomes stuck at saddle points and then theoretically prove that a saddle point can become an attractor under SAM dynamics.
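
The object under study is the standard SAM update read as a discrete-time dynamical system; in the usual notation (learning rate eta, perturbation radius rho, loss L, taken from the SAM literature rather than copied from this paper):

```latex
w_{t+1} = w_t - \eta \, \nabla L\!\left( w_t + \rho \, \frac{\nabla L(w_t)}{\lVert \nabla L(w_t) \rVert} \right)
```

The stability question is whether a saddle point of L, an unstable fixed point for plain gradient descent, can become a stable fixed point (attractor) of this map.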

Comment on Transferability and Input Transformation with Additive Noise

no code implementations • 18 Jun 2022 • Hoki Kim, Jinseong Park, Jaewook Lee

Adversarial attacks have demonstrated the vulnerability of neural networks.

Bridged Adversarial Training

no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee

Adversarial robustness is considered a required property of deep neural networks.

Adversarial Robustness

GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization

no code implementations • 6 Jul 2021 • Sungyoon Lee, Hoki Kim, Jaewook Lee

Our experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods.

Adversarial Robustness
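
The idea behind GradDiv is that a randomized network is harder to attack when the gradients obtained from different random draws point in dispersed directions. Below is a hedged illustration of measuring that alignment via mean pairwise cosine similarity of input gradients; this is not the paper's exact regularizer, and `n_samples` is an assumed parameter.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: sample several stochastic forward passes and penalize
# alignment of the resulting input gradients (higher = more aligned).
def grad_alignment_penalty(model, loss_fn, x, y, n_samples=4):
    grads = []
    for _ in range(n_samples):  # randomness comes from the model itself
        x_ = x.clone().requires_grad_(True)
        loss = loss_fn(model(x_), y)
        g, = torch.autograd.grad(loss, x_, create_graph=True)
        grads.append(g.flatten(1))
    penalty, count = 0.0, 0
    for i in range(n_samples):
        for j in range(i + 1, n_samples):  # mean pairwise cosine similarity
            penalty = penalty + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
            count += 1
    return penalty / count  # add to the training loss with some weight
```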

Understanding Catastrophic Overfitting in Single-step Adversarial Training

1 code implementation • 5 Oct 2020 • Hoki Kim, Woojin Lee, Jaewook Lee

Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed.
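
Fast adversarial training replaces the multi-step inner attack with a single FGSM step; catastrophic overfitting is the sudden collapse of robustness against multi-step attacks that can follow. A hedged sketch of the single-step training update, where `eps` is an assumed L-infinity budget and inputs are assumed to lie in [0, 1]:

```python
import torch

# Hedged sketch of single-step (FGSM) adversarial training.
def fgsm_train_step(model, loss_fn, x, y, optimizer, eps=8/255):
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # one ascent step
    optimizer.zero_grad()
    loss_fn(model(x_adv), y).backward()  # train on the adversarial batch
    optimizer.step()
```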

Torchattacks: A PyTorch Repository for Adversarial Attacks

1 code implementation • 24 Sep 2020 • Hoki Kim

Torchattacks is a PyTorch library that contains adversarial attacks to generate adversarial examples and to verify the robustness of deep learning models.
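
Typical usage wraps a trained model in an attack object and calls it on a batch. A hedged example with PGD follows; the hyperparameters are illustrative, and `model`, `images`, `labels` are assumed to be a classifier and a batch of inputs in [0, 1].

```python
import torchattacks

# Build a PGD attack around a trained classifier, craft adversarial
# examples for a batch, then measure robust accuracy on them.
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)
adv_images = atk(images, labels)
preds = model(adv_images).argmax(dim=1)
robust_acc = (preds == labels).float().mean().item()
```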
