Search Results for author: Jinseong Park

Found 9 papers, 4 papers with code

Fair Sampling in Diffusion Models through Switching Mechanism

1 code implementation • 6 Jan 2024 • Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saerom Park

Diffusion models have shown their effectiveness in generation tasks by well-approximating the underlying probability distribution.

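The excerpt frames diffusion models as approximators of the data distribution, with generation done by iterative denoising. As a rough illustration of that sampling loop, here is a hedged sketch of one generic DDPM-style reverse step; `model`, the noise schedule, and all names are assumptions, and this is not the paper's switching mechanism.

```python
# Hypothetical sketch: one generic DDPM-style reverse (denoising) step.
# `model` is assumed to predict the noise added at step t; betas follow
# a standard linear schedule. Not the paper's switching mechanism.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def reverse_step(model, x_t, t):
    """Sample x_{t-1} from x_t using the model's noise prediction."""
    eps = model(x_t, t)
    mean = (x_t - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    if t == 0:
        return mean                                  # no noise on the final step
    return mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
```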

Differentially Private Sharpness-Aware Training

1 code implementation • 9 Jun 2023 • Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee

Training deep learning models with differential privacy (DP) results in a degradation of performance.
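For context on where that degradation comes from, below is a hedged sketch of a plain DP-SGD step (per-sample gradient clipping plus calibrated Gaussian noise), the standard private-training baseline; it is not the paper's sharpness-aware method, and all names and shapes are illustrative assumptions.

```python
# Hypothetical sketch of a plain DP-SGD step: clip each example's
# gradient, sum, add Gaussian noise scaled to the clip norm, average.
# Generic baseline only; not the paper's sharpness-aware variant.
import torch

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    # per_sample_grads: one tensor per parameter, shaped (batch, *param_shape)
    batch_size = per_sample_grads[0].shape[0]
    for p, g in zip(params, per_sample_grads):
        # Clip each example's gradient to L2 norm <= clip_norm.
        norms = g.reshape(batch_size, -1).norm(dim=1).clamp(min=1e-12)
        scale = (clip_norm / norms).clamp(max=1.0)
        g = g * scale.view(-1, *([1] * (g.dim() - 1)))
        # Noise scale noise_mult * clip_norm calibrates the privacy guarantee.
        noisy = g.sum(0) + noise_mult * clip_norm * torch.randn_like(p)
        p.data -= lr * noisy / batch_size
```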

Exploring the Effect of Multi-step Ascent in Sharpness-Aware Minimization

no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee

Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
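For reference, here is a hedged sketch of the SAM update the paper studies, with the usual single ascent step generalized to k steps; splitting the radius rho evenly across steps is one plausible choice, not necessarily the paper's.

```python
# Hypothetical sketch: SAM with k inner ascent steps (standard SAM is k=1).
# Ascend toward the local worst case, take the gradient there, then apply
# it at the original weights.
import torch

def sam_multi_ascent_step(model, loss_fn, x, y, opt, rho=0.05, k=1):
    orig = [p.detach().clone() for p in model.parameters()]
    for _ in range(k):
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p.add_(g, alpha=rho / k / (norm + 1e-12))  # one ascent step
    # Gradient at the perturbed point, applied at the original point.
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, o in zip(model.parameters(), orig):
            p.copy_(o)
    opt.step()
```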

Stability Analysis of Sharpness-Aware Minimization

no code implementations • 16 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Jaewook Lee

Utilizing the qualitative theory of dynamical systems, we explain how SAM becomes stuck at saddle points and then theoretically prove that saddle points can become attractors under SAM dynamics.
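In symbols, the one-step SAM dynamics under analysis are the map

w_{t+1} = w_t - \eta\, \nabla L\!\left(w_t + \rho\, \frac{\nabla L(w_t)}{\lVert \nabla L(w_t) \rVert}\right)

so every critical point of L is a fixed point of this map (taking the perturbation to vanish where the gradient does), and the abstract's claim is that a saddle point can be a stable fixed point of these dynamics, i.e., an attractor.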

Comment on Transferability and Input Transformation with Additive Noise

no code implementations • 18 Jun 2022 • Hoki Kim, Jinseong Park, Jaewook Lee

Adversarial attacks have demonstrated the vulnerability of neural networks.
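As a hedged illustration of the setting, here is a generic FGSM attack with optional additive input noise; the function and its parameters are assumptions for illustration, not the specific transformation the comment analyzes.

```python
# Hypothetical sketch: FGSM with optional additive Gaussian input noise,
# the kind of transformation whose effect on transferability is at issue.
import torch

def fgsm(model, loss_fn, x, y, eps=8 / 255, noise_std=0.0):
    # Optionally perturb the input with additive noise before the gradient step.
    x_in = x + noise_std * torch.randn_like(x) if noise_std > 0 else x
    x_in = x_in.clone().requires_grad_(True)
    loss = loss_fn(model(x_in), y)
    grad, = torch.autograd.grad(loss, x_in)
    # One signed-gradient step, clamped back to the valid pixel range.
    return (x + eps * grad.sign()).clamp(0, 1)
```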

Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

We identify another key factor that influences the performance of certifiable training: smoothness of the loss landscape.
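To make the object concrete: certifiable training needs a sound outer bound on the network's outputs under perturbation. Below is a hedged sketch of one common choice, interval bound propagation (IBP) through a single linear layer; it is not necessarily the exact relaxation used in the paper.

```python
# Hypothetical sketch: interval bound propagation through x -> Wx + b.
# Propagates elementwise input bounds to sound output bounds.
import torch

def ibp_linear(W, b, lower, upper):
    """Given elementwise bounds lower <= x <= upper, bound Wx + b."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ W.T + b
    out_radius = radius @ W.T.abs()   # |W| maps input radius to output radius
    return out_center - out_radius, out_center + out_radius
```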

Implicit Jacobian regularization weighted with impurity of probability output

no code implementations • 29 Sep 2021 • Sungyoon Lee, Jinseong Park, Jaewook Lee

The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output.

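As one hedged reading of "impurity of the probability output" (a Gini-style impurity of the softmax distribution; the paper's exact definition may differ):

```python
# Hypothetical sketch: Gini-style impurity of a softmax output.
# Equals 0 for a one-hot (confident) prediction and is maximal
# for a uniform (maximally uncertain) one.
import torch

def impurity(logits):
    p = torch.softmax(logits, dim=-1)
    return 1.0 - (p ** 2).sum(dim=-1)
```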

Loss Landscape Matters: Training Certifiably Robust Models with Favorable Loss Landscape

no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.
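In symbols, the sentence describes minimizing a certified surrogate that upper-bounds the worst-case loss over the perturbation ball, with the gap between the two being the "tightness" factor mentioned:

\min_\theta \; \mathbb{E}_{(x,y)}\big[\, \overline{L}_\epsilon(x, y; \theta) \,\big], \qquad \overline{L}_\epsilon(x, y; \theta) \;\ge\; \max_{\lVert \delta \rVert \le \epsilon} L\big(f_\theta(x + \delta),\, y\big)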
