1 code implementation • 6 Jan 2024 • Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saeroom Park
Diffusion models have shown their effectiveness in generation tasks by accurately approximating the underlying probability distribution.
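For background, this approximation is typically learned by reversing a gradual noising process. Below is a minimal sketch of the standard DDPM-style denoising objective, not this paper's method; the network `eps_model` and the linear noise schedule are illustrative assumptions.

```python
import torch

# A DDPM-style denoising objective (generic background, not this paper's
# method): a network eps_model(x_t, t) learns to predict the injected noise,
# which implicitly fits the data distribution. Schedule values are assumed.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)     # remaining signal fraction

def ddpm_loss(eps_model, x0):
    t = torch.randint(0, T, (x0.shape[0],))        # random timestep per sample
    a = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps     # noised sample at step t
    return ((eps_model(x_t, t) - eps) ** 2).mean() # simple denoising loss
```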
1 code implementation • 9 Jun 2023 • Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee
Training deep learning models with differential privacy (DP) degrades their performance.
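The degradation typically stems from the two operations of the standard DP-SGD recipe: per-example gradient clipping and Gaussian noise addition. A minimal sketch of that generic recipe follows, not of this paper's approach; the parameter names and values are illustrative.

```python
import torch

# DP-SGD core update (a generic sketch of the clip-and-noise recipe, not
# this paper's method): clip each per-example gradient to L2 norm C, then
# add Gaussian noise with standard deviation sigma * C. Both steps distort
# the average gradient, which is the usual source of the accuracy drop.
def dp_sgd_step(params, per_example_grads, lr=0.1, C=1.0, sigma=1.0):
    clipped = []
    for grads in per_example_grads:              # grads: one tensor per param
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
        scale = torch.clamp(C / (norm + 1e-12), max=1.0)
        clipped.append([g * scale for g in grads])
    n = len(clipped)
    for i, p in enumerate(params):
        avg = sum(g[i] for g in clipped) / n     # clipped-gradient mean
        noise = torch.randn_like(p) * (sigma * C / n)
        p.data -= lr * (avg + noise)             # noisy private update
```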
no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee
Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
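For context, SAM replaces the plain gradient step with a two-step procedure: ascend to an adversarial point within an L2 ball of radius rho, then descend using the gradient computed there. A hedged sketch of the generic update; `loss_fn` and the hyperparameters are assumptions.

```python
import torch

# One SAM update (a sketch of the generic procedure, not this paper's
# contribution): first move to the worst-case point within radius rho,
# then apply the gradient evaluated there to the original weights.
# All tensors in params must have requires_grad=True.
def sam_step(params, loss_fn, lr=0.1, rho=0.05):
    grads = torch.autograd.grad(loss_fn(), params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):            # ascend: w -> w + eps
            p += e
    sharp_grads = torch.autograd.grad(loss_fn(), params)
    with torch.no_grad():
        for p, e, g in zip(params, eps, sharp_grads):
            p -= e + lr * g                      # restore w, then descend
```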
no code implementations • 16 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Jaewook Lee
Utilizing the qualitative theory of dynamical systems, we explain how SAM becomes stuck at saddle points, and then theoretically prove that a saddle point can become an attractor under SAM dynamics.
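The effect can be reproduced numerically on the toy saddle f(x, y) = x^2 - y^2: plain gradient descent escapes along the y-axis, while SAM's normalized ascent step can keep the iterate trapped near the origin. A small sketch with assumed step sizes, not the paper's experiment:

```python
import numpy as np

# Toy illustration on f(x, y) = x**2 - y**2, whose saddle sits at the
# origin: plain GD escapes along y, while SAM's normalized ascent step can
# trap the iterate near the saddle. Step sizes eta, rho are assumptions.
def grad(w):
    return np.array([2.0 * w[0], -2.0 * w[1]])

eta, rho = 0.05, 0.1
w_gd = np.array([0.01, 0.01])
w_sam = np.array([0.01, 0.01])
for _ in range(50):
    w_gd = w_gd - eta * grad(w_gd)                 # GD: |y| grows each step
    g = grad(w_sam)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)    # ascent perturbation
    w_sam = w_sam - eta * grad(w_sam + eps)        # SAM stays near the saddle
print(abs(w_gd[1]), abs(w_sam[1]))                 # GD escapes; SAM does not
```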
no code implementations • 18 Jun 2022 • Hoki Kim, Jinseong Park, Jaewook Lee
Adversarial attacks have demonstrated the vulnerability of neural networks.
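As background on such attacks, the fast gradient sign method (FGSM) is the canonical example: a bounded perturbation along the sign of the input gradient. A minimal sketch, not this paper's method; `eps` is an assumed budget.

```python
import torch
import torch.nn.functional as F

# FGSM (Goodfellow et al.), shown as generic background rather than this
# paper's method: one signed-gradient step of size eps, assuming inputs
# normalized to [0, 1].
def fgsm(model, x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()                               # gradient w.r.t. the input
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```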
1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
We identify another key factor that influences the performance of certifiable training: the smoothness of the loss landscape.
no code implementations • 29 Sep 2021 • Sungyoon Lee, Jinseong Park, Jaewook Lee
The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output.
2 code implementations • EMNLP 2021 • Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, WooMyoung Park, Nako Sung
GPT-3 demonstrates the remarkable in-context learning ability of large-scale language models (LMs) trained on data at the scale of hundreds of billions of tokens.
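In-context learning means the model adapts from demonstrations placed in the prompt alone, with no weight updates. A toy illustration with a made-up sentiment task:

```python
# In-context learning in a nutshell (illustrative): the "training" consists
# of demonstrations in the prompt; no parameters are updated. The task and
# examples below are fabricated for illustration only.
prompt = (
    "Review: The food was wonderful. Sentiment: positive\n"
    "Review: Terrible service, never again. Sentiment: negative\n"
    "Review: A hidden gem, loved it. Sentiment:"
)
# A pretrained LM completing this prompt is expected to output "positive".
```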
no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.
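One standard way to obtain such an upper bound is interval bound propagation (IBP), which pushes an axis-aligned box around the input through the network layer by layer. A one-layer sketch as generic background, not necessarily the bound this paper analyzes:

```python
import torch

# Interval bound propagation (IBP) through a single linear layer: a common
# way to compute an upper bound of the kind described above (a generic
# sketch, not necessarily the bound this paper analyzes).
def ibp_linear(W, b, center, radius):
    out_center = center @ W.t() + b       # box center maps affinely
    out_radius = radius @ W.abs().t()     # |W| gives the new half-widths
    return out_center, out_radius         # outputs lie in center +/- radius
```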