no code implementations • 16 Feb 2024 • Jaewook Lee, Hanseul Cho, Chulhee Yun
The Gradient Descent-Ascent (GDA) algorithm, designed to solve minimax optimization problems, takes the descent and ascent steps either simultaneously (Sim-GDA) or alternately (Alt-GDA).
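The distinction between the two variants can be illustrated on the classic bilinear toy problem min_x max_y f(x, y) = xy. Below is a minimal NumPy sketch (function names, step size, and step count are illustrative, not taken from the paper): Sim-GDA updates both players from the same iterate, while Alt-GDA lets the ascent step see the freshly updated descent variable.

```python
import numpy as np

def sim_gda(x, y, lr=0.1, steps=100):
    """Simultaneous GDA: both players update from the same iterate."""
    for _ in range(steps):
        gx, gy = y, x          # gradients of f(x, y) = x * y
        x, y = x - lr * gx, y + lr * gy
    return x, y

def alt_gda(x, y, lr=0.1, steps=100):
    """Alternating GDA: the ascent step uses the updated x."""
    for _ in range(steps):
        x = x - lr * y         # descent step on x first
        y = y + lr * x         # ascent step on y sees the new x
    return x, y
```

On this bilinear game the simultaneous updates spiral outward (the squared norm grows by a factor 1 + lr² per step), whereas the alternating updates stay bounded, which is the textbook motivation for comparing the two schemes.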
1 code implementation • 6 Jan 2024 • Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saerom Park

Diffusion models have shown their effectiveness in generation tasks by well-approximating the underlying probability distribution.
1 code implementation • 17 Nov 2023 • Jaewook Lee, Seongmin Heo, Jay H. Lee
Accurately predicting the lifespan of lithium-ion batteries (LIBs) is pivotal for optimizing usage and preventing accidents.
no code implementations • 7 Aug 2023 • Hunter McNichols, Wanyong Feng, Jaewook Lee, Alexander Scarlatos, Digory Smith, Simon Woodhead, Andrew Lan
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and are a reliable form of assessment.
no code implementations • 26 Jun 2023 • Dahyun Jung, Jaehyung Seo, Jaewook Lee, Chanjun Park, Heuiseok Lim
Generative commonsense reasoning refers to the task of generating acceptable and logical assumptions about everyday situations based on commonsense understanding.
1 code implementation • 9 Jun 2023 • Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee
Training deep learning models with differential privacy (DP) results in a degradation of performance.
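For context, the standard mechanism behind DP training is DP-SGD: clip each per-example gradient to a fixed norm, average, and add calibrated Gaussian noise. The sketch below shows that baseline only, not the paper's proposed method; all names and parameters are illustrative.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each per-example gradient to norm <= clip,
    average the clipped gradients, then add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Noise std sigma * C on the sum corresponds to sigma * C / n on the mean.
    noise = np.random.normal(
        0.0, noise_mult * clip / len(per_example_grads), size=w.shape
    )
    return w - lr * (mean + noise)
```

The clipping step is what degrades utility relative to ordinary SGD: large informative gradients are shrunk, and the added noise further biases training, which is the performance gap the entry refers to.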
1 code implementation • 11 May 2023 • Nischal Ashok Kumar, Wanyong Feng, Jaewook Lee, Hunter McNichols, Aritra Ghosh, Andrew Lan
In this paper, we take a preliminary step towards solving the problem of causal discovery in knowledge tracing, i.e., finding the underlying causal relationship among different skills from real-world student response data.
no code implementations • 11 May 2023 • Jaewook Lee, Andrew Lan
Our approach, an end-to-end pipeline for generating verbal and visual cues, can automatically produce highly memorable cues.
no code implementations • 27 Apr 2023 • Junyoung Byun, Yujin Choi, Jaewook Lee
This study aims to alleviate the trade-off between utility and privacy in the task of differentially private clustering.
no code implementations • 13 Mar 2023 • Jaeyoung Cha, Jaewook Lee, Chulhee Yun
We study convergence lower bounds of without-replacement stochastic gradient descent (SGD) for solving smooth (strongly-)convex finite-sum minimization problems.
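Without-replacement SGD (often called random reshuffling) visits every component gradient exactly once per epoch in a fresh random order, in contrast to with-replacement sampling. A minimal sketch on a finite sum of quadratics f_i(w) = ½(w − a_i)², with illustrative names and constants:

```python
import numpy as np

def shuffled_sgd(grads, w0, lr, epochs, rng):
    """Without-replacement SGD: each epoch applies every component
    gradient exactly once, in a fresh random order."""
    w = w0
    n = len(grads)
    for _ in range(epochs):
        for i in rng.permutation(n):   # sample indices without replacement
            w = w - lr * grads[i](w)
        # (with-replacement SGD would instead draw i i.i.d. uniformly)
    return w
```

On this quadratic example the iterates settle near the minimizer of the average, w* = mean(a_i), up to a small step-size-dependent bias, which is the kind of epoch-level behavior that convergence bounds for without-replacement SGD quantify.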
no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee
Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
no code implementations • 16 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Jaewook Lee
Utilizing the qualitative theory of dynamical systems, we explain how SAM becomes stuck at saddle points and then theoretically prove that a saddle point can become an attractor under SAM dynamics.
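For reference, a SAM step first perturbs the weights along the normalized gradient direction, then descends using the gradient evaluated at the perturbed point. A minimal sketch (names and constants illustrative); note that at a critical point the gradient, and hence the perturbation, vanishes, which is why the behavior of SAM near saddle points is a meaningful question:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM step: ascend to the worst-case nearby point within
    a rho-ball, then descend with the gradient computed there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp
```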
no code implementations • 5 Oct 2022 • Xia Su, Kaiming Cheng, Han Zhang, Jaewook Lee, Jon E. Froehlich
To help improve the safety and accessibility of indoor spaces, researchers and health professionals have created assessment instruments that enable homeowners and trained experts to audit and improve homes.
no code implementations • 18 Jun 2022 • Hoki Kim, Jinseong Park, Jaewook Lee
Adversarial attacks have demonstrated the vulnerability of neural networks.
no code implementations • NeurIPS 2021 • Junyoung Byun, Woojin Lee, Jaewook Lee
However, current approaches to training machine learning models on encrypted data rely heavily on hyperparameter selection, which should be avoided owing to the extreme difficulty of conducting validation on encrypted data.
1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
We identify another key factor that influences the performance of certifiable training: \textit{smoothness of the loss landscape}.
no code implementations • 29 Sep 2021 • Sungyoon Lee, Jinseong Park, Jaewook Lee
The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output.
no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee
Adversarial robustness is considered a required property of deep neural networks.
no code implementations • 6 Jul 2021 • Sungyoon Lee, Hoki Kim, Jaewook Lee
Our experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods.
no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.
1 code implementation • NeurIPS 2020 • Sungyoon Lee, Jaewook Lee, Saerom Park
Our certifiable training algorithm provides a tight propagated outer bound by introducing box constraint propagation (BCP), and it efficiently computes the worst logit over the outer bound.
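In the spirit of propagating box bounds, the sketch below shows plain interval arithmetic through one affine layer and a pessimistic worst-logit margin; it is not the paper's BCP method (which additionally handles ball-shaped input constraints), and all names are illustrative.

```python
import numpy as np

def propagate_box(W, b, lo, hi):
    """Propagate an elementwise box [lo, hi] through x -> W x + b
    using interval arithmetic: positive weights take the matching
    bound, negative weights take the opposite one."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def worst_logit_margin(lo, hi, true_class):
    """Pessimistic margin over the output box: true-class lower bound
    minus the largest upper bound among the other classes."""
    others = np.delete(hi, true_class)
    return lo[true_class] - others.max()
```

If the margin stays positive over the propagated outer bound, no perturbation inside the input box can flip the prediction, which is the quantity certifiable training drives up.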
1 code implementation • 5 Oct 2020 • Hoki Kim, Woojin Lee, Jaewook Lee
Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed.
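Fast adversarial training replaces the multi-step inner attack with a single-step one: a random start inside the eps-ball followed by one FGSM step. A minimal sketch of that attack (constants and names illustrative; the paper's own fix for catastrophic overfitting is not shown here):

```python
import numpy as np

def rs_fgsm_example(x, grad_at, eps=8/255, alpha=10/255, rng=None):
    """Single-step attack used in fast adversarial training:
    random init inside the eps-ball, one FGSM step, then projection."""
    rng = rng or np.random.default_rng()
    delta = rng.uniform(-eps, eps, size=x.shape)         # random start
    delta = delta + alpha * np.sign(grad_at(x + delta))  # one FGSM step
    delta = np.clip(delta, -eps, eps)                    # project to eps-ball
    return np.clip(x + delta, 0.0, 1.0)                  # keep a valid image
```

Catastrophic overfitting refers to the failure mode where a model trained against this single-step attack abruptly loses robustness to stronger multi-step attacks such as PGD, even as its FGSM robustness keeps rising.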