Search Results for author: Jaewook Lee

Found 22 papers, 7 papers with code

Fundamental Benefit of Alternating Updates in Minimax Optimization

no code implementations • 16 Feb 2024 • Jaewook Lee, Hanseul Cho, Chulhee Yun

The Gradient Descent-Ascent (GDA) algorithm, designed to solve minimax optimization problems, takes the descent and ascent steps either simultaneously (Sim-GDA) or alternately (Alt-GDA).
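
The distinction between the two variants can be sketched as follows, a minimal illustration (not the paper's analysis) on the bilinear toy problem f(x, y) = x·y, where Sim-GDA is known to diverge while Alt-GDA stays bounded:

```python
def sim_gda(x, y, lr, steps):
    """Simultaneous GDA: both players update from the same iterate."""
    for _ in range(steps):
        gx, gy = y, x                     # grads of f(x, y) = x*y
        x, y = x - lr * gx, y + lr * gy   # descent in x, ascent in y
    return x, y

def alt_gda(x, y, lr, steps):
    """Alternating GDA: the ascent step sees the fresh descent iterate."""
    for _ in range(steps):
        x = x - lr * y   # descent step first
        y = y + lr * x   # ascent step uses the updated x
    return x, y
```

Starting from (1, 1) with lr = 0.1, `sim_gda` spirals outward while `alt_gda` orbits at bounded distance, illustrating the benefit of alternation.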

Fair Sampling in Diffusion Models through Switching Mechanism

1 code implementation • 6 Jan 2024 • Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saerom Park

Diffusion models have shown their effectiveness in generation tasks by well-approximating the underlying probability distribution.

Attribute Fairness

Attention Mechanism for Lithium-Ion Battery Lifespan Prediction: Temporal and Cyclic Attention

1 code implementation • 17 Nov 2023 • Jaewook Lee, Seongmin Heo, Jay H. Lee

Accurately predicting the lifespan of lithium-ion batteries (LIBs) is pivotal for optimizing usage and preventing accidents.

Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning

no code implementations • 7 Aug 2023 • Hunter McNichols, Wanyong Feng, Jaewook Lee, Alexander Scarlatos, Digory Smith, Simon Woodhead, Andrew Lan

Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and provide a reliable form of assessment.

In-Context Learning Math +2

Knowledge Graph-Augmented Korean Generative Commonsense Reasoning

no code implementations • 26 Jun 2023 • Dahyun Jung, Jaehyung Seo, Jaewook Lee, Chanjun Park, Heuiseok Lim

Generative commonsense reasoning refers to the task of generating acceptable and logical assumptions about everyday situations based on commonsense understanding.

Text Generation

Differentially Private Sharpness-Aware Training

1 code implementation • 9 Jun 2023 • Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee

Training deep learning models with differential privacy (DP) results in a degradation of performance.
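
The degradation comes largely from the noise and clipping that DP requires. As context (this is standard DP-SGD in the style of Abadi et al., not the paper's sharpness-aware method), a single private update looks like:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One generic DP-SGD step: clip each per-example gradient to
    clip_norm, average, then add calibrated Gaussian noise."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Both the clipping bias and the injected noise perturb the optimization trajectory, which is what motivates pairing DP training with flatness-seeking objectives.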

A Conceptual Model for End-to-End Causal Discovery in Knowledge Tracing

1 code implementation • 11 May 2023 • Nischal Ashok Kumar, Wanyong Feng, Jaewook Lee, Hunter McNichols, Aritra Ghosh, Andrew Lan

In this paper, we take a preliminary step towards solving the problem of causal discovery in knowledge tracing, i.e., finding the underlying causal relationships among different skills from real-world student response data.

Causal Discovery Knowledge Tracing

SmartPhone: Exploring Keyword Mnemonic with Auto-generated Verbal and Visual Cues

no code implementations • 11 May 2023 • Jaewook Lee, Andrew Lan

Our approach, an end-to-end pipeline for generating verbal and visual cues, can automatically produce highly memorable cues.

Retrieval Scheduling

Improving the Utility of Differentially Private Clustering through Dynamical Processing

no code implementations • 27 Apr 2023 • Junyoung Byun, Yujin Choi, Jaewook Lee

This study aims to alleviate the trade-off between utility and privacy in the task of differentially private clustering.

Clustering

Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond

no code implementations • 13 Mar 2023 • Jaeyoung Cha, Jaewook Lee, Chulhee Yun

We study convergence lower bounds of without-replacement stochastic gradient descent (SGD) for solving smooth (strongly-)convex finite-sum minimization problems.
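
For readers unfamiliar with the setting, without-replacement SGD (random reshuffling) visits every component function exactly once per epoch; a minimal sketch (purely illustrative, not the paper's construction) is:

```python
import numpy as np

def shuffle_sgd_epoch(w, grad_fns, lr, rng):
    """One epoch of without-replacement (random reshuffling) SGD:
    visit each component gradient exactly once, in a random order."""
    for i in rng.permutation(len(grad_fns)):
        w = w - lr * grad_fns[i](w)
    return w
```

This contrasts with with-replacement SGD, which samples an index i.i.d. at every step; the paper's lower bounds quantify how much (and when) the reshuffled variant can help.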

Exploring the Effect of Multi-step Ascent in Sharpness-Aware Minimization

no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee

Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
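
For context, a single standard (single-ascent-step) SAM update looks roughly like the sketch below; this is the generic SAM of Foret et al., not the multi-step variant the paper studies:

```python
import numpy as np

def sam_step(w, grad_fn, rho, lr):
    """One SAM step: take an ascent step of radius rho toward the
    locally worst-case weights, then descend using the gradient there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp
```

The paper's question is what changes when the inner maximization (the `eps` computation) uses several ascent steps instead of one.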

Stability Analysis of Sharpness-Aware Minimization

no code implementations • 16 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Jaewook Lee

Utilizing the qualitative theory of dynamical systems, we explain how SAM becomes stuck in the saddle point and then theoretically prove that the saddle point can become an attractor under SAM dynamics.

Towards Semi-automatic Detection and Localization of Indoor Accessibility Issues using Mobile Depth Scanning and Computer Vision

no code implementations • 5 Oct 2022 • Xia Su, Kaiming Cheng, Han Zhang, Jaewook Lee, Jon E. Froehlich

To help improve the safety and accessibility of indoor spaces, researchers and health professionals have created assessment instruments that enable homeowners and trained experts to audit and improve homes.

Comment on Transferability and Input Transformation with Additive Noise

no code implementations • 18 Jun 2022 • Hoki Kim, Jinseong Park, Jaewook Lee

Adversarial attacks have demonstrated the vulnerability of neural networks.

Parameter-free HE-friendly Logistic Regression

no code implementations • NeurIPS 2021 • Junyoung Byun, Woojin Lee, Jaewook Lee

However, current approaches to training machine learning models on encrypted data rely heavily on hyperparameter selection, which should be avoided because validation on encrypted data is extremely difficult.

BIG-bench Machine Learning Privacy Preserving +1

Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

We identify another key factor that influences the performance of certifiable training: the smoothness of the loss landscape.

Implicit Jacobian regularization weighted with impurity of probability output

no code implementations • 29 Sep 2021 • Sungyoon Lee, Jinseong Park, Jaewook Lee

The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output.

Relation

Bridged Adversarial Training

no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee

Adversarial robustness is considered a required property of deep neural networks.

Adversarial Robustness

GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization

no code implementations • 6 Jul 2021 • Sungyoon Lee, Hoki Kim, Jaewook Lee

Our experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods.

Adversarial Robustness

Loss Landscape Matters: Training Certifiably Robust Models with Favorable Loss Landscape

no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.

Lipschitz-Certifiable Training with a Tight Outer Bound

1 code implementation • NeurIPS 2020 • Sungyoon Lee, Jaewook Lee, Saerom Park

Our certifiable training algorithm provides a tight propagated outer bound by introducing the box constraint propagation (BCP), and it efficiently computes the worst logit over the outer bound.
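
The idea of propagating an outer bound can be illustrated with plain interval bound propagation through a linear layer; this is a simplified relative of such bounds, not the paper's BCP (which propagates tighter box constraints):

```python
import numpy as np

def interval_bound_linear(l, u, W, b):
    """Generic interval bound propagation through a linear layer:
    given elementwise input bounds [l, u], return sound output bounds."""
    mid, rad = (u + l) / 2.0, (u - l) / 2.0  # center and radius of the box
    out_mid = W @ mid + b
    out_rad = np.abs(W) @ rad                # worst-case deviation per output
    return out_mid - out_rad, out_mid + out_rad
```

Certifiable training then minimizes a loss on the worst logit implied by such bounds, so tighter propagation directly improves the certified guarantee.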

Understanding Catastrophic Overfitting in Single-step Adversarial Training

1 code implementation • 5 Oct 2020 • Hoki Kim, Woojin Lee, Jaewook Lee

Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed.
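
Fast adversarial training builds its training examples with the single-step FGSM attack; a minimal sketch of that perturbation (the generic FGSM of Goodfellow et al., not the paper's proposed fix) is:

```python
import numpy as np

def fgsm_example(x, grad_wrt_x, epsilon):
    """FGSM: the single-step attack used in fast adversarial training.
    Perturb the input by epsilon in the sign of the loss gradient w.r.t. x.
    Training only on such examples can 'catastrophically overfit':
    the model stays robust to FGSM but loses robustness to multi-step attacks."""
    return x + epsilon * np.sign(grad_wrt_x)
```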
