Search Results for author: Junghyun Lee

Found 10 papers, 5 papers with code

Querying Easily Flip-flopped Samples for Deep Active Learning

no code implementations18 Jan 2024 Seong Jin Cho, Gwangsu Kim, Junghyun Lee, Jinwoo Shin, Chang D. Yoo

Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.

Active Learning
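As a rough illustration of the query step described above — not the paper's flip-flop criterion, just the standard margin-based uncertainty-sampling baseline it builds on — the following sketch ranks unlabeled samples by the gap between their two most likely predicted classes (all names are hypothetical):

```python
def query_most_uncertain(probs, k):
    """Rank unlabeled samples by the margin between their two most
    likely classes and return the indices of the k most ambiguous ones."""
    margins = []
    for i, p in enumerate(probs):
        top = sorted(p, reverse=True)
        margins.append((top[0] - top[1], i))  # small margin = high uncertainty
    margins.sort()
    return [i for _, i in margins[:k]]

probs = [
    [0.90, 0.05, 0.05],  # confident prediction
    [0.40, 0.35, 0.25],  # ambiguous
    [0.50, 0.48, 0.02],  # most ambiguous
]
print(query_most_uncertain(probs, 2))  # → [2, 1]
```

The selected samples would then be sent to an oracle for labeling and added to the training set.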

Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study

no code implementations25 Nov 2023 Prin Phunyaphibarn, Junghyun Lee, Bohan Wang, Huishuai Zhang, Chulhee Yun

Although gradient descent with momentum is widely used in modern deep learning, a concrete understanding of its effects on the training trajectory still remains elusive.
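The setting studied above — heavy-ball momentum combined with a learning-rate warmup — can be sketched in a few lines. This is a minimal illustration of the update rule on a toy quadratic, not the paper's experimental setup; the function name and hyperparameters are assumptions:

```python
def momentum_gd_warmup(grad, x0, peak_lr, warmup_steps, total_steps, beta=0.9):
    """Heavy-ball momentum with a linear learning-rate warmup:
    the step size ramps from 0 to peak_lr over the first warmup_steps updates."""
    x, v = x0, 0.0
    for t in range(total_steps):
        lr = peak_lr * min(1.0, (t + 1) / warmup_steps)  # linear warmup
        v = beta * v + grad(x)                            # momentum buffer
        x = x - lr * v
    return x

# minimize f(x) = x^2 (gradient 2x); the iterate should converge near 0
x_star = momentum_gd_warmup(lambda x: 2 * x, x0=5.0, peak_lr=0.1,
                            warmup_steps=20, total_steps=200)
print(round(x_star, 6))
```

On non-convex deep-learning losses, the same recipe can produce the large initial "catapult" excursions the paper studies empirically.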

Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint

1 code implementation NeurIPS 2023 Junghyun Lee, Hanseul Cho, Se-Young Yun, Chulhee Yun

Fair Principal Component Analysis (PCA) is a problem setting where we aim to perform PCA while making the resulting representation fair in that the projected distributions, conditional on the sensitive attributes, match one another.

Improved Regret Bounds of (Multinomial) Logistic Bandits via Regret-to-Confidence-Set Conversion

2 code implementations28 Oct 2023 Junghyun Lee, Se-Young Yun, Kwang-Sung Jun

The logistic bandit is a ubiquitous framework for modeling users' choices, e.g., click vs. no click in an advertisement recommender system.

Recommendation Systems
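To make the reward model concrete: in a logistic bandit, pulling an arm with feature vector $a$ yields a Bernoulli reward with success probability $\sigma(\langle\theta, a\rangle)$ for an unknown parameter $\theta$. The sketch below simulates that model only — it does not implement the paper's regret-to-confidence-set conversion; all names are illustrative:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pull(theta, arm, rng):
    """Bernoulli reward: click with probability sigmoid(<theta, arm>)."""
    p = sigmoid(sum(t * a for t, a in zip(theta, arm)))
    return 1 if rng.random() < p else 0

rng = random.Random(0)
theta = [1.0, -0.5]   # unknown user-preference parameter
arm = [0.8, 0.2]      # feature vector of the shown ad
clicks = sum(pull(theta, arm, rng) for _ in range(10_000))
print(clicks / 10_000)  # ≈ sigmoid(0.7) ≈ 0.668
```

A bandit algorithm maintains a confidence set for theta from such binary feedback and picks the arm maximizing an optimistic estimate of the click probability.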

Optimizing Layerwise Polynomial Approximation for Efficient Private Inference on Fully Homomorphic Encryption: A Dynamic Programming Approach

no code implementations16 Oct 2023 Junghyun Lee, Eunsang Lee, Young-Sik Kim, Yongwoo Lee, Joon-Woo Lee, Yongjune Kim, Jong-Seon No

Unlike the previous works approximating activation functions uniformly and conservatively, this paper presents a \emph{layerwise} degree optimization of activation functions to aggressively reduce the inference time while maintaining classification accuracy by taking into account the characteristics of each layer.

Privacy Preserving
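The core ingredient above — replacing a non-arithmetic activation with a polynomial whose degree is tuned per layer — can be illustrated with a plain least-squares fit. This sketch (not the paper's minimax or dynamic-programming method) shows how the approximation error of ReLU shrinks as the degree grows, which is the trade-off the layerwise optimization exploits:

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit: solve the normal equations
    (V^T V) c = V^T y by Gaussian elimination with partial pivoting."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs

relu = lambda x: max(x, 0.0)
xs = [i / 100 for i in range(-100, 101)]      # grid on [-1, 1]
ys = [relu(x) for x in xs]
errs = {}
for deg in (2, 4, 8):                         # higher degree → smaller error
    c = polyfit_ls(xs, ys, deg)
    errs[deg] = max(abs(sum(ck * x ** k for k, ck in enumerate(c)) - relu(x))
                    for x in xs)
    print(deg, round(errs[deg], 4))
```

Under FHE, a higher degree also means more homomorphic multiplications, so per-layer degree selection trades inference time against accuracy.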

Flooding with Absorption: An Efficient Protocol for Heterogeneous Bandits over Complex Networks

1 code implementation9 Mar 2023 Junghyun Lee, Laura Schmid, Se-Young Yun

To mitigate the high communication costs incurred by flooding in complex networks, we propose a new protocol called Flooding with Absorption (FwA).

Decision Making · Multi-Armed Bandits +1

Nearly Optimal Latent State Decoding in Block MDPs

1 code implementation17 Aug 2022 Yassir Jedra, Junghyun Lee, Alexandre Proutière, Se-Young Yun

We investigate the problems of model estimation and reward-free learning in episodic Block MDPs.

Fast and Efficient MMD-based Fair PCA via Optimization over Stiefel Manifold

2 code implementations23 Sep 2021 Junghyun Lee, Gwangsu Kim, Matt Olfat, Mark Hasegawa-Johnson, Chang D. Yoo

This paper defines fair principal component analysis (PCA) as minimizing the maximum mean discrepancy (MMD) between dimensionality-reduced conditional distributions of different protected classes.

Fairness
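The objective above hinges on measuring how far apart the projected group-conditional distributions are. The following sketch computes a biased squared-MMD estimate under an RBF kernel for two 1-D projected samples — just the discrepancy measure, not the paper's Stiefel-manifold optimization; the data and gamma are illustrative:

```python
import math

def mmd2(xs, ys, gamma=1.0):
    """Biased estimate of squared MMD between samples xs and ys
    under the RBF kernel k(a, b) = exp(-gamma * (a - b)^2)."""
    k = lambda a, b: math.exp(-gamma * (a - b) ** 2)
    mean = lambda zs, ws: (sum(k(a, b) for a in zs for b in ws)
                           / (len(zs) * len(ws)))
    return mean(xs, xs) + mean(ys, ys) - 2 * mean(xs, ys)

# projections of protected groups onto a candidate PCA direction
group_a = [0.10, 0.20, 0.15, 0.05]
group_b = [0.12, 0.18, 0.10, 0.16]  # similar distribution → small MMD
group_c = [1.10, 1.30, 0.90, 1.20]  # shifted distribution → large MMD
print(mmd2(group_a, group_b) < mmd2(group_a, group_c))  # → True
```

Fair PCA in this sense seeks a projection that keeps variance high while driving this discrepancy between groups toward zero.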

Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network

no code implementations14 Jun 2021 Joon-Woo Lee, HyungChul Kang, Yongwoo Lee, Woosuk Choi, Jieun Eom, Maxim Deryabin, Eunsang Lee, Junghyun Lee, Donghoon Yoo, Young-Sik Kim, Jong-Seon No

Previous PPML schemes replace non-arithmetic activation functions with simple arithmetic functions instead of adopting approximation methods and do not use bootstrapping, which enables continuous homomorphic evaluations.

BIG-bench Machine Learning · Privacy Preserving
