Search Results for author: Gwangsu Kim

Found 6 papers, 2 papers with code

Querying Easily Flip-flopped Samples for Deep Active Learning

no code implementations • 18 Jan 2024 • Seong Jin Cho, Gwangsu Kim, Junghyun Lee, Jinwoo Shin, Chang D. Yoo

Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
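
To make the paradigm concrete, below is a minimal sketch of a pool-based active learning loop using least-confidence sampling, a standard baseline query strategy. The dataset, model, and budget are illustrative assumptions; this is not the flip-flop-based query criterion proposed in this paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative pool-based active learning loop with least-confidence
# sampling; NOT the flip-flop query criterion of this paper.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for _ in range(10):                           # query budget: 10 rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Least-confidence: query the sample whose top-class probability is lowest.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)                     # oracle reveals the label
    pool.remove(query)

model.fit(X[labeled], y[labeled])             # retrain on the final labeled set
print(f"accuracy on the full set (illustration only): {model.score(X, y):.3f}")
```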

Active Learning

ESD: Expected Squared Difference as a Tuning-Free Trainable Calibration Measure

1 code implementation • 4 Mar 2023 • Hee Suk Yoon, Joshua Tian Jin Tee, Eunseop Yoon, Sunjae Yoon, Gwangsu Kim, Yingzhen Li, Chang D. Yoo

Studies have shown that modern neural networks tend to be poorly calibrated due to over-confident predictions.
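
As context for what a calibration measure evaluates, here is a minimal sketch of the standard binned Expected Calibration Error (ECE), which compares a model's confidence to its accuracy within confidence bins. ECE is a common baseline measure, not the trainable ESD objective introduced in this paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: mean |accuracy - confidence| weighted by bin mass.
    A common baseline calibration measure; NOT the paper's ESD."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy usage: over-confident predictions yield a large ECE.
conf = np.array([0.95, 0.90, 0.99, 0.85, 0.92])
hit = np.array([1, 0, 1, 0, 0], dtype=float)
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```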

Deep Neural Network Based Accelerated Failure Time Models using Rank Loss

no code implementations • 13 Jun 2022 • Gwangsu Kim, Sangwook Kang

An accelerated failure time (AFT) model assumes a log-linear relationship between failure times and a set of covariates.
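
In symbols, the assumed log-linear relationship is

$$\log T_i = x_i^\top \beta + \epsilon_i, \qquad i = 1, \dots, n,$$

where $T_i$ is the failure time, $x_i$ the covariate vector, $\beta$ the regression coefficients, and $\epsilon_i$ a random error. Per the title, the paper's variant replaces the linear predictor $x_i^\top \beta$ with a deep neural network trained with a rank loss.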

Active Learning: Sampling in the Least Probable Disagreement Region

no code implementations • 29 Sep 2021 • Seong Jin Cho, Gwangsu Kim, Chang D. Yoo

Querying samples near the estimated decision boundary is a standard active learning strategy, but it is valid only when the sample's "closeness" to the decision boundary can be estimated.
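
One common, if crude, proxy for such closeness is the softmax margin between the top two predicted classes: a small margin suggests the sample lies near the boundary. The sketch below is a generic illustration of that heuristic, not the disagreement-region criterion of this paper.

```python
import numpy as np

def softmax_margin(logits):
    """Margin between the top-two class probabilities. A small margin is a
    common heuristic proxy for closeness to the decision boundary.
    Generic illustration only; NOT this paper's disagreement criterion."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    top2 = np.sort(p, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

logits = np.array([[2.0, 1.9, -1.0],    # ambiguous: near the boundary
                   [5.0, -2.0, -3.0]])  # confident: far from the boundary
print(softmax_margin(logits))           # small margin first, large second
```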

Active Learning

Fast and Efficient MMD-based Fair PCA via Optimization over Stiefel Manifold

2 code implementations • 23 Sep 2021 • Junghyun Lee, Gwangsu Kim, Matt Olfat, Mark Hasegawa-Johnson, Chang D. Yoo

This paper defines fair principal component analysis (PCA) as minimizing the maximum mean discrepancy (MMD) between dimensionality-reduced conditional distributions of different protected classes.
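
For reference, below is a minimal sketch of the biased empirical MMD$^2$ estimate with an RBF kernel between two projected groups. The projection matrix and kernel bandwidth are illustrative assumptions, and the paper's Stiefel-manifold optimization is not included; fair PCA in this sense would minimize this quantity over the projection.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased empirical MMD^2 with RBF kernel k(a,b) = exp(-gamma*||a-b||^2):
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))           # protected group A, raw features
Y = rng.normal(loc=0.5, size=(100, 10))  # protected group B, raw features
V = np.linalg.qr(rng.normal(size=(10, 2)))[0]  # illustrative orthonormal projection
print(mmd2_rbf(X @ V, Y @ V))  # fair PCA minimizes this over V on the Stiefel manifold
```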

Fairness

Least Probable Disagreement Region for Active Learning

no code implementations • 1 Jan 2021 • Seong Jin Cho, Gwangsu Kim, Chang D. Yoo

The active learning strategy of querying unlabeled samples nearer the estimated decision boundary at each step is known to be effective when the distance from a sample to the decision boundary can be explicitly evaluated; however, in many machine learning settings, especially those involving deep learning, a conventional distance such as the $\ell_p$ distance from a sample to the decision boundary is not readily measurable.
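
For contrast, the linear case admits an explicit distance: for a classifier $f(x) = w^\top x + b$, the $\ell_2$ distance from a sample $x$ to the decision boundary $\{x : f(x) = 0\}$ is

$$d(x) = \frac{|w^\top x + b|}{\lVert w \rVert_2},$$

a closed form with no analogue for the decision boundary of a deep network, which is what motivates a surrogate measure such as the disagreement region studied here.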

Active Learning
