Search Results for author: Sangwon Jung

Found 6 papers, 2 papers with code

Dataset Condensation with Contrastive Signals

2 code implementations 7 Feb 2022 Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon

However, in this study, we prove that existing dataset condensation (DC) methods can perform worse than random selection when task-irrelevant information forms a significant part of the training dataset.

Attribute, Continual Learning +2

Learning Fair Classifiers with Partially Annotated Group Labels

1 code implementation CVPR 2022 Sangwon Jung, Sanghyuk Chun, Taesup Moon

To address this problem, we propose a simple Confidence-based Group Label assignment (CGL) strategy that is readily applicable to any fairness-aware learning method.
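The excerpt above only names the strategy. Below is a minimal sketch of what a confidence-based group label assignment could look like, assuming an auxiliary group classifier has already been trained on the annotated subset; the function name, the threshold value, and the random fallback for low-confidence samples are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def assign_group_labels(group_probs, known_mask, known_groups, threshold=0.8, rng=None):
    """Fill in missing group labels with a confidence-based rule (illustrative sketch).

    group_probs : (N, G) softmax outputs of an auxiliary group classifier
    known_mask  : (N,) bool, True where the group label is annotated
    known_groups: (N,) int, annotated group labels (ignored where known_mask is False)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n, g = group_probs.shape
    labels = known_groups.copy()

    conf = group_probs.max(axis=1)      # classifier confidence per sample
    pred = group_probs.argmax(axis=1)   # predicted group per sample

    missing = ~known_mask
    confident = missing & (conf >= threshold)
    uncertain = missing & (conf < threshold)

    labels[confident] = pred[confident]                        # trust confident predictions
    labels[uncertain] = rng.integers(0, g, uncertain.sum())    # otherwise assign a random group
    return labels
```

The completed labels can then be fed to any fairness-aware training method that expects fully annotated groups, which is the plug-in use the excerpt describes.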

Fairness

Continual Learning with Node-Importance based Adaptive Group Sparse Regularization

no code implementations NeurIPS 2020 Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon

We propose a novel regularization-based continual learning method, dubbed as Adaptive Group Sparsity based Continual Learning (AGS-CL), using two group sparsity-based penalties.
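As a rough illustration of the kind of group-sparsity penalty mentioned above, the sketch below applies an L2,1-style norm over each output node's incoming weights; the paper's second, node-importance-weighted penalty and its proximal-gradient update are not shown, and the names and coefficient are assumptions.

```python
import torch

def node_group_sparsity(weight: torch.Tensor) -> torch.Tensor:
    """L2,1-style group-sparsity penalty over output nodes (illustrative sketch).

    weight: (out_features, in_features) matrix of a linear layer; each row is
    treated as one node's group of incoming weights, so the penalty encourages
    entire unused nodes to be driven to zero.
    """
    return weight.norm(p=2, dim=1).sum()

# Usage sketch with a hypothetical coefficient `mu`:
# loss = task_loss + mu * sum(node_group_sparsity(m.weight)
#                             for m in model.modules()
#                             if isinstance(m, torch.nn.Linear))
```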

Continual Learning

Fair Feature Distillation for Visual Recognition

no code implementations CVPR 2021 Sangwon Jung, DongGyu Lee, TaeEon Park, Taesup Moon

Fairness is becoming an increasingly crucial issue for computer vision, especially in human-related decision systems.

Fairness, Knowledge Distillation

Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization

no code implementations 1 Mar 2023 Sangwon Jung, TaeEon Park, Sanghyuk Chun, Taesup Moon

Many existing group fairness-aware training methods aim to achieve group fairness either by re-weighting underrepresented groups based on certain rules or by adding weakly approximated surrogates for the fairness metrics to the objective as regularization terms.
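To make the generic re-weighting idea in the sentence above concrete, here is a minimal sketch of a group re-weighted loss; how the per-group weights are updated (the paper uses a classwise robust-optimization formulation) is not shown, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def group_reweighted_loss(logits, targets, groups, group_weights):
    """Group re-weighted classification loss (illustrative sketch).

    logits        : (N, C) model outputs
    targets       : (N,) class labels
    groups        : (N,) sensitive-group index per sample
    group_weights : (G,) non-negative weights, e.g. larger for groups on which
                    the current model performs worse
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (group_weights[groups] * per_sample).mean()
```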

Fairness

Continual Learning in the Presence of Spurious Correlation

no code implementations 21 Mar 2023 DongGyu Lee, Sangwon Jung, Taesup Moon

Specifically, we first show through two-task CL experiments that standard CL methods, which are unaware of dataset bias, can transfer biases from one task to another, both forward and backward, and that this transfer can be exacerbated depending on whether the CL methods focus on stability or plasticity.

Continual Learning, Transfer Learning
