Search Results for author: Sungmin Cha

Found 19 papers, 7 papers with code

Salience-Based Adaptive Masking: Revisiting Token Dynamics for Enhanced Pre-training

no code implementations • 12 Apr 2024 • Hyesong Choi, Hyejin Park, Kwang Moo Yi, Sungmin Cha, Dongbo Min

In this paper, we introduce Saliency-Based Adaptive Masking (SBAM), a novel and cost-effective approach that significantly enhances the pre-training performance of Masked Image Modeling (MIM) approaches by prioritizing token salience.
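
As a rough illustration of the idea of prioritizing token salience during masking, the sketch below biases the random choice of masked patches toward higher-salience ones. The saliency source, the sampling scheme, and the fixed mask ratio are illustrative assumptions, not SBAM's actual procedure.

    import torch

    def salience_weighted_mask(saliency: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
        """saliency: (B, N) non-negative per-patch salience scores.
        Returns a boolean mask of shape (B, N) with True = masked token."""
        B, N = saliency.shape
        num_mask = int(N * mask_ratio)
        # Bias sampling (without replacement) toward salient patches.
        probs = saliency / saliency.sum(dim=1, keepdim=True).clamp_min(1e-8)
        idx = torch.multinomial(probs, num_mask, replacement=False)  # (B, num_mask)
        mask = torch.zeros(B, N, dtype=torch.bool)
        mask[torch.arange(B).unsqueeze(1), idx] = True
        return mask

    # Example: 8 images, 14x14 = 196 patches; the saliency scores could come from,
    # e.g., a teacher's attention maps (an assumption, not necessarily what SBAM uses).
    mask = salience_weighted_mask(torch.rand(8, 196))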

Hyperparameters in Continual Learning: a Reality Check

no code implementations • 14 Mar 2024 • Sungmin Cha, Kyunghyun Cho

In the Hyperparameter Tuning phase, each algorithm is trained repeatedly with different hyperparameter values in order to find the best-performing configuration.

Class Incremental Learning • Incremental Learning
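
The Hyperparameter Tuning phase mentioned above is essentially a search loop over candidate values; a minimal sketch follows, where `train_and_evaluate` is a hypothetical callback that trains one continual-learning run under a given configuration and returns its validation score.

    from itertools import product

    def tune(train_and_evaluate, grid: dict):
        """Exhaustively try every configuration in `grid` and keep the best one."""
        best_cfg, best_score = None, float("-inf")
        keys = list(grid)
        for values in product(*(grid[k] for k in keys)):
            cfg = dict(zip(keys, values))
            score = train_and_evaluate(cfg)  # e.g., average accuracy over the tuning task sequence
            if score > best_score:
                best_cfg, best_score = cfg, score
        return best_cfg, best_score

    # Illustrative grid for a regularization-based CL method (values are placeholders).
    grid = {"lr": [0.1, 0.01], "reg_lambda": [1.0, 10.0, 100.0]}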

Sy-CON: Symmetric Contrastive Loss for Continual Self-Supervised Representation Learning

no code implementations • 8 Jun 2023 • Sungmin Cha, Taesup Moon

We first argue that the conventional loss form of continual learning, which consists of a single task-specific loss (for plasticity) and a regularizer (for stability), may not be ideal for contrastive-loss-based CSSL, which focuses on representation learning.

Continual Learning • Contrastive Learning • +2
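
For reference, the "conventional loss form" the abstract critiques can be sketched as a single contrastive (plasticity) term plus a regularizer (stability) term, as below. This shows only that conventional form under assumed InfoNCE and feature-distillation choices, not Sy-CON's symmetric loss, which the snippet does not spell out.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.5):
        """Standard InfoNCE between two augmented views (assumed task-specific loss)."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature
        labels = torch.arange(z1.size(0))
        return F.cross_entropy(logits, labels)

    def conventional_cssl_loss(z1, z2, feat_new, feat_old, lam=1.0):
        plasticity = info_nce(z1, z2)                        # current-task contrastive loss
        stability = F.mse_loss(feat_new, feat_old.detach())  # regularizer toward the previous model
        return plasticity + lam * stability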

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

no code implementations • 27 Jan 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee

Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.

Image Classification

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

1 code implementation • 4 Oct 2022 • Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.

Ranked #3 on Language Modelling on The Pile (Test perplexity metric)

Language Modelling

Towards More Objective Evaluation of Class Incremental Learning: Representation Learning Perspective

no code implementations • 16 Jun 2022 • Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

While the common method for evaluating CIL algorithms is based on average test accuracy for all learned classes, we argue that maximizing accuracy alone does not necessarily lead to effective CIL algorithms.

Class Incremental Learning • Incremental Learning • +2

Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning

no code implementations • CVPR 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Sunwon Hong, Moontae Lee, Taesup Moon

The main reason for the ineffectiveness of their method lies in not fully addressing the data imbalance issue, especially in computing the gradients for learning the affine transformation parameters of BN.

Class Incremental Learning • Incremental Learning
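
One way to picture the imbalance problem described above is through the batch statistics themselves: if the mini-batch is dominated by current-task samples, the BN statistics (and the gradients of the affine parameters) follow suit. The sketch below computes statistics that weight current-task and exemplar samples equally; it is an illustrative reweighting, not the paper's exact rebalancing method.

    import torch

    def balanced_bn_stats(x_cur: torch.Tensor, x_exm: torch.Tensor):
        """x_cur: (N1, C, H, W) current-task features; x_exm: (N2, C, H, W) exemplar features."""
        def stats(x):
            return x.mean(dim=(0, 2, 3)), x.var(dim=(0, 2, 3), unbiased=False)
        mu_c, var_c = stats(x_cur)
        mu_e, var_e = stats(x_exm)
        mu = 0.5 * (mu_c + mu_e)                                 # equal weight to both groups
        var = 0.5 * (var_c + var_e) + 0.25 * (mu_c - mu_e) ** 2  # variance of the 50/50 mixture
        return mu, var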

Supervised Neural Discrete Universal Denoiser for Adaptive Denoising

no code implementations • 24 Nov 2021 • Sungmin Cha, Seonwoo Min, Sungroh Yoon, Taesup Moon

Namely, we make the supervised pre-training of Neural DUDE compatible with the adaptive fine-tuning of the parameters based on the given noisy data to be denoised.

Denoising
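
The workflow described above, supervised pre-training followed by adaptive fine-tuning on the noisy data at hand, can be sketched as below; `supervised_loss` and `adaptive_loss` are placeholders for the paper's objectives, and the training schedule is illustrative.

    import torch

    def pretrain_then_adapt(model, supervised_loader, noisy_data,
                            supervised_loss, adaptive_loss,
                            pre_epochs=10, adapt_steps=200, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(pre_epochs):                      # 1) supervised pre-training
            for clean, noisy in supervised_loader:
                opt.zero_grad()
                supervised_loss(model(noisy), clean).backward()
                opt.step()
        for _ in range(adapt_steps):                     # 2) adaptive fine-tuning on the given noisy data
            opt.zero_grad()
            adaptive_loss(model, noisy_data).backward()  # needs no clean targets
            opt.step()
        return model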

Observations on K-image Expansion of Image-Mixing Augmentation for Classification

no code implementations • 8 Oct 2021 • JoonHyun Jeong, Sungmin Cha, Youngjoon Yoo, Sangdoo Yun, Taesup Moon, Jongwon Choi

Image-mixing augmentations (e.g., Mixup and CutMix), which typically involve mixing two images, have become the de facto training techniques for image classification.

Adversarial Robustness • Classification • +1
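
A natural way to generalize two-image Mixup to K images is to draw the mixing weights from a Dirichlet distribution, as in the hedged sketch below; the paper's exact K-image expansion (and its CutMix counterpart) may differ from this.

    import torch

    def k_image_mixup(images: torch.Tensor, labels_onehot: torch.Tensor,
                      k: int = 4, alpha: float = 1.0):
        """images: (B, C, H, W); labels_onehot: (B, num_classes)."""
        B = images.size(0)
        weights = torch.distributions.Dirichlet(torch.full((k,), alpha)).sample((B,))  # (B, k)
        idx = torch.stack([torch.randperm(B) for _ in range(k)], dim=1)                # (B, k)
        mixed_x = torch.einsum('bk,bkchw->bchw', weights, images[idx])
        mixed_y = torch.einsum('bk,bkc->bc', weights, labels_onehot[idx])
        return mixed_x, mixed_y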

SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning

1 code implementation • NeurIPS 2021 • Sungmin Cha, Beomyoung Kim, Youngjoon Yoo, Taesup Moon

While recent CISS algorithms utilize variants of the knowledge distillation (KD) technique to tackle the problem, they fail to fully address the critical challenges in CISS that cause catastrophic forgetting: the semantic drift of the background class and the multi-label prediction issue.

Continual Semantic Segmentation • Disjoint 10-1 • +12
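
The "unknown label" in the title suggests carving likely-object pixels out of the background class so that the background does not silently absorb classes that arrive later; the sketch below relabels salient background pixels as an unknown class. The saliency source, threshold, and label id are illustrative assumptions, not SSUL's exact procedure.

    import torch

    def assign_unknown_label(labels: torch.Tensor, saliency: torch.Tensor,
                             bg_id: int = 0, unknown_id: int = 255, thr: float = 0.5):
        """labels: (H, W) integer class map; saliency: (H, W) map in [0, 1]."""
        labels = labels.clone()
        unknown = (labels == bg_id) & (saliency > thr)  # background pixels that look like objects
        labels[unknown] = unknown_id
        return labels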

FBI-Denoiser: Fast Blind Image Denoiser for Poisson-Gaussian Noise

1 code implementation • CVPR 2021 • Jaeseok Byun, Sungmin Cha, Taesup Moon

To that end, we propose the Fast Blind Image Denoiser (FBI-Denoiser) for Poisson-Gaussian noise, which consists of two neural network models: 1) PGE-Net, which estimates Poisson-Gaussian noise parameters 2000 times faster than conventional methods, and 2) FBI-Net, which realizes a much more efficient BSN for a pixelwise affine denoiser in terms of the number of parameters and inference speed.

Denoising
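
The two-model pipeline described above can be summarized as: estimate the Poisson-Gaussian noise parameters with PGE-Net, then denoise with the blind-spot FBI-Net conditioned on that estimate. The interface below is an assumed sketch, not the official implementation.

    import torch

    @torch.no_grad()
    def fbi_denoise(noisy: torch.Tensor, pge_net, fbi_net):
        """noisy: (B, 1, H, W) raw noisy image; pge_net / fbi_net are placeholder modules."""
        alpha, sigma = pge_net(noisy)            # estimated Poisson-Gaussian noise parameters
        denoised = fbi_net(noisy, alpha, sigma)  # blind-spot network conditioned on the estimate
        return denoised, (alpha, sigma)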

CPR: Classifier-Projection Regularization for Continual Learning

1 code implementation • ICLR 2021 • Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio P. Calmon, Taesup Moon

Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds an additional regularization term that maximizes the entropy of a classifier's output probability.

Continual Learning
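
The regularizer described above, maximizing the entropy of the classifier's output distribution, is straightforward to write down; a minimal sketch follows, with the weight `beta` as an illustrative hyperparameter.

    import torch
    import torch.nn.functional as F

    def cpr_loss(logits: torch.Tensor, targets: torch.Tensor, beta: float = 0.5) -> torch.Tensor:
        task_loss = F.cross_entropy(logits, targets)
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        return task_loss - beta * entropy  # subtracting entropy == rewarding higher-entropy outputs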

Continual Learning with Node-Importance based Adaptive Group Sparse Regularization

no code implementations • NeurIPS 2020 • Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon

We propose a novel regularization-based continual learning method, dubbed as Adaptive Group Sparsity based Continual Learning (AGS-CL), using two group sparsity-based penalties.

Continual Learning
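
A group-sparsity penalty of the kind the abstract refers to treats each node's incoming weights as one group and penalizes the group's L2 norm; the sketch below shows one such node-wise penalty scaled by a per-node importance score. AGS-CL combines two penalties of this family, and the exact gating by importance is simplified here.

    import torch

    def node_group_sparsity(weight: torch.Tensor, node_importance: torch.Tensor, mu: float = 1.0):
        """weight: (out_nodes, in_features) layer weights; node_importance: (out_nodes,) scores."""
        group_norms = weight.norm(p=2, dim=1)             # one group per output node ("group lasso")
        return mu * (node_importance * group_norms).sum()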

Uncertainty-based Continual Learning with Adaptive Regularization

2 code implementations • NeurIPS 2019 • Hongjoon Ahn, Sungmin Cha, DongGyu Lee, Taesup Moon

We introduce a new neural network-based continual learning algorithm, dubbed as Uncertainty-regularized Continual Learning (UCL), which builds on the traditional Bayesian online learning framework with variational inference.

Continual Learning • Variational Inference
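
The uncertainty-based regularization described above can be pictured as a per-parameter penalty whose strength grows as the variational posterior from previous tasks becomes more certain (lower variance); the sketch below is an illustrative quadratic form, not UCL's exact objective.

    import torch

    def uncertainty_reg(theta: torch.Tensor, theta_old: torch.Tensor,
                        sigma_old: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        """theta/theta_old: current and previous-task parameters; sigma_old: posterior std devs."""
        precision = 1.0 / (sigma_old ** 2 + 1e-8)  # certain parameters get a large penalty weight
        return lam * (precision * (theta - theta_old) ** 2).sum()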

GAN2GAN: Generative Noise Learning for Blind Denoising with Single Noisy Images

1 code implementation • ICLR 2021 • Sungmin Cha, TaeEon Park, Byeongjoon Kim, Jongduk Baek, Taesup Moon

We tackle a challenging blind image denoising problem, in which only single distinct noisy images are available for training a denoiser, and no information about noise is known, except for it being zero-mean, additive, and independent of the clean image.

Image Denoising

DoPAMINE: Double-sided Masked CNN for Pixel Adaptive Multiplicative Noise Despeckling

no code implementations • 7 Feb 2019 • Sunghwan Joo, Sungmin Cha, Taesup Moon

We propose DoPAMINE, a new neural network based multiplicative noise despeckling algorithm.

Denoising

Fully Convolutional Pixel Adaptive Image Denoiser

2 code implementations • ICCV 2019 • Sungmin Cha, Taesup Moon

We propose a new image denoising algorithm, dubbed as Fully Convolutional Adaptive Image DEnoiser (FC-AIDE), that can learn from an offline supervised training set with a fully convolutional neural network as well as adaptively fine-tune the supervised model for each given noisy image.

Image Denoising
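
The per-image adaptation described above (offline supervised training followed by fine-tuning on each given noisy image) can be sketched as below; `self_supervised_loss` stands in for a loss that requires no clean target, and the step count and learning rate are illustrative.

    import copy
    import torch

    def adapt_per_image(pretrained, noisy_images, self_supervised_loss, steps=50, lr=1e-4):
        outputs = []
        for noisy in noisy_images:                # fine-tune a fresh copy for each test image
            model = copy.deepcopy(pretrained)
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                self_supervised_loss(model, noisy).backward()
                opt.step()
            with torch.no_grad():
                outputs.append(model(noisy))
        return outputs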

Neural Affine Grayscale Image Denoising

no code implementations • 17 Sep 2017 • Sungmin Cha, Taesup Moon

We propose a new grayscale image denoiser, dubbed as Neural Affine Image Denoiser (Neural AIDE), which utilizes a neural network in a novel way.

Grayscale Image Denoising • Image Denoising
