no code implementations • 27 Jan 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee
Since the recent advent of data protection regulations (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.
1 code implementation • 4 Oct 2022 • Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo
Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities.
no code implementations • 16 Jun 2022 • Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon
While the common method for evaluating class-incremental learning (CIL) algorithms is average test accuracy over all learned classes, we argue that maximizing accuracy alone does not necessarily lead to effective CIL algorithms.
no code implementations • 29 Jan 2022 • Sungmin Cha, Soonwon Hong, Moontae Lee, Taesup Moon
We first show that the empirical mean and variance computed for normalization in a batch normalization (BN) layer become highly biased toward the current task.
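As a toy illustration of this bias (a sketch only; the Gaussian task distributions below are hypothetical, not the paper's setup), feeding a BatchNorm layer data from a new task quickly pulls its running statistics toward that task:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(1, momentum=0.1)

# Hypothetical task distributions: task A ~ N(0, 1), task B ~ N(5, 1).
task_a = torch.randn(1000, 1)
task_b = torch.randn(1000, 1) + 5.0

bn.train()
for _ in range(100):                      # train on task A
    bn(task_a[torch.randint(0, 1000, (32,))])
print(bn.running_mean.item())             # ~0: statistics match task A

for _ in range(100):                      # continue training on task B
    bn(task_b[torch.randint(0, 1000, (32,))])
print(bn.running_mean.item())             # ~5: biased toward the current task
```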
no code implementations • 24 Nov 2021 • Sungmin Cha, Seonwoo Min, Sungroh Yoon, Taesup Moon
Namely, we make the supervised pre-training of Neural DUDE compatible with adaptive fine-tuning of the parameters on the given noisy data to be denoised.
no code implementations • 8 Oct 2021 • JoonHyun Jeong, Sungmin Cha, Youngjoon Yoo, Sangdoo Yun, Taesup Moon, Jongwon Choi
Image-mixing augmentations (e.g., Mixup and CutMix), which typically involve mixing two images, have become de facto training techniques for image classification.
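For reference, standard mixup combines a batch with a shuffled copy of itself using a Beta-distributed coefficient; a minimal PyTorch sketch (not tied to this paper's specific variant):

```python
import torch

def mixup(x, y, alpha=1.0):
    """Standard mixup: convex combination of a batch with a shuffled copy."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    # Train with lam * loss(pred, y) + (1 - lam) * loss(pred, y[perm]).
    return x_mixed, y, y[perm], lam
```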
1 code implementation • NeurIPS 2021 • Sungmin Cha, Beomyoung Kim, Youngjoon Yoo, Taesup Moon
While recent CISS algorithms employ variants of the knowledge distillation (KD) technique to tackle the problem, they fail to fully address the critical challenges in CISS that cause catastrophic forgetting: the semantic drift of the background class and the multi-label prediction issue.
Ranked #1 on Disjoint 15-5 on PASCAL VOC 2012
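For context, the generic output-distillation term that such KD variants build on can be sketched as follows (this is the vanilla form, not the paper's specific variant):

```python
import torch.nn.functional as F

def kd_loss(new_logits, old_logits, T=2.0):
    """Generic output distillation: KL divergence between softened old-model
    and new-model predictions. Logits: (batch, classes, H, W) scores."""
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
```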
no code implementations • ICML Workshop AML 2021 • Sungmin Cha, Naeun Ko, Youngjoon Yoo, Taesup Moon
We propose a novel and effective purification-based adversarial defense against pre-processor-blind white- and black-box attacks.
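Schematically, a purification defense runs the input through a purifier network before classification; a minimal sketch in which `purifier` and `classifier` are hypothetical pretrained modules:

```python
import torch

@torch.no_grad()
def defended_predict(x, purifier, classifier):
    """Purification defense: project the (possibly adversarial) input back
    toward the clean data manifold, then classify the purified input."""
    x_purified = purifier(x)   # e.g., a denoising/reconstruction network
    return classifier(x_purified).argmax(dim=1)
```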
1 code implementation • CVPR 2021 • Jaeseok Byun, Sungmin Cha, Taesup Moon
To that end, we propose the Fast Blind Image Denoiser (FBI-Denoiser) for Poisson-Gaussian noise, which consists of two neural network models: 1) PGE-Net, which estimates Poisson-Gaussian noise parameters 2000 times faster than conventional methods, and 2) FBI-Net, which realizes a much more efficient blind-spot network (BSN) for pixelwise affine denoising in terms of parameter count and inference speed.
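The two-parameter Poisson-Gaussian model that PGE-Net estimates can be simulated directly; a sketch (the names `alpha` and `sigma` follow the common convention and are not necessarily the paper's notation):

```python
import torch

def poisson_gaussian(x, alpha=0.01, sigma=0.02):
    """Simulate z = alpha * Poisson(x / alpha) + N(0, sigma^2) on a
    clean image x with values in [0, 1]."""
    shot = alpha * torch.poisson(x / alpha)   # signal-dependent part
    read = sigma * torch.randn_like(x)        # signal-independent part
    return shot + read
```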
1 code implementation • ICLR 2021 • Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio P. Calmon, Taesup Moon
Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds an additional regularization term that maximizes the entropy of a classifier's output probability.
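The entropy-maximizing regularizer is simple to write down; a sketch of the combined loss (the coefficient name `lam` is illustrative):

```python
import torch
import torch.nn.functional as F

def cpr_loss(logits, targets, lam=0.1):
    """Cross-entropy minus lam * entropy of the predictive distribution,
    so minimizing the total loss pushes the output entropy up."""
    ce = F.cross_entropy(logits, targets)
    p = F.softmax(logits, dim=1)
    entropy = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1).mean()
    return ce - lam * entropy
```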
no code implementations • NeurIPS 2020 • Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon
We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), which uses two group-sparsity-based penalties.
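As a reference point, a node-wise group-lasso penalty, the basic building block behind such group-sparsity penalties, can be sketched as follows (AGS-CL's two penalties differ in their details):

```python
import torch

def group_sparsity_penalty(weight, mu=0.1):
    """Group-lasso penalty: sum of L2 norms of per-output-node weight groups.
    weight: (out_features, in_features) of a linear layer."""
    return mu * weight.norm(dim=1).sum()
```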
1 code implementation • NeurIPS 2019 • Hongjoon Ahn, Sungmin Cha, DongGyu Lee, Taesup Moon
We introduce a new neural-network-based continual learning algorithm, dubbed Uncertainty-regularized Continual Learning (UCL), which builds on the traditional Bayesian online learning framework with variational inference.
Ranked #11 on Continual Learning on ASC (19 tasks)
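Schematically, uncertainty-based regularization holds low-uncertainty (important) weights close to their previous-task values; a sketch in which `sigma`, the per-weight posterior standard deviation, is an assumed interface rather than UCL's exact formulation:

```python
import torch

def uncertainty_reg(weight, weight_old, sigma, lam=1.0):
    """Penalize drift from the previous task's weights, weighted by the
    inverse uncertainty: certain (small-sigma) weights are held tighter."""
    return lam * ((weight - weight_old).pow(2) / sigma.pow(2)).sum()
```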
1 code implementation • ICLR 2021 • Sungmin Cha, TaeEon Park, Byeongjoon Kim, Jongduk Baek, Taesup Moon
We tackle a challenging blind image denoising problem, in which only single, distinct noisy images are available for training a denoiser, and nothing is known about the noise except that it is zero-mean, additive, and independent of the clean image.
no code implementations • 7 Feb 2019 • Sunghwan Joo, Sungmin Cha, Taesup Moon
We propose DoPAMINE, a new neural-network-based despeckling algorithm for multiplicative noise.
2 code implementations • ICCV 2019 • Sungmin Cha, Taesup Moon
We propose a new image denoising algorithm, dubbed Fully Convolutional Adaptive Image DEnoiser (FC-AIDE), which can learn from an offline supervised training set with a fully convolutional neural network and adaptively fine-tune the supervised model for each given noisy image.
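The pretrain-then-adapt recipe can be sketched as below; `self_supervised_loss` is a placeholder for the paper's adaptive fine-tuning objective, which is not reproduced here:

```python
import copy
import torch

def adapt_to_image(pretrained, noisy_image, self_supervised_loss,
                   steps=50, lr=1e-4):
    """Adaptive fine-tuning: start from the supervised model and take a few
    gradient steps on a loss computed from the given noisy image alone."""
    model = copy.deepcopy(pretrained)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = self_supervised_loss(model(noisy_image), noisy_image)
        loss.backward()
        opt.step()
    return model
```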
no code implementations • 17 Sep 2017 • Sungmin Cha, Taesup Moon
We propose a new grayscale image denoiser, dubbed Neural Affine Image Denoiser (Neural AIDE), which utilizes a neural network in a novel way.
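The "affine" in the name refers to a per-pixel affine reconstruction; a schematic sketch, assuming the network outputs slope and intercept maps (the names `a` and `b` are illustrative):

```python
import torch

def affine_denoise(net, noisy):
    """Per-pixel affine denoising: the network outputs (a, b) per pixel and
    the reconstruction is a * noisy + b."""
    a, b = net(noisy)   # each shaped like the input image
    return a * noisy + b
```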