no code implementations • 24 Feb 2019 • Hongjoon Ahn, Taesup Moon
We propose a novel iterative channel estimation (ICE) algorithm that essentially removes the critical known-noisy-channel assumption from the universal discrete denoising problem.
no code implementations • ICCV 2021 • Hongjoon Ahn, Jihwan Kwak, Subin Lim, Hyeonsu Bang, Hyojun Kim, Taesup Moon
To that end, we show through analysis that computing the softmax probabilities by combining the output scores for all old and new classes can be the main cause of the bias.
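The bias mechanism described above can be illustrated with a minimal sketch (the function names and the "separated softmax" variant here are illustrative, not the paper's actual code): when recently trained new-class logits are systematically inflated, a single softmax over the concatenated scores suppresses old-class probabilities, whereas normalizing each group separately avoids that crowding-out.

```python
import numpy as np

def combined_softmax(old_scores, new_scores):
    # Single softmax over the concatenation of old- and new-class scores.
    # If new-class scores are systematically larger (e.g. the model was
    # just trained on the new classes), old-class probability mass is
    # suppressed, biasing predictions toward new classes.
    s = np.concatenate([old_scores, new_scores])
    e = np.exp(s - s.max())
    return e / e.sum()

def separated_softmax(old_scores, new_scores):
    # One way to hedge against the bias: normalize each group
    # separately so inflated new-class logits cannot crowd out
    # old-class probabilities.
    def softmax(s):
        e = np.exp(s - s.max())
        return e / e.sum()
    return softmax(old_scores), softmax(new_scores)

old = np.array([2.0, 1.0])   # scores for previously learned classes
new = np.array([5.0, 4.5])   # inflated scores for newly learned classes
p = combined_softmax(old, new)
# Old classes receive only a small fraction of the combined probability mass.
```

This is only a toy demonstration of the failure mode; the paper's proposed remedy should be taken from the paper itself.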
no code implementations • NeurIPS 2020 • Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon
We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), which uses two group-sparsity-based penalties.
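As a generic illustration of the kind of penalty the entry refers to (a standard group-lasso term, not AGS-CL's exact objective), a group-sparsity penalty sums the L2 norms of whole parameter groups, so minimizing it drives entire groups (e.g. all weights of one node or filter) to exactly zero rather than sparsifying individual weights:

```python
import numpy as np

def group_sparsity_penalty(groups, lam=1.0):
    # Group-lasso style penalty: lam * sum of L2 norms of each
    # parameter group. Because the L2 norm is non-differentiable at
    # zero, minimization tends to zero out entire groups, which
    # regularization-based continual learning methods can exploit to
    # freeze important nodes or free unused capacity across tasks.
    return lam * sum(np.linalg.norm(g) for g in groups)

# Example: one active group and one all-zero group.
penalty = group_sparsity_penalty(
    [np.array([3.0, 4.0]), np.array([0.0, 0.0])]
)
# -> 5.0 (only the nonzero group contributes its norm)
```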
no code implementations • 8 Mar 2024 • Hongjoon Ahn, Jinu Hyeon, Youngmin Oh, Bosun Hwang, Taesup Moon
We argue that one of the main obstacles to developing effective Continual Reinforcement Learning (CRL) algorithms is the negative transfer that occurs when a new task to learn arrives.
1 code implementation • 22 Jun 2022 • Hongjoon Ahn, Yongyi Yang, Quan Gan, Taesup Moon, David Wipf
Moreover, the complexity of this trade-off is compounded in the heterogeneous graph case due to the disparate heterophily relationships between nodes of different types.
2 code implementations • NeurIPS 2019 • Hongjoon Ahn, Sungmin Cha, DongGyu Lee, Taesup Moon
We introduce a new neural network-based continual learning algorithm, dubbed Uncertainty-regularized Continual Learning (UCL), which builds on the traditional Bayesian online learning framework with variational inference.
Ranked #11 on Continual Learning on ASC (19 tasks)
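To give a flavor of uncertainty-based regularization in general (this is a hedged, generic sketch, not the UCL objective from the paper), a variational posterior assigns each weight a standard deviation; weights the model is certain about (small sigma) can be penalized more strongly for drifting from their previous-task values:

```python
import numpy as np

def uncertainty_regularized_loss(task_loss, w, w_prev, sigma, eps=1e-8):
    # Generic per-weight adaptive regularizer (illustrative only):
    # sigma is the posterior std of each weight from variational
    # inference. Small sigma = the model is confident about that
    # weight, so changing it is penalized heavily, mitigating
    # catastrophic forgetting; large sigma leaves the weight free
    # to adapt to the new task.
    per_weight_strength = 1.0 / (sigma ** 2 + eps)
    reg = np.sum(per_weight_strength * (w - w_prev) ** 2)
    return task_loss + reg

# A confident weight (sigma=0.1) drifting by 1.0 costs ~100x more
# than an uncertain one (sigma=1.0) drifting by the same amount.
loss = uncertainty_regularized_loss(
    0.0,
    w=np.array([1.0, 1.0]),
    w_prev=np.array([0.0, 0.0]),
    sigma=np.array([1.0, 0.1]),
)
```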