1 code implementation • 2 Apr 2024 • Minhyuk Seo, Hyunseo Koh, Wonje Jeung, Minjae Lee, San Kim, Hankook Lee, Sungjun Cho, Sungik Choi, Hyunwoo Kim, Jonghyun Choi
Online continual learning suffers from underfitted solutions due to insufficient training when the model must be updated promptly (e.g., single-epoch training).
no code implementations • NeurIPS 2023 • Sungik Choi, Hankook Lee, Honglak Lee, Moontae Lee
Based on our observation that diffusion models can project any sample to an in-distribution sample with similar background information, we propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
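The idea behind Projection Regret can be sketched as follows: score a sample by its distance to its projection, then subtract the distance of a recursive projection to cancel distance attributable to non-semantic (background) information. This is a minimal sketch assuming a toy `project` function (a stand-in for the diffusion-based projection in the paper) and plain Euclidean distance; the real method uses a generative projection and a perceptual metric.

```python
import numpy as np

def project(x):
    # Hypothetical stand-in for the diffusion-based projection:
    # maps x onto the line y = x, playing the role of
    # "the nearest in-distribution sample".
    m = x.mean()
    return np.array([m, m])

def projection_regret(x, dist=lambda a, b: float(np.linalg.norm(a - b))):
    # PR(x) = d(x, P(x)) - d(P(x), P(P(x))):
    # the second term cancels distance due to non-semantic content,
    # since a projected sample should barely move under a second
    # projection.
    px = project(x)
    ppx = project(px)
    return dist(x, px) - dist(px, ppx)

in_dist = np.array([1.0, 1.0])  # lies on the toy "data manifold"
ood = np.array([3.0, -1.0])     # far from the manifold
assert projection_regret(ood) > projection_regret(in_dist)
```

With this toy projection the in-distribution point scores zero while the outlier scores high, which is the qualitative behavior the detector relies on.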
1 code implementation • 6 Oct 2023 • Junoh Kang, Jinyoung Choi, Sungik Choi, Bohyung Han
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM), which effectively addresses the tradeoff between quality control and fast sampling.
no code implementations • 7 Jan 2023 • Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee
Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources.
1 code implementation • 4 Nov 2022 • Dong Hoon Lee, Sungik Choi, Hyunwoo Kim, Sae-Young Chung
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization.
no code implementations • ICLR 2020 • Sungik Choi, Sae-Young Chung
Conventional out-of-distribution (OOD) detection schemes based on variational autoencoders or Random Network Distillation (RND) have been observed to assign lower uncertainty to OOD samples than to the target distribution.
1 code implementation • ICLR 2018 • Su Young Lee, Sungik Choi, Sae-Young Chung
We propose Episodic Backward Update (EBU), a novel deep reinforcement learning algorithm with direct value propagation.
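The core mechanism can be illustrated in tabular form: after an episode ends, targets are computed from the terminal transition backward, so a delayed reward propagates through the whole trajectory in a single sweep. This is a hedged sketch, not the paper's deep-RL implementation; the blending factor `beta` (a stand-in for EBU's diffusion factor) and the `episodic_backward_update` helper are illustrative names.

```python
import numpy as np

def episodic_backward_update(Q, episode, gamma=0.9, beta=0.5, lr=0.5):
    # `episode` is a list of (state, action, reward, next_state, done).
    T = len(episode)
    y = 0.0
    for t in range(T - 1, -1, -1):  # sweep the episode backward
        s, a, r, s_next, done = episode[t]
        if done:
            y = r
        else:
            q_next = Q[s_next].copy()
            a_next = episode[t + 1][1]  # action actually taken at s_next
            # Blend the backward-propagated target into the bootstrap
            # estimate before taking the max (the "diffusion" step).
            q_next[a_next] = beta * y + (1 - beta) * q_next[a_next]
            y = r + gamma * np.max(q_next)
        Q[s, a] += lr * (y - Q[s, a])
    return Q

# Three-state chain with a single reward at the end: one backward
# sweep already pushes value back to the initial state.
Q = np.zeros((3, 2))
episode = [(0, 0, 0.0, 1, False), (1, 0, 0.0, 2, False), (2, 0, 1.0, 2, True)]
Q = episodic_backward_update(Q, episode)
assert Q[0, 0] > 0.0
```

A one-step TD update on the same data would leave `Q[0, 0]` at zero until many repeated episodes, which is the slow forward propagation EBU is designed to avoid.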