no code implementations • 30 Dec 2024 • Sungik Choi, Sungwoo Park, Jaehoon Lee, SeungHyun Kim, Stanley Jungkyu Choi, Moontae Lee
Specifically, by viewing the autoencoder of a latent diffusion model (LDM) as a downsampling-upsampling kernel, HFI measures the extent of aliasing, a distortion of high-frequency information that appears in the reconstructed image.
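The core idea admits a compact illustration. Below is a minimal sketch of measuring high-frequency distortion through a downsampling-upsampling pipeline; plain bicubic resizing stands in for the LDM autoencoder, and the function names, blur filter, and distance are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def high_frequency_residual(x: torch.Tensor) -> torch.Tensor:
    """High-frequency component: image minus a low-pass (blurred) version."""
    # 3x3 average blur as a crude low-pass filter (illustrative choice).
    kernel = torch.ones(x.shape[1], 1, 3, 3, device=x.device) / 9.0
    low = F.conv2d(x, kernel, padding=1, groups=x.shape[1])
    return x - low

def aliasing_score(x: torch.Tensor, scale: int = 2) -> torch.Tensor:
    """Sketch of an HFI-style score: pass x through a downsampling-upsampling
    kernel (bicubic resizing as a stand-in for the LDM autoencoder) and
    measure how much its high-frequency content changed."""
    down = F.interpolate(x, scale_factor=1 / scale, mode="bicubic",
                         align_corners=False)
    recon = F.interpolate(down, size=x.shape[-2:], mode="bicubic",
                          align_corners=False)
    hf_orig = high_frequency_residual(x)
    hf_recon = high_frequency_residual(recon)
    # Per-image L2 distance between the two high-frequency maps.
    return (hf_orig - hf_recon).flatten(1).norm(dim=1)
```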
no code implementations • 27 Aug 2024 • Suhee Yoon, Sanghyu Yoon, Hankook Lee, Ye Seul Sim, Sungik Choi, Kyungeun Lee, Hye-Seung Cho, Woohyung Lim
Out-of-distribution (OOD) detection, which determines whether a given sample is part of the in-distribution (ID), has recently shown promising results through training with synthetic OOD datasets.
Out-of-Distribution Detection
no code implementations • 19 Aug 2024 • Jaehoon Lee, Hankook Lee, Sungik Choi, Sungjun Cho, Moontae Lee
When solving forecasting problems involving multiple time-series features, existing approaches often fall into one of two extremes, depending on whether they utilize inter-feature information: univariate and complete-multivariate models.
1 code implementation • CVPR 2024 • Minhyuk Seo, Hyunseo Koh, Wonje Jeung, Minjae Lee, San Kim, Hankook Lee, Sungjun Cho, Sungik Choi, Hyunwoo Kim, Jonghyun Choi
Online continual learning suffers from an underfitted solution due to insufficient training for a prompt model update (e.g., single-epoch training).
no code implementations • NeurIPS 2023 • Sungik Choi, Hankook Lee, Honglak Lee, Moontae Lee
Based on our observation that diffusion models can project any sample to an in-distribution sample with similar background information, we propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
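As a rough illustration of the projection idea, the sketch below scores a sample by its perceptual distance to its diffusion projection, debiased by the distance between one- and two-step projections; the `project` and `distance` callables (e.g., a partial noise-then-denoise pass and LPIPS) are assumed given, and the debiasing shown is a simplification, not the paper's full method.

```python
import torch

def projection_regret(x, project, distance):
    """Sketch of the Projection Regret idea.

    project:  callable mapping an image to its diffusion projection,
              i.e. partially noising and then denoising it (assumed given).
    distance: perceptual distance, e.g. LPIPS (assumed given).
    """
    px = project(x)     # project the sample onto the in-distribution
    ppx = project(px)   # project the projection again
    # Novelty = distance to projection, minus the "regret" an already
    # in-distribution image would incur under the same projection.
    return distance(x, px) - distance(px, ppx)
```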
1 code implementation • CVPR 2024 • Junoh Kang, Jinyoung Choi, Sungik Choi, Bohyung Han
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM), which effectively addresses the tradeoff between quality control and fast sampling.
no code implementations • 7 Jan 2023 • Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee
Despite their surprising zero-shot transfer performance, pre-training large-scale multimodal models is often prohibitively expensive, as it requires huge amounts of data and computing resources.
1 code implementation • 4 Nov 2022 • Dong Hoon Lee, Sungik Choi, Hyunwoo Kim, Sae-Young Chung
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization.
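A generic information-maximization criterion of the kind MIRA builds on can be written in a few lines; the sketch below computes the mutual information between soft cluster assignments and samples, and is an illustrative objective rather than the paper's exact fixed-point optimization.

```python
import torch

def mutual_information(assignments: torch.Tensor,
                       eps: float = 1e-8) -> torch.Tensor:
    """I(cluster; sample) = H(cluster marginal) - E[H(per-sample assignment)].

    `assignments` is a (batch, n_clusters) matrix of soft pseudo-labels.
    Maximizing this favors confident per-sample labels that stay balanced
    across clusters. A generic InfoMax criterion shown for illustration,
    not MIRA's exact optimization procedure.
    """
    p = assignments.clamp_min(eps)
    marginal = p.mean(dim=0)                            # cluster usage
    h_marginal = -(marginal * marginal.log()).sum()     # H(marginal)
    h_conditional = -(p * p.log()).sum(dim=1).mean()    # E[H(per-sample)]
    return h_marginal - h_conditional
```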
no code implementations • ICLR 2020 • Sungik Choi, Sae-Young Chung
Conventional out-of-distribution (OOD) detection schemes based on the variational autoencoder (VAE) or Random Network Distillation (RND) have been observed to assign lower uncertainty to OOD samples than to the target distribution.
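For context, a minimal RND-style uncertainty score looks like the sketch below: a trainable predictor is fitted (on in-distribution data) to a frozen random target network, and the prediction error is read as uncertainty; the architecture and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RNDUncertainty(nn.Module):
    """Sketch of the RND uncertainty referenced above (illustrative dims)."""
    def __init__(self, in_dim: int = 784, feat_dim: int = 128):
        super().__init__()
        self.target = nn.Linear(in_dim, feat_dim)      # fixed random network
        self.predictor = nn.Linear(in_dim, feat_dim)   # trained to match it
        for p in self.target.parameters():             # target stays frozen
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Higher error = higher (claimed) uncertainty; the observation above
        # is that OOD inputs can nonetheless receive LOWER scores than ID data.
        return (self.predictor(x) - self.target(x)).pow(2).sum(dim=1)
```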
1 code implementation • ICLR 2018 • Su Young Lee, Sungik Choi, Sae-Young Chung
We propose Episodic Backward Update (EBU), a novel deep reinforcement learning algorithm with direct value propagation.
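The backward propagation can be sketched as follows: targets are generated from the end of a sampled episode toward its start, so a terminal reward can traverse the whole episode in a single update. Shapes, indexing, and the diffusion coefficient beta follow the spirit of the algorithm but are simplified here.

```python
import numpy as np

def ebu_targets(rewards, q_next, actions_next, gamma=0.99, beta=0.5):
    """Sketch of Episodic Backward Update target generation.

    rewards:      (T,) rewards of one sampled episode
    q_next:       (T, n_actions) current Q-values at each successor state
    actions_next: (T-1,) action actually taken at each successor state
    beta:         diffusion coefficient mixing the backward-propagated
                  target into the bootstrap Q-table (simplified here)
    """
    T = len(rewards)
    q_tilde = q_next.copy()            # temporary target Q-table
    y = np.empty(T)
    y[T - 1] = rewards[T - 1]          # terminal step: no bootstrap
    for k in range(T - 2, -1, -1):     # walk the episode backwards
        # Overwrite the taken action's value with the propagated target.
        q_tilde[k, actions_next[k]] = (
            beta * y[k + 1] + (1 - beta) * q_tilde[k, actions_next[k]]
        )
        y[k] = rewards[k] + gamma * q_tilde[k].max()
    return y
```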