no code implementations • 15 Mar 2024 • Jin-Young Kim, Hyojun Go, Soonwoo Kwon, Hyun-Gyoon Kim
By organizing timesteps or noise levels into clusters and training on them in ascending order of difficulty, we create an order-aware training regime that progresses from easier to harder denoising tasks, deviating from the conventional approach of training diffusion models across all timesteps simultaneously.
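A minimal sketch of such a clustered curriculum in PyTorch, assuming an epsilon-prediction objective; the cluster boundaries, the pacing schedule (`epochs_per_stage`), and the `model(xt, t)` interface are illustrative assumptions rather than the authors' configuration, and low-noise timesteps are treated as the easy end here.

```python
import torch
import torch.nn.functional as F

# Hypothetical easy-to-hard curriculum over diffusion timesteps: partition
# [0, T) into clusters and unlock noisier (here treated as harder) clusters
# as training proceeds. Boundaries and pacing are illustrative placeholders.
T = 1000
CLUSTERS = [(0, 250), (250, 500), (500, 750), (750, 1000)]  # easy -> hard

def sample_timesteps(batch_size, epoch, epochs_per_stage=10):
    # Number of clusters currently unlocked by the curriculum schedule.
    unlocked = min(len(CLUSTERS), epoch // epochs_per_stage + 1)
    return torch.randint(0, CLUSTERS[unlocked - 1][1], (batch_size,))

def training_step(model, x0, epoch, alphas_cumprod):
    t = sample_timesteps(x0.shape[0], epoch)
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise   # standard forward process
    return F.mse_loss(model(xt, t), noise)        # epsilon-prediction loss
```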
1 code implementation • 14 Mar 2024 • Byeongjun Park, Hyojun Go, Jin-Young Kim, Sangmin Woo, Seokil Ham, Changick Kim
To achieve this, we employ a sparse mixture-of-experts within each transformer block to exploit semantic information and mitigate conflicts between tasks through parameter isolation.
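A minimal sketch of a sparse top-1 mixture-of-experts feed-forward layer in PyTorch; how routing uses semantic information is specific to the paper, so the plain linear router, the expert count, and the shapes below are assumptions that only illustrate the parameter-isolation mechanism.

```python
import torch
import torch.nn as nn

class SparseMoEFFN(nn.Module):
    """Generic top-1 sparse mixture-of-experts feed-forward layer.

    Each token is routed to a single expert MLP, so expert parameters stay
    isolated across tokens. The paper's semantic routing is not reproduced;
    this router is a plain linear gate for illustration.
    """
    def __init__(self, dim, hidden, num_experts=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (tokens, dim)
        gate = self.router(x).softmax(dim=-1)  # routing probabilities
        idx = gate.argmax(dim=-1)              # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                # Scale by the gate probability so the router gets gradient.
                out[mask] = expert(x[mask]) * gate[mask, e].unsqueeze(-1)
        return out
```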
1 code implementation • 26 Dec 2023 • Sangmin Woo, Byeongjun Park, Hyojun Go, Jin-Young Kim, Changick Kim
This work introduces HarmonyView, a simple yet effective diffusion sampling technique adept at decomposing two intricate aspects in single-image 3D generation: consistency and diversity.
2 code implementations • 11 Oct 2023 • Byeongjun Park, Sangmin Woo, Hyojun Go, Jin-Young Kim, Changick Kim
Diffusion models generate highly realistic images by learning a multi-step denoising process, naturally embodying the principles of multi-task learning (MTL).
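One way to make the MTL view concrete, as a hedged sketch: treat denoising over each timestep interval as its own task and track per-task losses, which any MTL weighting scheme could then combine; the grouping into `K` clusters below is illustrative, not the paper's method.

```python
import torch

# Viewing denoising at different timestep intervals as separate tasks:
# group timesteps into K clusters and compute a loss per cluster, ready
# to be combined by an MTL weighting scheme. Grouping is illustrative.
K, T = 5, 1000

def task_id(t):
    return t * K // T  # which timestep cluster ("task") each sample belongs to

def per_task_losses(pred, target, t):
    se = (pred - target).pow(2).flatten(1).mean(dim=1)  # per-sample MSE
    ids = task_id(t)
    losses = []
    for k in range(K):
        m = ids == k
        losses.append(se[m].mean() if m.any() else se.new_zeros(()))
    return losses  # e.g. weight and sum these with an MTL method
```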
1 code implementation • 23 Aug 2023 • Soonwoo Kwon, Sojung Kim, SeungHyun Lee, Jin-Young Kim, Suyeong An, Kyuseok Kim
Indeed, when naively training the diagnostic model on CAT response data, we observe that item profiles deviate significantly from the ground truth.
no code implementations • 8 Jun 2023 • Yunsung Lee, Jin-Young Kim, Hyojun Go, Myeongho Jeong, Shinhyeok Oh, Seungtaek Choi
In this paper, we address the performance degradation of efficient diffusion models by introducing Multi-architecturE Multi-Expert diffusion models (MEME).
no code implementations • 7 Jun 2023 • Jin-Young Kim, Soonwoo Kwon, Hyojun Go, Yunsung Lee, Seungtaek Choi
Self-supervised contrastive learning (CL) has achieved state-of-the-art performance in representation learning by minimizing the distance between positive pairs while maximizing that between negative ones.
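For reference, a standard InfoNCE-style contrastive loss in PyTorch; this is the generic formulation, not the specific variant studied in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss on two batches of paired embeddings.

    Matching rows of z1 and z2 are positive pairs; all other rows in the
    batch act as negatives. Positives are pulled together, negatives apart.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # pairwise cosine similarities
    labels = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, labels)        # positives on the diagonal
```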
1 code implementation • CVPR 2023 • Hyojun Go, Yunsung Lee, Jin-Young Kim, SeungHyun Lee, Myeongho Jeong, Hyun Seung Lee, Seungtaek Choi
To that end, the existing practice is to fine-tune the guidance models on labeled data corrupted with noise.
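A minimal sketch of that conventional recipe, assuming a time-conditioned classifier interface `classifier(x, t)` and a precomputed `alphas_cumprod` schedule (both assumptions for illustration): labeled images are corrupted with forward-process noise before the fine-tuning loss is computed.

```python
import torch
import torch.nn.functional as F

def finetune_step(classifier, x0, y, alphas_cumprod, T=1000):
    # Corrupt labeled images with the same forward diffusion noise the
    # sampler will produce, so the guidance model sees noisy inputs.
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    a = alphas_cumprod.to(x0.device)[t].view(-1, 1, 1, 1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)
    logits = classifier(xt, t)                 # classify the noisy image
    return F.cross_entropy(logits, y)
```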
no code implementations • 21 Apr 2020 • Jin-Young Kim, Remi Patriat, Jordan Kaplan, Oren Solomon, Noam Harel
In this paper, we propose a novel deep learning framework (referred to as DCN-Net) for fast, accurate, and robust patient-specific segmentation of deep cerebellar dentate and interposed nuclei on 7T diffusion MRI.
no code implementations • 7 Aug 2019 • Seongmin Hwang, Gwanghyun Yu, Cheolkon Jung, Jin-Young Kim
Although deep convolutional neural networks (CNNs) have achieved outstanding performance in image super-resolution (SR), their computational cost increases geometrically as CNN models get deeper and wider.
1 code implementation • 26 Jun 2019 • Reuben R Shamir, Yuval Duchin, Jin-Young Kim, Guillermo Sapiro, Noam Harel
The DC and cDC for automatic STN segmentation were 0.66 and 0.80, respectively.
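For readers unfamiliar with the metric, a minimal sketch of the Dice coefficient (DC) on binary masks; the paper's cDC variant is defined there and is not reproduced here.

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) on binary masks.

    Returns 1.0 for perfect overlap; both-empty masks count as perfect.
    """
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0
```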
no code implementations • ICLR 2019 • Jin-Young Kim, Sung-Bae Cho
Unlike conventional GAN models, whose latent space follows a single hidden distribution, we explicitly define the latent distributions in advance and train the model to generate data with the corresponding features when fed latent variables drawn from those distributions.
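One reading of this setup, as a hedged sketch: fix an explicit latent distribution per feature in advance (Gaussians with distinct means here, purely as an assumption), then sample latents from the distribution matching the desired feature; `FEATURE_MEANS` and `generator` are hypothetical names, not the paper's interface.

```python
import torch

# Explicit, predefined latent distributions, one per feature: here,
# unit-variance Gaussians whose means encode the feature. Illustrative only.
FEATURE_MEANS = {"feature_a": -2.0, "feature_b": 0.0, "feature_c": 2.0}

def sample_latent(feature, batch_size, dim=128):
    mean = torch.full((batch_size, dim), FEATURE_MEANS[feature])
    return mean + torch.randn(batch_size, dim)  # z ~ N(mean, I) per feature

def generate(generator, feature, batch_size=16):
    z = sample_latent(feature, batch_size)
    return generator(z)  # feature is conveyed via z's distribution alone
```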
no code implementations • 31 Oct 2018 • Seongmin Hwang, Gwanghyun Yu, Huy Toan Nguyen, Nazeer Shahid, Doseong Sin, Jin-Young Kim, Seungyou Na
The proposed approach is tested on thermal images.