Search Results for author: Jin-Young Kim

Found 13 papers, 6 papers with code

Denoising Task Difficulty-based Curriculum for Training Diffusion Models

no code implementations 15 Mar 2024 Jin-Young Kim, Hyojun Go, Soonwoo Kwon, Hyun-Gyoon Kim

By organizing timesteps (noise levels) into clusters and training on them in ascending order of difficulty, we enable an order-aware training regime that progresses from easier to harder denoising tasks, departing from the conventional approach of training diffusion models across all timesteps simultaneously.
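
The clustering-and-ordering idea can be sketched as follows. The equal-size contiguous clusters, and the assumption that one end of the timestep range holds the easier denoising tasks, are illustrative choices for this sketch, not the paper's actual configuration.

```python
# Hypothetical sketch of a timestep-clustering curriculum for diffusion
# training. Cluster sizes and the difficulty ordering are assumptions.

def make_clusters(num_timesteps=1000, num_clusters=5):
    """Partition timesteps into contiguous, equal-size clusters."""
    size = num_timesteps // num_clusters
    return [list(range(i * size, (i + 1) * size)) for i in range(num_clusters)]

def curriculum_phases(clusters):
    """Yield a growing pool of timesteps, assumed-easiest cluster first.

    Each training phase adds the next-harder cluster, so the final
    phase covers all timesteps, as in conventional training.
    """
    pool = []
    for cluster in clusters:
        pool = pool + cluster
        yield list(pool)

clusters = make_clusters(num_timesteps=10, num_clusters=5)
phases = list(curriculum_phases(clusters))
# Phase 1 trains only on the first cluster; the last phase uses all timesteps.
```

A real training loop would sample `t` uniformly from the current phase's pool when forming the denoising loss.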

Denoising • Text-to-Image Generation

Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts

1 code implementation 14 Mar 2024 Byeongjun Park, Hyojun Go, Jin-Young Kim, Sangmin Woo, Seokil Ham, Changick Kim

To achieve this, we employ a sparse mixture-of-experts within each transformer block, exploiting semantic information and mitigating inter-task conflicts through parameter isolation.
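
A minimal top-1 sparse mixture-of-experts layer, sketched in NumPy, illustrates the parameter-isolation idea: each token is handled by only one expert's weights. The shapes, linear experts, and softmax gate here are illustrative assumptions, not the paper's block design.

```python
import numpy as np

def moe_forward(x, gate_w, experts):
    """Route each token to its top-1 expert, scaled by the gate probability."""
    logits = x @ gate_w                                    # (tokens, num_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)             # softmax gate
    top1 = probs.argmax(axis=-1)                           # chosen expert per token
    out = np.zeros_like(x)
    for e, w in enumerate(experts):
        mask = top1 == e                                   # parameter isolation:
        if mask.any():                                     # only the selected
            out[mask] = (x[mask] @ w) * probs[mask, e][:, None]  # expert runs
    return out

rng = np.random.default_rng(0)
tokens, dim, num_experts = 8, 16, 4
x = rng.standard_normal((tokens, dim))
gate_w = rng.standard_normal((dim, num_experts))
experts = [rng.standard_normal((dim, dim)) for _ in range(num_experts)]
y = moe_forward(x, gate_w, experts)
```

Because only one expert's parameters touch each token, gradients for conflicting tasks flow into disjoint weight sets.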

Denoising • Multi-Task Learning

HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D

1 code implementation 26 Dec 2023 Sangmin Woo, Byeongjun Park, Hyojun Go, Jin-Young Kim, Changick Kim

This work introduces HarmonyView, a simple yet effective diffusion sampling technique adept at decomposing two intricate aspects in single-image 3D generation: consistency and diversity.

3D Generation • Image to 3D

Denoising Task Routing for Diffusion Models

2 code implementations 11 Oct 2023 Byeongjun Park, Sangmin Woo, Hyojun Go, Jin-Young Kim, Changick Kim

Diffusion models generate highly realistic images by learning a multi-step denoising process, naturally embodying the principles of multi-task learning (MTL).
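
Treating each denoising step as its own task can be made concrete with a toy per-timestep routing mask: a sliding window of active channels that shifts with the timestep, so nearby timesteps share parameters while distant ones are partly isolated. The window construction below is an illustrative assumption, not the paper's actual routing scheme.

```python
import numpy as np

def task_mask(t, num_timesteps, channels, active_frac=0.5):
    """Binary channel mask whose active window slides with timestep t.

    Hypothetical sketch: adjacent timesteps get heavily overlapping
    windows (shared parameters); far-apart timesteps overlap little.
    """
    n_active = int(channels * active_frac)
    start = int((channels - n_active) * t / max(num_timesteps - 1, 1))
    mask = np.zeros(channels)
    mask[start:start + n_active] = 1.0
    return mask

m_first = task_mask(0, 1000, 8)    # earliest timestep: first half of channels
m_last = task_mask(999, 1000, 8)   # latest timestep: last half of channels
```

Such a mask would multiply a layer's activations, giving each denoising "task" a distinct but overlapping slice of the network at no extra parameter cost.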

Denoising • Multi-Task Learning

Addressing Selection Bias in Computerized Adaptive Testing: A User-Wise Aggregate Influence Function Approach

1 code implementation 23 Aug 2023 Soonwoo Kwon, Sojung Kim, SeungHyun Lee, Jin-Young Kim, Suyeong An, Kyuseok Kim

Indeed, when naively training the diagnostic model on CAT response data, we observe that item profiles deviate significantly from the ground truth.

Selection bias

Multi-Architecture Multi-Expert Diffusion Models

no code implementations 8 Jun 2023 Yunsung Lee, Jin-Young Kim, Hyojun Go, Myeongho Jeong, Shinhyeok Oh, Seungtaek Choi

In this paper, we address the performance degradation of efficient diffusion models by introducing Multi-architecturE Multi-Expert diffusion models (MEME).

Denoising • Image Generation

ScoreCL: Augmentation-Adaptive Contrastive Learning via Score-Matching Function

no code implementations 7 Jun 2023 Jin-Young Kim, Soonwoo Kwon, Hyojun Go, Yunsung Lee, Seungtaek Choi

Self-supervised contrastive learning (CL) has achieved state-of-the-art performance in representation learning by minimizing the distance between positive pairs while maximizing the distance between negative pairs.
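
The pull-together / push-apart objective can be sketched with a plain InfoNCE-style loss in NumPy. This is generic contrastive learning, not ScoreCL's score-matching-based augmentation weighting.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE loss where (z1[i], z2[i]) are positive pairs.

    Every other row of z2 serves as a negative for z1[i]. Cosine
    similarities are bounded by 1/temperature, so the plain softmax
    below is numerically safe without max-subtraction.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                 # (N, N) similarity matrix
    # Row i's positive sits on the diagonal; take softmax cross-entropy per row.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # non-negative by construction

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 8))                   # 4 embeddings of dim 8
loss = info_nce(z, z)                             # perfectly aligned views
```

Minimizing this loss pulls each positive pair's similarity up relative to all negatives in the batch.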

Contrastive Learning • Representation Learning

Deep Cerebellar Nuclei Segmentation via Semi-Supervised Deep Context-Aware Learning from 7T Diffusion MRI

no code implementations 21 Apr 2020 Jin-Young Kim, Remi Patriat, Jordan Kaplan, Oren Solomon, Noam Harel

In this paper, we propose a novel deep learning framework (referred to as DCN-Net) for fast, accurate, and robust patient-specific segmentation of deep cerebellar dentate and interposed nuclei on 7T diffusion MRI.

Segmentation

Attention-Aware Linear Depthwise Convolution for Single Image Super-Resolution

no code implementations 7 Aug 2019 Seongmin Hwang, Gwanghuyn Yu, Cheolkon Jung, Jin-Young Kim

Although deep convolutional neural networks (CNNs) have obtained outstanding performance in image super-resolution (SR), their computational cost increases geometrically as CNN models get deeper and wider.
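
The cost motivation behind depthwise designs is easy to see in a back-of-the-envelope parameter count comparing a standard convolution with a depthwise-separable one. The channel and kernel sizes below are illustrative, and this is the generic depthwise-separable factorization rather than the paper's attention-aware variant.

```python
# Parameter counts for a k x k convolution layer (bias terms omitted).

def standard_conv_params(c_in, c_out, k):
    """Standard convolution: every output channel sees every input channel."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel) + 1x1 pointwise mix."""
    return c_in * k * k + c_in * c_out

std = standard_conv_params(64, 64, 3)         # 64 * 64 * 9  = 36864
sep = depthwise_separable_params(64, 64, 3)   # 576 + 4096   = 4672
ratio = std / sep                             # roughly 7.9x fewer parameters
```

The same factorization applies to multiply-accumulate counts, which is why depthwise convolutions are a common lever against the growth the abstract describes.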

Image Super-Resolution

Learning Latent Semantic Representation from Pre-defined Generative Model

no code implementations ICLR 2019 Jin-Young Kim, Sung-Bae Cho

Unlike conventional GAN models, whose latent-space distribution is hidden, we explicitly define the distributions in advance and train the model to generate data with the corresponding features from latent variables drawn from those distributions.
