Search Results for author: Hyojun Go

Found 16 papers, 5 papers with code

Denoising Task Difficulty-based Curriculum for Training Diffusion Models

no code implementations • 15 Mar 2024 • Jin-Young Kim, Hyojun Go, Soonwoo Kwon, Hyun-Gyoon Kim

By organizing timesteps or noise levels into clusters and training on them in order of increasing difficulty, we obtain an order-aware training regime that progresses from easier to harder denoising tasks, deviating from the conventional approach of training diffusion models on all timesteps simultaneously.

Denoising · Text-to-Image Generation
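The cluster-then-order idea in the excerpt can be sketched as follows. The number of clusters, the contiguous partitioning, and the assumption that high-noise timesteps are the "easy" end are all illustrative choices here, not the paper's actual difficulty measure or schedule:

```python
import numpy as np

def make_curriculum(num_timesteps: int, num_clusters: int):
    """Split timesteps [0, num_timesteps) into contiguous clusters."""
    clusters = np.array_split(np.arange(num_timesteps), num_clusters)
    # Illustrative assumption: larger timesteps (more noise) are the
    # easier denoising tasks, so the curriculum starts from the last
    # cluster and works toward the first.
    return clusters[::-1]

curriculum = make_curriculum(num_timesteps=1000, num_clusters=4)
for stage, cluster in enumerate(curriculum):
    # In real training, each stage would sample t from the clusters
    # unlocked so far rather than just this one.
    print(f"stage {stage}: timesteps {cluster.min()}..{cluster.max()}")
```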

Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts

1 code implementation • 14 Mar 2024 • Byeongjun Park, Hyojun Go, Jin-Young Kim, Sangmin Woo, Seokil Ham, Changick Kim

To achieve this, we employ a sparse mixture-of-experts within each transformer block to utilize semantic information and facilitate handling conflicts in tasks through parameter isolation.

Denoising · Multi-Task Learning
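A minimal sketch of the generic mechanism the excerpt names, top-1 sparse mixture-of-experts routing, where each input activates only one expert's parameters. The dimensions, expert count, and top-1 rule are illustrative assumptions, not the actual Switch-DiT architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_experts = 8, 4
W_gate = rng.normal(size=(d, num_experts))        # gating weights
W_experts = rng.normal(size=(num_experts, d, d))  # one expert FFN each

def switch_layer(x: np.ndarray) -> np.ndarray:
    """Route each row of x to its single highest-scoring expert."""
    logits = x @ W_gate
    choice = logits.argmax(axis=-1)               # top-1 (sparse) routing
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    out = np.empty_like(x)
    for i, e in enumerate(choice):
        # Scale by the gate probability so the routing decision would
        # stay differentiable in a real autodiff implementation.
        out[i] = probs[i, e] * (x[i] @ W_experts[e])
    return out

tokens = rng.normal(size=(5, d))
print(switch_layer(tokens).shape)  # (5, 8)
```

Because each token touches only one expert's weights, the experts' parameters are isolated, which is the property the abstract relies on for reducing inter-task conflict.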

HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D

1 code implementation • 26 Dec 2023 • Sangmin Woo, Byeongjun Park, Hyojun Go, Jin-Young Kim, Changick Kim

This work introduces HarmonyView, a simple yet effective diffusion sampling technique adept at decomposing two intricate aspects in single-image 3D generation: consistency and diversity.

3D Generation · Image to 3D

Denoising Task Routing for Diffusion Models

2 code implementations • 11 Oct 2023 • Byeongjun Park, Sangmin Woo, Hyojun Go, Jin-Young Kim, Changick Kim

Diffusion models generate highly realistic images by learning a multi-step denoising process, naturally embodying the principles of multi-task learning (MTL).

Denoising · Multi-Task Learning
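One common way to realize per-task routing in a shared network is a binary channel mask per denoising task. The sketch below shows that general idea; the random mask construction is a simplified assumption, not the paper's actual routing scheme:

```python
import numpy as np

def make_task_masks(num_tasks: int, num_channels: int,
                    active_ratio: float = 0.5, seed: int = 0):
    """One binary channel mask per task over a shared feature space."""
    rng = np.random.default_rng(seed)
    n_active = int(num_channels * active_ratio)
    masks = np.zeros((num_tasks, num_channels))
    for t in range(num_tasks):
        idx = rng.choice(num_channels, size=n_active, replace=False)
        masks[t, idx] = 1.0
    return masks

masks = make_task_masks(num_tasks=4, num_channels=16)
features = np.ones((4, 16))        # one feature row per task
routed = features * masks          # zero out inactive channels per task
print(routed.sum(axis=1))          # 8 active channels for each task
```

Tasks that share mask entries share parameters; tasks with disjoint entries use separate sub-networks, giving MTL-style specialization without extra modules.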

Multi-Architecture Multi-Expert Diffusion Models

no code implementations • 8 Jun 2023 • Yunsung Lee, Jin-Young Kim, Hyojun Go, Myeongho Jeong, Shinhyeok Oh, Seungtaek Choi

In this paper, we address the performance degradation of efficient diffusion models by introducing Multi-architecturE Multi-Expert diffusion models (MEME).

Denoising · Image Generation

ScoreCL: Augmentation-Adaptive Contrastive Learning via Score-Matching Function

no code implementations • 7 Jun 2023 • Jin-Young Kim, Soonwoo Kwon, Hyojun Go, Yunsung Lee, Seungtaek Choi

Self-supervised contrastive learning (CL) has achieved state-of-the-art performance in representation learning by minimizing the distance between positive pairs while maximizing that of negative ones.

Contrastive Learning · Representation Learning
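The pull-positives/push-negatives objective described here is commonly written as an InfoNCE loss. The sketch below shows that generic contrastive loss, not ScoreCL's score-matching-based augmentation adaptation:

```python
import numpy as np

def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.1) -> float:
    """z1[i] and z2[i] are embeddings of two views of sample i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                  # temperature-scaled cosine sims
    # Row i's positive is the diagonal entry; every other column in the
    # row acts as an in-batch negative.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
# Perfectly aligned views give a near-zero loss; unrelated views do not.
print(info_nce(z, z) < info_nce(z, rng.normal(size=(8, 32))))  # True
```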

Addressing Negative Transfer in Diffusion Models

1 code implementation • NeurIPS 2023 • Hyojun Go, Jinyoung Kim, Yunsung Lee, SeungHyun Lee, Shinhyeok Oh, Hyeongdon Moon, Seungtaek Choi

Through this, our approach addresses the issue of negative transfer in diffusion models by allowing for efficient computation of MTL methods.

Clustering · Denoising +1

Towards Flexible Inductive Bias via Progressive Reparameterization Scheduling

no code implementations • 4 Oct 2022 • Yunsung Lee, Gyuseong Lee, Kwangrok Ryoo, Hyojun Go, JiHye Park, Seungryong Kim

In addition, through Fourier analysis of feature maps, which reveals the model's response patterns as signal frequency changes, we observe which inductive bias is advantageous at each data scale.

Inductive Bias · Scheduling

Bridging Implicit and Explicit Geometric Transformation for Single-Image View Synthesis

no code implementations • 15 Sep 2022 • Byeongjun Park, Hyojun Go, Changick Kim

Although recent methods generate high-quality novel views, synthesizing with only one explicit or implicit 3D geometry has a trade-off between two objectives that we call the "seesaw" problem: 1) preserving reprojected contents and 2) completing realistic out-of-view regions.

Geometrically Adaptive Dictionary Attack on Face Recognition

no code implementations • 8 Nov 2021 • Junyoung Byun, Hyojun Go, Changick Kim

We apply the GADA strategy to two existing attack methods and show substantial performance improvements in experiments on the LFW and CPLFW datasets.

3D Face Alignment · Face Alignment +1

Residual-Guided Learning Representation for Self-Supervised Monocular Depth Estimation

no code implementations • 8 Nov 2021 • Byeongjun Park, Taekyung Kim, Hyojun Go, Changick Kim

In this paper, we propose residual guidance loss that enables the depth estimation network to embed the discriminative feature by transferring the discriminability of auto-encoded features.

Monocular Depth Estimation · Self-Supervised Learning

On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks

no code implementations • 13 Jan 2021 • Junyoung Byun, Hyojun Go, Changick Kim

In this paper, we examine an implicit assumption of query-based black-box adversarial attacks: that the target model's output corresponds exactly to the query input.
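The defense built on breaking that assumption, adding small random noise to every queried input, can be sketched as follows. The stand-in model and noise scale here are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x: np.ndarray) -> np.ndarray:
    """Stand-in classifier: fixed linear logits over 3 classes."""
    W = np.arange(x.size).reshape(-1, 1) @ np.ones((1, 3))
    return x @ W

def defended_query(x: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    # The defender perturbs the *input* before inference, so repeated
    # identical queries no longer map to identical outputs, while clean
    # accuracy degrades only slightly for small sigma.
    return model(x + rng.normal(scale=sigma, size=x.shape))

x = np.ones(4)
# Two identical queries now return slightly different logits, breaking
# the exact input-output correspondence that query-based attacks assume.
print(np.allclose(defended_query(x), defended_query(x)))  # False
```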
