Search Results for author: Gyuseong Lee

Found 9 papers, 6 papers with code

Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters

1 code implementation • 17 Oct 2023 • Gyuseong Lee, Wooseok Jang, Jin Hyeon Kim, Jaewoo Jung, Seungryong Kim

By using both PEFT and MoA methods, we effectively alleviate the performance deterioration caused by distribution shifts and achieve state-of-the-art performance on diverse DG benchmarks.

Domain Generalization
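
The summary above combines parameter-efficient fine-tuning (PEFT) with a mixture-of-adapters (MoA). A minimal sketch of such a layer is below; the module names, bottleneck size, and gating scheme are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AdapterExpert(nn.Module):
    """One bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return self.up(self.act(self.down(x)))

class MixtureOfAdapters(nn.Module):
    """Gated mixture over several adapters, added residually to the
    features of a frozen pretrained backbone (a PEFT-style update)."""
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([AdapterExpert(dim) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x):
        # x: (batch, tokens, dim) features from the frozen backbone
        weights = torch.softmax(self.gate(x), dim=-1)                   # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, E)
        mixed = (expert_out * weights.unsqueeze(2)).sum(-1)             # (B, T, D)
        return x + mixed  # residual update; the backbone stays frozen
```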

Diffusion Model for Dense Matching

1 code implementation • 30 May 2023 • Jisu Nam, Gyuseong Lee, Sunwoo Kim, Hyeonsu Kim, Hyoungwon Cho, Seyeon Kim, Seungryong Kim

The objective for establishing dense correspondence between paired images consists of two terms: a data term and a prior term.

Denoising
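
The two-term objective mentioned above is the standard energy formulation for dense matching. A generic form, with notation assumed here rather than taken from the paper (F is the correspondence field, I_s and I_t the paired images, lambda a balancing weight), is:

```latex
E(F) = \underbrace{E_{\mathrm{data}}(F;\, I_s, I_t)}_{\text{feature/photometric agreement}}
     + \lambda \, \underbrace{E_{\mathrm{prior}}(F)}_{\text{regularity of the field}}
```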

Towards Flexible Inductive Bias via Progressive Reparameterization Scheduling

no code implementations • 4 Oct 2022 • Yunsung Lee, Gyuseong Lee, Kwangrok Ryoo, Hyojun Go, JiHye Park, Seungryong Kim

In addition, through Fourier analysis of feature maps, which shows how the model's response patterns change with signal frequency, we observe which inductive bias is advantageous for each data scale.

Inductive Bias • Scheduling
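
The Fourier analysis mentioned in the summary can be reproduced in a few lines. This is a generic sketch of a log-amplitude spectrum of a feature map, not the paper's exact procedure.

```python
import torch

def log_amplitude_spectrum(feat: torch.Tensor) -> torch.Tensor:
    """Log-amplitude 2D Fourier spectrum of a feature map.

    feat: (batch, channels, height, width) activations.
    Returns an (height, width) spectrum averaged over batch and channels,
    with the zero-frequency component shifted to the center.
    """
    spec = torch.fft.fft2(feat)                  # complex spectrum per channel
    spec = torch.fft.fftshift(spec, dim=(-2, -1))
    amplitude = spec.abs().mean(dim=(0, 1))      # average over batch and channels
    return torch.log1p(amplitude)
```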

Improving Sample Quality of Diffusion Models Using Self-Attention Guidance

4 code implementations • ICCV 2023 • Susung Hong, Gyuseong Lee, Wooseok Jang, Seungryong Kim

Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity.

Denoising • Image Generation

MIDMs: Matching Interleaved Diffusion Models for Exemplar-based Image Translation

1 code implementation • 22 Sep 2022 • Junyoung Seo, Gyuseong Lee, Seokju Cho, Jiyoung Lee, Seungryong Kim

Specifically, we formulate a diffusion-based matching-and-generation framework that interleaves cross-domain matching and diffusion steps in the latent space by iteratively feeding the intermediate warp into the noising process and denoising it to generate a translated image.

Denoising • Translation
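
The interleaved matching-and-generation loop described in the summary can be outlined as pseudocode. The `matcher` and `diffusion` objects and their methods are hypothetical stand-ins for the paper's components, not its actual API.

```python
def midm_translate(source, exemplar, matcher, diffusion, num_steps):
    """Sketch of an interleaved matching/diffusion loop (assumed API):
    each iteration warps the exemplar with the current correspondence,
    noises the warp, and denoises it one step in latent space."""
    latent = diffusion.encode(exemplar)             # start from the exemplar latent
    for t in reversed(range(num_steps)):
        flow = matcher.match(source, latent)        # cross-domain correspondence
        warped = matcher.warp(latent, flow)         # intermediate warp
        noised = diffusion.add_noise(warped, t)     # feed the warp into the noising process
        latent = diffusion.denoise_step(noised, t)  # one reverse diffusion step
    return diffusion.decode(latent)                 # translated image
```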

ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization

1 code implementation • 18 Aug 2022 • Jiwon Kim, Youngjo Min, Daehwan Kim, Gyuseong Lee, Junyoung Seo, Kwangrok Ryoo, Seungryong Kim

We present a novel semi-supervised learning framework, dubbed ConMatch, that intelligently leverages the consistency regularization between the model's predictions from two strongly-augmented views of an image, weighted by the confidence of the pseudo-label.

Pseudo Label
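
A minimal sketch of a confidence-weighted consistency loss in the spirit of the summary above; ConMatch additionally learns the confidence estimator itself, which this sketch takes as given.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_consistency(logits_a, logits_b, confidence):
    """Consistency loss between predictions from two strongly-augmented
    views, weighted per sample by pseudo-label confidence.

    logits_a, logits_b: (batch, classes) predictions for the two views.
    confidence: (batch,) confidence of each sample's pseudo-label in [0, 1].
    """
    pseudo = logits_a.detach().argmax(dim=-1)  # pseudo-label from view A
    per_sample = F.cross_entropy(logits_b, pseudo, reduction="none")
    return (confidence * per_sample).mean()
```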

Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels

no code implementations • CVPR 2022 • Jiwon Kim, Kwangrok Ryoo, Junyoung Seo, Gyuseong Lee, Daehwan Kim, Hansang Cho, Seungryong Kim

In this paper, we present a simple but effective solution for semantic correspondence, called SemiMatch, that learns the networks in a semi-supervised manner by supplementing the few ground-truth correspondences with a large number of confident correspondences used as pseudo-labels.

Data Augmentation • Semantic Correspondence +1
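
Selecting confident correspondences as pseudo-labels can be sketched as a simple thresholding step over a correlation volume; this generic sketch is an assumption about the general technique, not SemiMatch's exact selection rule.

```python
import torch

def select_confident_correspondences(corr_volume, threshold=0.9):
    """Keep only high-confidence matches as pseudo-labels.

    corr_volume: (batch, source_points, target_points) matching scores.
    Returns the best target index per source point and a boolean mask
    marking which matches are confident enough to keep.
    """
    prob = torch.softmax(corr_volume, dim=-1)  # matching distribution per source point
    conf, idx = prob.max(dim=-1)               # best target match and its confidence
    mask = conf > threshold                    # keep only confident correspondences
    return idx, mask
```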

AggMatch: Aggregating Pseudo Labels for Semi-Supervised Learning

no code implementations • 25 Jan 2022 • Jiwon Kim, Kwangrok Ryoo, Gyuseong Lee, Seokju Cho, Junyoung Seo, Daehwan Kim, Hansang Cho, Seungryong Kim

In this paper, we address this limitation with a novel SSL framework for aggregating pseudo labels, called AggMatch, which refines initial pseudo labels by using different confident instances.

Pseudo Label
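
Refining a pseudo label by aggregating over other confident instances can be sketched as below; the similarity-based neighbor aggregation here is an illustrative scheme in the spirit of the summary above, not AggMatch's exact one.

```python
import torch
import torch.nn.functional as F

def aggregate_pseudo_labels(probs, features, top_k=8):
    """Refine each sample's pseudo-label by averaging the predictions of
    its most similar instances (including itself).

    probs:    (batch, classes) initial class probabilities.
    features: (batch, dim) embeddings used to find similar instances.
    """
    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.t()                      # pairwise cosine similarity
    neighbors = sim.topk(top_k, dim=-1).indices  # top-k most similar instances
    refined = probs[neighbors].mean(dim=1)       # aggregate their predictions
    return refined
```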
