Search Results for author: Jisu Nam

Found 7 papers, 7 papers with code

DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization

1 code implementation · 15 Feb 2024 · Jisu Nam, Heesu Kim, Dongjae Lee, Siyoon Jin, Seungryong Kim, Seunggyu Chang

The objective of text-to-image (T2I) personalization is to customize a diffusion model to a user-provided reference concept, generating diverse images of the concept aligned with the target prompts.

Denoising
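
For context on the personalization setting described in the abstract above (generating a learned concept under new target prompts), the snippet below is a minimal, hedged sketch using the Hugging Face diffusers API. It does not reproduce DreamMatcher's appearance-matching self-attention; the checkpoint path and the placeholder token "sks" are assumptions.

```python
# Minimal sketch of T2I personalization inference (not the DreamMatcher pipeline).
# Assumes a DreamBooth-style checkpoint fine-tuned on the reference concept and
# bound to the placeholder token "sks"; the checkpoint path is hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/personalized-checkpoint",  # hypothetical personalized model
    torch_dtype=torch.float16,
).to("cuda")

# Target prompts that should keep the reference concept's appearance.
prompts = [
    "a photo of sks dog on the beach",
    "an oil painting of sks dog in a forest",
]
for p in prompts:
    image = pipe(p, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(p.replace(" ", "_") + ".png")
```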

Diffusion Model for Dense Matching

1 code implementation · 30 May 2023 · Jisu Nam, Gyuseong Lee, Sunwoo Kim, Hyeonsu Kim, Hyoungwon Cho, Seyeon Kim, Seungryong Kim

The objective for establishing dense correspondence between paired images consists of two terms: a data term and a prior term.

Denoising
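
A generic way to write this two-term objective (a sketch of the standard variational form, not the paper's exact notation) is:

```latex
% Energy for a dense correspondence field F between a source image I_s and a
% target image I_t: a data (matching) term plus a weighted prior (regularization) term.
F^{*} = \arg\min_{F} \; E_{\mathrm{data}}(F;\, I_{s}, I_{t}) \;+\; \lambda \, E_{\mathrm{prior}}(F)
```

Here E_data measures how well matched features agree under the field F, E_prior regularizes the field, and λ balances the two terms.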

DiffFace: Diffusion-based Face Swapping with Facial Guidance

1 code implementation · 27 Dec 2022 · Kihong Kim, Yunho Kim, Seokju Cho, Junyoung Seo, Jisu Nam, Kychul Lee, Seungryong Kim, Kwanghee Lee

In this paper, we propose DiffFace, the first diffusion-based face swapping framework, composed of training an ID-conditional DDPM, sampling with facial guidance, and target-preserving blending.

Face Swapping
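
As a hedged sketch of what "sampling with facial guidance" generally looks like (classifier-guidance-style steering with an identity loss, not DiffFace's exact procedure), the placeholder identity encoder and the toy tensors below are assumptions.

```python
# Sketch of identity-guided diffusion sampling (generic gradient guidance,
# not the exact DiffFace procedure). The identity encoder is a stand-in.
import torch
import torch.nn.functional as F
from torch import nn

class DummyIDEncoder(nn.Module):
    """Placeholder for a face identity encoder (e.g. an ArcFace-style network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def facial_guidance_step(x_t, x_prev, id_encoder, source_id, scale=0.5):
    """Nudge the next sample toward the source identity via the loss gradient."""
    x_t = x_t.detach().requires_grad_(True)
    id_pred = id_encoder(x_t)                                   # identity of current estimate
    loss = 1.0 - F.cosine_similarity(id_pred, source_id).mean() # identity mismatch
    grad = torch.autograd.grad(loss, x_t)[0]
    return x_prev - scale * grad                                # guided update

# Toy usage with random tensors standing in for diffusion states.
encoder = DummyIDEncoder()
source_id = encoder(torch.randn(1, 3, 64, 64)).detach()
x_t, x_prev = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
x_prev = facial_guidance_step(x_t, x_prev, encoder, source_id)
```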

Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence

1 code implementation · 6 Oct 2022 · Sunghwan Hong, Jisu Nam, Seokju Cho, Susung Hong, Sangryul Jeon, Dongbo Min, Seungryong Kim

Existing semantic correspondence pipelines commonly extract high-level semantic features for invariance against intra-class variations and background clutter.

Semantic Correspondence
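
To make "implicit representation of matching fields" concrete, here is a minimal, hedged sketch (not the paper's architecture): an MLP that maps a 4D coordinate, a point in the source image paired with a point in the target image, to a matching score.

```python
# Sketch of an implicit (coordinate-based) matching field: an MLP scores how
# well a source pixel (xs, ys) matches a target pixel (xt, yt). This is an
# illustrative stand-in, not the Neural Matching Fields architecture.
import torch
from torch import nn

class ImplicitMatchingField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        # coords: (N, 4) normalized (xs, ys, xt, yt) in [-1, 1]
        return self.mlp(coords).squeeze(-1)  # (N,) matching scores

field = ImplicitMatchingField()
coords = torch.rand(1024, 4) * 2 - 1   # random 4D query coordinates
scores = field(coords)                 # continuous samples of the matching field
```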

Cost Aggregation with 4D Convolutional Swin Transformer for Few-Shot Segmentation

1 code implementation · 22 Jul 2022 · Sunghwan Hong, Seokju Cho, Jisu Nam, Stephen Lin, Seungryong Kim

However, the tokenization of a correlation map for transformer processing can be detrimental, because the discontinuity at token boundaries reduces the local context available near the token edges and decreases inductive bias.

Few-Shot Semantic Segmentation · Inductive Bias · +1
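
To see what "tokenization of a correlation map" means in practice, the sketch below (an illustrative assumption, not the paper's code) splits a 4D query-support correlation volume into non-overlapping windows; positions near a window edge lose neighbors that fall into adjacent tokens, which is the boundary discontinuity the abstract refers to.

```python
# Sketch: tokenizing a correlation map into non-overlapping windows for a
# transformer. Illustrative only; not the paper's implementation.
import torch

B, Hq, Wq, Hs, Ws = 1, 16, 16, 16, 16
corr = torch.randn(B, Hq, Wq, Hs, Ws)        # query x support correlation volume

win = 4                                      # window (token) size on the query grid
tokens = corr.reshape(B, Hq // win, win, Wq // win, win, Hs * Ws)
tokens = tokens.permute(0, 1, 3, 2, 4, 5)    # (B, nH, nW, win, win, Hs*Ws)
tokens = tokens.flatten(1, 2).flatten(2, 3)  # (B, num_tokens, win*win, Hs*Ws)

# Each token now attends only within its own window; correlations of pixels on
# opposite sides of a window boundary never interact inside a single token.
print(tokens.shape)                          # torch.Size([1, 16, 16, 256])
```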

Cost Aggregation Is All You Need for Few-Shot Segmentation

2 code implementations · 22 Dec 2021 · Sunghwan Hong, Seokju Cho, Jisu Nam, Seungryong Kim

We introduce a novel cost aggregation network, dubbed Volumetric Aggregation with Transformers (VAT), to tackle the few-shot segmentation task by using both convolutions and transformers to efficiently handle high-dimensional correlation maps between query and support.

Few-Shot Semantic Segmentation · Inductive Bias · +2
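
For context on the "high-dimensional correlation maps" mentioned above, the sketch below (an illustrative assumption, not the VAT code) builds a 4D query-support correlation volume from deep features with a single einsum; a cost aggregation network then consumes volumes of this kind.

```python
# Sketch: building a 4D correlation volume between query and support features.
# Illustrative only; not the VAT implementation.
import torch
import torch.nn.functional as F

B, C, H, W = 2, 256, 32, 32
query_feat = F.normalize(torch.randn(B, C, H, W), dim=1)    # stand-in backbone features
support_feat = F.normalize(torch.randn(B, C, H, W), dim=1)

# Cosine similarity between every query location and every support location.
corr = torch.einsum('bchw,bcxy->bhwxy', query_feat, support_feat)
print(corr.shape)   # torch.Size([2, 32, 32, 32, 32]) -- a 4D volume per batch item
```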
