Search Results for author: Gihyun Kwon

Found 15 papers, 10 papers with code

Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models

no code implementations · 5 Apr 2024 · Gihyun Kwon, Simon Jenni, Dingzeyu Li, Joon-Young Lee, Jong Chul Ye, Fabian Caba Heilbron

While there has been significant progress in customizing text-to-image generation models, generating images that combine multiple personalized concepts remains challenging.

Text-to-Image Generation

Patch-wise Graph Contrastive Learning for Image Translation

1 code implementation · 13 Dec 2023 · Chanyong Jung, Gihyun Kwon, Jong Chul Ye

Recently, patch-wise contrastive learning has been drawing attention for image translation, as it explores the semantic correspondence between the input and output images.

Contrastive Learning · Semantic Correspondence +1

Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing

no code implementations · 30 Nov 2023 · Hyelin Nam, Gihyun Kwon, Geon Yeong Park, Jong Chul Ye

A promising recent approach in this realm is Delta Denoising Score (DDS) - an image editing technique based on Score Distillation Sampling (SDS) framework that leverages the rich generative prior of text-to-image diffusion models.

Contrastive Learning · Denoising +2
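The core mechanism behind Delta Denoising Score, subtracting two score-distillation terms evaluated with the same noise so that their shared noisy bias cancels, can be illustrated with a minimal toy sketch. Everything here (`eps_pred`, the scalar "concepts", the update rule) is a hypothetical stand-in for intuition only, not the paper's model or code:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_src, mu_tgt = -2.0, 3.0             # toy "concepts" for source/target prompts

def eps_pred(x_t, mu):
    """Crude linear stand-in for a conditional noise predictor."""
    return x_t - mu

x0 = rng.standard_normal(64) + mu_src  # "source image" to be edited
x = x0.copy()
for _ in range(300):
    noise = rng.standard_normal(64)       # the SAME noise for both terms
    g_tgt = eps_pred(x + noise, mu_tgt)   # SDS term: target prompt, edited image
    g_src = eps_pred(x0 + noise, mu_src)  # SDS term: source prompt, source image
    x -= 0.1 * (g_tgt - g_src)            # DDS-style delta update; noise cancels

# In this linear toy the edit converges to the source image shifted by the
# difference between the two "concepts".
shift = float((x - x0).mean())
```

Because the same `noise` appears in both terms, it drops out of the difference, which is the intuition DDS builds on and CDS extends with a contrastive objective.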

ED-NeRF: Efficient Text-Guided Editing of 3D Scene with Latent Space NeRF

no code implementations · 4 Oct 2023 · Jangho Park, Gihyun Kwon, Jong Chul Ye

These advancements have been extended to 3D models, enabling the generation of novel 3D objects from textual descriptions.

Denoising · Image Generation

Unpaired Image-to-Image Translation via Neural Schrödinger Bridge

1 code implementation · 24 May 2023 · Beomsu Kim, Gihyun Kwon, Kwanyoung Kim, Jong Chul Ye

Diffusion models are a powerful class of generative models which simulate stochastic differential equations (SDEs) to generate data from noise.

Image-to-Image Translation · Translation
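The SDE simulation mentioned in the abstract can be sketched with a minimal Euler-Maruyama loop for a variance-preserving forward SDE, which pushes data toward Gaussian noise. The function `simulate_vp_sde` and its parameters are illustrative assumptions, not code from the paper:

```python
import numpy as np

def simulate_vp_sde(x0, n_steps=1000, beta=1.0, seed=0):
    """Euler-Maruyama simulation of dx = -0.5*beta*x dt + sqrt(beta) dW
    over t in [0, 1], pushing data x0 toward a standard Gaussian."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        drift = -0.5 * beta * x * dt
        diffusion = np.sqrt(beta * dt) * rng.standard_normal(x.shape)
        x = x + drift + diffusion
    return x

# Starting all samples at 3.0, the mean decays toward 3*exp(-0.5) ~ 1.82
# and the standard deviation grows toward sqrt(1 - exp(-1)) ~ 0.80.
samples = simulate_vp_sde(np.full(10000, 3.0))
```

Generation then amounts to simulating the corresponding reverse-time SDE with a learned score; the Schrödinger-bridge formulation in the paper generalizes this to transport between two arbitrary distributions.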

Zero-shot Generation of Coherent Storybook from Plain Text Story using Diffusion Models

no code implementations · 8 Feb 2023 · Hyeonho Jeong, Gihyun Kwon, Jong Chul Ye

Recent advancements in large-scale text-to-image models have opened new possibilities for guiding the creation of images through human-devised natural language.

Language Modelling · Large Language Model

Diffusion-based Image Translation using Disentangled Style and Content Representation

1 code implementation · 30 Sep 2022 · Gihyun Kwon, Jong Chul Ye

Diffusion-based image translation guided by semantic texts or a single target image has enabled flexible style transfer that is not limited to specific domains.

Style Transfer · Translation

One-Shot Adaptation of GAN in Just One CLIP

3 code implementations · 17 Mar 2022 · Gihyun Kwon, Jong Chul Ye

Specifically, our model employs a two-step training strategy: reference image search in the source generator using a CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP space consistency between the source and adapted generators.

Attribute · Image Retrieval
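The two-step strategy described in the abstract can be mimicked in a small linear toy, with random matrices standing in for the frozen generator and the CLIP image encoder. Every name and hyperparameter below is a hypothetical simplification for intuition, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D_LAT, D_IMG, D_CLIP = 8, 16, 4
A = rng.standard_normal((D_IMG, D_LAT)) / np.sqrt(D_LAT)  # frozen source "generator"
E = rng.standard_normal((D_CLIP, D_IMG)) / np.sqrt(D_IMG)  # frozen "CLIP" encoder
ref = rng.standard_normal(D_IMG)                           # single reference image

err0 = np.linalg.norm(E @ ref)     # embedding error of the zero latent (baseline)

# Step 1: CLIP-guided latent optimization -- search for a latent w whose
# generated image A @ w matches the reference in (toy) CLIP space.
w = np.zeros(D_LAT)
for _ in range(2000):
    w -= 0.1 * A.T @ E.T @ (E @ (A @ w) - E @ ref)

# Step 2: fine-tune a copy B of the generator toward the reference, while a
# consistency term keeps its CLIP embeddings close to the source generator's
# on random probe latents (a stand-in for the paper's CLIP-space loss).
B, lam = A.copy(), 0.1
W = rng.standard_normal((32, D_LAT))                       # probe latents
for _ in range(2000):
    fit = np.outer(E.T @ (E @ (B @ w) - E @ ref), w)       # match reference at w
    cons = (E.T @ E) @ (B - A) @ (W.T @ W) / len(W)        # CLIP-space consistency
    B -= 1e-4 * (fit + lam * cons)

err = np.linalg.norm(E @ (B @ w) - E @ ref)                # far below err0
```

The design point the toy captures: the reference search happens entirely in the source generator's latent space, and only afterwards are the generator weights adapted under a consistency constraint.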

Exploring Patch-wise Semantic Relation for Contrastive Learning in Image-to-Image Translation Tasks

1 code implementation · CVPR 2022 · Chanyong Jung, Gihyun Kwon, Jong Chul Ye

Recently, contrastive learning-based image translation methods have been proposed, which contrast different spatial locations to enhance spatial correspondence.

Contrastive Learning · Image-to-Image Translation +2

CLIPstyler: Image Style Transfer with a Single Text Condition

3 code implementations · CVPR 2022 · Gihyun Kwon, Jong Chul Ye

In order to deal with such applications, we propose a new framework that enables style transfer without a style image, using only a text description of the desired style.

Style Transfer

Diagonal Attention and Style-based GAN for Content-Style Disentanglement in Image Generation and Translation

1 code implementation · ICCV 2021 · Gihyun Kwon, Jong Chul Ye

One of the important research topics in image generative models is to disentangle the spatial contents and styles for their separate control.

Disentanglement · Image Generation +1

Progressive Face Super-Resolution via Attention to Facial Landmark

1 code implementation · 22 Aug 2019 · Deokyun Kim, Minseon Kim, Gihyun Kwon, Dae-shik Kim

Face Super-Resolution (SR) is a subfield of the SR domain that specifically targets the reconstruction of face images.

Face Alignment · Super-Resolution

Generation of 3D Brain MRI Using Auto-Encoding Generative Adversarial Networks

3 code implementations · 7 Aug 2019 · Gihyun Kwon, Chihye Han, Dae-shik Kim

We also train the model to synthesize brain disorder MRI data to demonstrate the wide applicability of our model.

Image-to-Image Translation

Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study

no code implementations · 7 May 2019 · Chihye Han, Wonjun Yoon, Gihyun Kwon, Seungkyu Nam, Dae-shik Kim

However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision.
