no code implementations • 5 Apr 2024 • Gihyun Kwon, Simon Jenni, Dingzeyu Li, Joon-Young Lee, Jong Chul Ye, Fabian Caba Heilbron
While there has been significant progress in customizing text-to-image generation models, generating images that combine multiple personalized concepts remains challenging.
1 code implementation • 13 Dec 2023 • Chanyong Jung, Gihyun Kwon, Jong Chul Ye
Recently, patch-wise contrastive learning has been drawing attention in image translation, as it exploits the semantic correspondence between the input and output images.
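The core of patch-wise contrastive learning is an InfoNCE loss over patch features: a patch from the output image should match the input patch at the same location (positive) and not patches from other locations (negatives). A minimal stdlib-only sketch of that loss follows; the function name and the plain-list feature vectors are illustrative, not the paper's implementation, which operates on deep encoder features.

```python
import math

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE over one query patch feature (sketch).

    query:     feature of an output-image patch
    positive:  feature of the input patch at the same spatial location
    negatives: features of input patches at other locations
    tau:       softmax temperature
    """
    def cos(a, b):
        # cosine similarity between two feature vectors
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # logit 0 is the positive pair; the rest are negatives
    logits = [cos(query, positive) / tau] + [cos(query, n) / tau for n in negatives]
    # numerically stable log-sum-exp for the softmax denominator
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    # cross-entropy with the positive as the target class
    return -(logits[0] - log_denom)
```

The loss is near zero when the query aligns with its same-location positive and grows when a patch from another location is more similar.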
no code implementations • 30 Nov 2023 • Hyelin Nam, Gihyun Kwon, Geon Yeong Park, Jong Chul Ye
A promising recent approach in this realm is Delta Denoising Score (DDS), an image editing technique based on the Score Distillation Sampling (SDS) framework that leverages the rich generative prior of text-to-image diffusion models.
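DDS edits an image by taking the difference of two SDS gradients, one conditioned on the target prompt and one on the source prompt, so that prompt-independent noise components cancel. The 1-D sketch below illustrates only this cancellation: `eps_pred` and the `ANCHORS` table are hypothetical stand-ins for a text-conditioned diffusion noise predictor, not any real model.

```python
import random

# Hypothetical 1-D stand-in: each prompt pulls samples toward an anchor value.
ANCHORS = {"a photo of a cat": -2.0, "a photo of a dog": 2.0}

def eps_pred(xt, prompt, c=0.5):
    # Toy "noise predictor": error term pointing away from the prompt's mode.
    return c * (xt - ANCHORS[prompt])

def dds_step(x, src_prompt, tgt_prompt, lr=0.1, sigma=0.3):
    # One DDS update: noise the current image once, query the predictor
    # under both prompts, and follow the difference of the two predictions.
    xt = x + sigma * random.gauss(0.0, 1.0)
    grad = eps_pred(xt, tgt_prompt) - eps_pred(xt, src_prompt)  # shared terms cancel
    return x - lr * grad
```

Because both predictions see the same noised sample, the noise-dependent part subtracts out and the update moves the image from the source mode toward the target mode.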
no code implementations • 4 Oct 2023 • Jangho Park, Gihyun Kwon, Jong Chul Ye
These advancements have been extended to 3D models, enabling the generation of novel 3D objects from textual descriptions.
1 code implementation • 7 Jun 2023 • Gihyun Kwon, Jong Chul Ye
Diffusion models have recently shown significant progress in image translation tasks.
1 code implementation • 24 May 2023 • Beomsu Kim, Gihyun Kwon, Kwanyoung Kim, Jong Chul Ye
Diffusion models are a powerful class of generative models which simulate stochastic differential equations (SDEs) to generate data from noise.
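Simulating such an SDE can be sketched with an Euler–Maruyama loop. The snippet below, a minimal stdlib-only illustration (not the paper's method), integrates the variance-preserving forward SDE dx = -½βx dt + √β dW, which drives any starting point toward a standard Gaussian.

```python
import math
import random

def forward_vp_sde(x0, beta=1.0, T=5.0, n_steps=500, seed=0):
    """Euler-Maruyama simulation of the 1-D VP forward SDE (sketch)."""
    rng = random.Random(seed)
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        drift = -0.5 * beta * x * dt          # pulls x toward 0
        diffusion = math.sqrt(beta * dt) * rng.gauss(0.0, 1.0)  # injects noise
        x += drift + diffusion
    return x
```

Running many independent simulations from the same x0 empirically recovers the stationary N(0, 1) statistics; a reverse-time SDE driven by a learned score would run the same loop backwards to generate data from noise.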
no code implementations • 8 Feb 2023 • Hyeonho Jeong, Gihyun Kwon, Jong Chul Ye
Recent advancements in large-scale text-to-image models have opened new possibilities for guiding the creation of images through human-devised natural language.
1 code implementation • 30 Sep 2022 • Gihyun Kwon, Jong Chul Ye
Diffusion-based image translation guided by semantic texts or a single target image has enabled flexible style transfer that is not limited to specific domains.
3 code implementations • 17 Mar 2022 • Gihyun Kwon, Jong Chul Ye
Specifically, our model employs a two-step training strategy: reference image search in the source generator using a CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP space consistency between the source and adapted generators.
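The CLIP-space consistency in the second step is commonly realized as a directional loss: the direction between the source and adapted generators' outputs in CLIP embedding space should align with the direction between the source and target text embeddings. A stdlib-only sketch of that loss follows; the function name and raw-list embeddings are illustrative stand-ins for actual CLIP image/text features.

```python
import math

def _cos(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def clip_direction_loss(e_img_src, e_img_adapt, e_txt_src, e_txt_tgt):
    """1 - cos(image-space direction, text-space direction) (sketch)."""
    img_dir = [a - b for a, b in zip(e_img_adapt, e_img_src)]  # source -> adapted image
    txt_dir = [a - b for a, b in zip(e_txt_tgt, e_txt_src)]    # source -> target text
    return 1.0 - _cos(img_dir, txt_dir)
```

The loss is 0 when the adapted generator moves its outputs exactly along the text-specified direction and reaches 2 when it moves the opposite way.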
1 code implementation • CVPR 2022 • Chanyong Jung, Gihyun Kwon, Jong Chul Ye
Recently, contrastive learning-based image translation methods have been proposed, which contrast different spatial locations to enhance the spatial correspondence.
3 code implementations • CVPR 2022 • Gihyun Kwon, Jong Chul Ye
In order to deal with such applications, we propose a new framework that enables a style transfer "without" a style image, but only with a text description of the desired style.
1 code implementation • ICCV 2021 • Gihyun Kwon, Jong Chul Ye
One of the important research topics in image generative models is to disentangle the spatial contents and styles for their separate control.
1 code implementation • 22 Aug 2019 • Deokyun Kim, Minseon Kim, Gihyun Kwon, Dae-shik Kim
Face Super-Resolution (SR) is a subfield of the SR domain that specifically targets the reconstruction of face images.
Ranked #1 on Face Alignment on CelebA + AFLW Unaligned
3 code implementations • 7 Aug 2019 • Gihyun Kwon, Chihye Han, Dae-shik Kim
We also train the model to synthesize brain-disorder MRI data, demonstrating its wide applicability.
no code implementations • 7 May 2019 • Chihye Han, Wonjun Yoon, Gihyun Kwon, Seungkyu Nam, Dae-shik Kim
However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision.