Search Results for author: Minguk Kang

Found 9 papers, 5 papers with code

Distilling Diffusion Models into Conditional GANs

no code implementations • 9 May 2024 • Minguk Kang, Richard Zhang, Connelly Barnes, Sylvain Paris, Suha Kwak, Jaesik Park, Eli Shechtman, Jun-Yan Zhu, Taesung Park

We propose a method to distill a complex multistep diffusion model into a single-step conditional GAN student model, dramatically accelerating inference while preserving image quality.
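The distillation idea described above can be illustrated with a toy numerical sketch: a multistep "teacher" iteratively refines a noise vector, and a one-step "student" is regressed onto the teacher's final output. Everything here (the linear teacher update, the scalar student weight `w`, the `distill_step` helper) is a hypothetical stand-in for illustration only, not the paper's actual generator, sampler, or loss:

```python
import numpy as np

def teacher_sample(z, steps=50):
    # Toy stand-in for a multistep diffusion sampler: iteratively refine z.
    x = z.copy()
    for _ in range(steps):
        x = x - 0.05 * x  # placeholder "denoising" update, not a real sampler
    return x

def student_sample(z, w):
    # One-step student: a single scalar map, standing in for a GAN generator.
    return w * z

def distill_step(z, w, lr=0.1):
    # Regress the one-step student onto the teacher's multistep output (MSE).
    target = teacher_sample(z)
    pred = student_sample(z, w)
    grad = np.mean(2.0 * (pred - target) * z)  # d/dw of mean squared error
    return w - lr * grad
```

After a few hundred gradient steps the scalar student matches the teacher's 50-step output in a single call — the same collapse-many-steps-into-one structure the paper pursues with far richer models and losses.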

Image-to-Image Translation

Extending CLIP's Image-Text Alignment to Referring Image Segmentation

no code implementations • 14 Jun 2023 • Seoyeon Kim, Minguk Kang, Dongwon Kim, Jaesik Park, Suha Kwak

Referring Image Segmentation (RIS) is a cross-modal task that aims to segment an instance described by a natural language expression.

Image Segmentation • Referring Expression Segmentation +2

Fill-Up: Balancing Long-Tailed Data with Generative Models

no code implementations • 12 Jun 2023 • Joonghyuk Shin, Minguk Kang, Jaesik Park

Modern text-to-image synthesis models have achieved an exceptional level of photorealism, generating high-quality images from arbitrary text descriptions.

Image Generation

Scaling up GANs for Text-to-Image Synthesis

1 code implementation • CVPR 2023 • Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, Taesung Park

From a technical standpoint, it also marked a drastic change in the architecture favored for designing generative image models.

Text-to-Image Generation

Instance-Aware Image Completion

no code implementations • 22 Oct 2022 • Jinoh Cho, Minguk Kang, Vibhav Vineet, Jaesik Park

However, existing image completion methods tend to fill in the missing region with the surrounding texture instead of hallucinating a visual instance that suits the context of the scene.

Image Generation • object-detection +2

StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis

2 code implementations • 19 Jun 2022 • Minguk Kang, Joonghyuk Shin, Jaesik Park

Generative Adversarial Networks (GANs) are among the state-of-the-art generative models for realistic image synthesis.

Generative Adversarial Network • Image Generation

ContraGAN: Contrastive Learning for Conditional Image Generation

1 code implementation • NeurIPS 2020 • Minguk Kang, Jaesik Park

The discriminator of ContraGAN both judges the authenticity of given samples and minimizes a contrastive objective to learn the relations among training images.
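The conditional contrastive objective mentioned above can be sketched with a small self-contained example: an InfoNCE-style loss that pulls each image feature toward its class embedding and toward same-class features while pushing away the rest. This is a simplified illustration under assumed names and defaults (the function name, L2 normalization, and `temperature` are illustrative choices), not ContraGAN's exact formulation:

```python
import numpy as np

def conditional_contrastive_loss(features, labels, class_embeds, temperature=0.1):
    """InfoNCE-style conditional contrastive loss (illustrative sketch).

    Positives for sample i: its class embedding and other samples of the
    same class; negatives: samples of different classes.
    """
    # L2-normalize image features and class embeddings.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    e = class_embeds / np.linalg.norm(class_embeds, axis=1, keepdims=True)

    n = f.shape[0]
    losses = []
    for i in range(n):
        # Positive terms: own class embedding plus same-class samples.
        pos = np.exp(f[i] @ e[labels[i]] / temperature)
        pos += sum(np.exp(f[i] @ f[j] / temperature)
                   for j in range(n) if j != i and labels[j] == labels[i])
        # Denominator adds the different-class (negative) samples.
        denom = pos + sum(np.exp(f[i] @ f[j] / temperature)
                          for j in range(n) if j != i and labels[j] != labels[i])
        losses.append(-np.log(pos / denom))
    return float(np.mean(losses))
```

In the described setup this kind of term would be minimized by the discriminator alongside the usual real/fake objective, so that feature relations between training images are captured in addition to authenticity.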

Conditional Image Generation • Contrastive Learning +1
