Search Results for author: Gayoung Lee

Found 9 papers, 5 papers with code

Visual Style Prompting with Swapping Self-Attention

1 code implementation • 20 Feb 2024 • Jaeseok Jeong, Junho Kim, Yunjey Choi, Gayoung Lee, Youngjung Uh

Despite their remarkable capability, existing models still struggle with controlled generation in a consistent style: they require costly fine-tuning or transfer visual elements inadequately due to content leakage.

Denoising • Style Transfer +1
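The "swapping" in the title refers to replacing part of the self-attention computation with features from a style reference. A minimal NumPy sketch of that general idea — queries from the content pass, keys and values swapped in from the style reference — is shown below; the function name, shapes, and single-head setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def swapped_self_attention(content_feats, style_feats, w_q, w_k, w_v):
    """Toy single-head attention where keys/values come from a style
    reference instead of the content features (the 'swap')."""
    q = content_feats @ w_q        # queries from the content pass
    k = style_feats @ w_k          # keys swapped in from the style reference
    v = style_feats @ w_v          # values swapped in from the style reference
    scale = np.sqrt(q.shape[-1])
    attn = softmax(q @ k.T / scale)  # content attends over style tokens
    return attn @ v

rng = np.random.default_rng(0)
content = rng.standard_normal((4, 8))   # 4 content tokens
style = rng.standard_normal((6, 8))     # 6 style-reference tokens
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = swapped_self_attention(content, style, w_q, w_k, w_v)
```

Because only keys and values are swapped, the output keeps one row per content token while its values are mixed from the style features.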

Sequential Data Generation with Groupwise Diffusion Process

no code implementations • 2 Oct 2023 • Sangyun Lee, Gayoung Lee, Hyunsu Kim, Junho Kim, Youngjung Uh

We present the Groupwise Diffusion Model (GDM), which divides data into multiple groups and diffuses one group at one time interval in the forward diffusion process.

Disentanglement
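The abstract describes a forward process that diffuses one group of the data per unit time interval. A toy NumPy sketch of that groupwise schedule is below; the interpolation rule and function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def groupwise_forward_diffusion(x0, group_ids, t, num_groups, rng):
    """Toy forward process: only one group is diffused during each unit
    time interval. By time t, groups 0..floor(t)-1 are fully noised,
    the current group is partially noised, and later groups are still
    clean. The sqrt mixing schedule is illustrative only."""
    x = x0.copy().astype(float)
    cur = int(np.floor(t))        # group being diffused right now
    frac = t - cur                # progress within the current interval
    noise = rng.standard_normal(x0.shape)
    for g in range(num_groups):
        mask = group_ids == g
        if g < cur:               # earlier groups: already pure noise
            x[mask] = noise[mask]
        elif g == cur:            # current group: partial noising
            x[mask] = np.sqrt(1 - frac) * x0[mask] + np.sqrt(frac) * noise[mask]
        # groups after `cur` remain untouched
    return x

rng = np.random.default_rng(0)
x0 = np.ones(10)
gids = np.array([0] * 5 + [1] * 5)   # two groups of five elements
mid = groupwise_forward_diffusion(x0, gids, 0.5, 2, rng)
```

At t = 0.5 only the first group has started diffusing, so the second group of the output is still exactly the clean data.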

Generator Knows What Discriminator Should Learn in Unconditional GANs

1 code implementation • 27 Jul 2022 • Gayoung Lee, Hyunsu Kim, Junho Kim, Seonghyeon Kim, Jung-Woo Ha, Yunjey Choi

Here we explore the efficacy of dense supervision in unconditional generation and find that generator feature maps can serve as an alternative to costly semantic label maps.

Conditional Image Generation • Unconditional Image Generation

Memory Efficient Patch-based Training for INR-based GANs

no code implementations • 4 Jul 2022 • Namwoo Lee, Hyunsu Kim, Gayoung Lee, Sungjoo Yoo, Yunjey Choi

However, training existing approaches requires a heavy computational cost proportional to the image resolution, since they compute an MLP forward pass for every (x, y) coordinate.

Image Outpainting • Super-Resolution
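The cost the abstract mentions comes from evaluating a coordinate MLP once per pixel, so work grows with h × w; training on a random patch of coordinates is one way to cap the per-step cost. The sketch below illustrates both points with a tiny coordinate MLP — all function names and shapes here are hypothetical, not the paper's architecture.

```python
import numpy as np

def coord_grid(h, w):
    """Normalized (x, y) coordinates for an h x w image."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    return np.stack([xs, ys], axis=-1).reshape(-1, 2)

def mlp_rgb(coords, w1, b1, w2, b2):
    """Tiny coordinate MLP: one hidden layer, an RGB value per coordinate.
    Cost is linear in the number of coordinates, i.e. in h * w."""
    hidden = np.maximum(coords @ w1 + b1, 0.0)   # ReLU hidden layer
    return hidden @ w2 + b2

def sample_patch(coords, h, w, patch, rng):
    """Patch-based training idea (illustrative): evaluate the MLP only on
    a random patch of coordinates instead of the full grid."""
    top = rng.integers(0, h - patch + 1)
    left = rng.integers(0, w - patch + 1)
    grid = coords.reshape(h, w, 2)
    return grid[top:top + patch, left:left + patch].reshape(-1, 2)

rng = np.random.default_rng(0)
coords = coord_grid(8, 8)                        # 64 coordinates
w1, b1 = rng.standard_normal((2, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 3)), np.zeros(3)
rgb = mlp_rgb(coords, w1, b1, w2, b2)            # one RGB per pixel
patch = sample_patch(coords, 8, 8, 4, rng)       # 16 coords per step
```

Feeding only the 4×4 patch instead of the full 8×8 grid cuts per-step MLP evaluations by 4×, which is the memory/compute trade the title refers to.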

RewriteNet: Reliable Scene Text Editing with Implicit Decomposition of Text Contents and Styles

no code implementations • 23 Jul 2021 • Junyeop Lee, Yoonsik Kim, Seonghyeon Kim, Moonbin Yim, Seung Shin, Gayoung Lee, Sungrae Park

Scene text editing (STE), which converts the text in a scene image into a desired text while preserving the original style, is a challenging task due to the complex interplay between text content and style.

Image Generation • Scene Text Editing +1

Few-shot Compositional Font Generation with Dual Memory

3 code implementations • ECCV 2020 • Junbum Cha, Sanghyuk Chun, Gayoung Lee, Bado Lee, Seonghyeon Kim, Hwalsuk Lee

By exploiting the compositionality of compositional scripts, we propose a novel font generation framework, named Dual Memory-augmented Font Generation Network (DM-Font), which can generate a high-quality font library from only a few samples.

Font Generation

NSML: A Machine Learning Platform That Enables You to Focus on Your Models

no code implementations • 16 Dec 2017 • Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Dong-Hyun Kwak, Jung-Woo Ha, Sunghun Kim

However, researchers are still required to perform a non-trivial amount of manual work, such as GPU allocation, training status tracking, and comparison of models with different hyperparameter settings.

BIG-bench Machine Learning

Deep Saliency with Encoded Low level Distance Map and High Level Features

2 code implementations • CVPR 2016 • Gayoung Lee, Yu-Wing Tai, Junmo Kim

Recent advances in saliency detection have utilized deep learning to obtain high level features to detect salient regions in a scene.

Saliency Detection
