Search Results for author: Kibeom Hong

Found 10 papers, 4 papers with code

DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models

no code implementations · 13 Sep 2023 · Namhyuk Ahn, Junsoo Lee, Chunggi Lee, Kunhee Kim, Daesik Kim, Seung-Hun Nam, Kibeom Hong

Recent progress in large-scale text-to-image models has yielded remarkable accomplishments, finding various applications in the art domain.

Image Generation · Style Transfer

Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations

no code implementations · ICCV 2023 · Seogkyu Jeon, Bei Liu, Pilhyeon Lee, Kibeom Hong, Jianlong Fu, Hyeran Byun

Due to the absence of target-domain data, the textual description of the target domain and vision-language models, e.g., CLIP, are utilized to effectively guide the generator.
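
The snippet above says CLIP is used to guide the generator. As an illustration only, a minimal sketch of one common way to do this, a directional CLIP loss in the spirit of StyleGAN-NADA; this is not necessarily the paper's exact objective, and the function name and precomputed text embeddings are assumptions:

```python
import torch
import torch.nn.functional as F

def clip_directional_loss(clip_model, src_images, tgt_images,
                          src_text_emb, tgt_text_emb):
    """Hypothetical guidance loss: align the image-space change with the
    text-space domain direction (e.g., "photo" -> "sketch")."""
    # Encode source-generator and adapted-generator outputs with CLIP.
    src_img = F.normalize(clip_model.encode_image(src_images).float(), dim=-1)
    tgt_img = F.normalize(clip_model.encode_image(tgt_images).float(), dim=-1)
    # Direction the images moved vs. direction the text prompts define.
    img_dir = F.normalize(tgt_img - src_img, dim=-1)
    txt_dir = F.normalize(tgt_text_emb - src_text_emb, dim=-1)
    # Cosine distance between the two directions, averaged over the batch.
    return (1.0 - (img_dir * txt_dir).sum(dim=-1)).mean()
```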

AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks

1 code implementation · ICCV 2023 · Kibeom Hong, Seogkyu Jeon, Junsoo Lee, Namhyuk Ahn, Kunhee Kim, Pilhyeon Lee, Daesik Kim, Youngjung Uh, Hyeran Byun

To deliver the artistic expression of the target style, recent studies exploit the attention mechanism owing to its ability to map the local patches of the style image to the corresponding patches of the content image.

Semantic Correspondence · Style Transfer
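
The AesPA-Net snippet above describes attention mapping local style patches to corresponding content patches. A minimal sketch of such cross-attention over flattened encoder features; this is illustrative only, not the paper's actual module, and the function name and feature shapes are assumptions:

```python
import torch

def attention_style_remap(content_feat: torch.Tensor,
                          style_feat: torch.Tensor) -> torch.Tensor:
    """Remap style features onto content positions via cross-attention.

    content_feat: (B, C, Hc, Wc), style_feat: (B, C, Hs, Ws) encoder features.
    """
    B, C, Hc, Wc = content_feat.shape
    q = content_feat.flatten(2).transpose(1, 2)     # (B, Hc*Wc, C): one query per content patch
    k = style_feat.flatten(2)                       # (B, C, Hs*Ws): one key per style patch
    v = style_feat.flatten(2).transpose(1, 2)       # (B, Hs*Ws, C): style values
    # Affinity between every content patch and every style patch.
    attn = torch.softmax(q @ k / C ** 0.5, dim=-1)  # (B, Hc*Wc, Hs*Ws)
    # Each content position aggregates its best-matching style patches.
    out = attn @ v                                  # (B, Hc*Wc, C)
    return out.transpose(1, 2).reshape(B, C, Hc, Wc)
```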

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models

1 code implementation · 24 May 2023 · Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn

In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model.

Conditional Image Generation · Multimodal Generation +1

Interactive Cartoonization with Controllable Perceptual Factors

no code implementations · CVPR 2023 · Namhyuk Ahn, Patrick Kwon, Jihye Back, Kibeom Hong, Seungkwon Kim

In the texture decoder, we propose a texture controller, which enables a user to control stroke style and abstraction to generate diverse cartoon textures.

Translation

Exploiting Domain Transferability for Collaborative Inter-level Domain Adaptive Object Detection

no code implementations · 20 Jul 2022 · Mirae Do, Seogkyu Jeon, Pilhyeon Lee, Kibeom Hong, Yu-seung Ma, Hyeran Byun

Domain adaptation for object detection (DAOD) has recently drawn much attention owing to its capability of detecting target objects without any annotations.

Domain Adaptation · Object +3

Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization

1 code implementation · 19 Aug 2021 · Seogkyu Jeon, Kibeom Hong, Pilhyeon Lee, Jewook Lee, Hyeran Byun

To these ends, we propose a novel domain generalization framework in which feature statistics are utilized to stylize original features into ones with novel domain properties.

Contrastive Learning · Domain Generalization
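
The snippet above describes stylizing features via their statistics. A minimal AdaIN-style sketch of that idea; the perturbation scheme and parameter names here are assumptions, not the paper's exact formulation:

```python
import torch

def stylize_features(feat: torch.Tensor, noise_std: float = 0.1,
                     eps: float = 1e-5) -> torch.Tensor:
    """Re-normalize (B, C, H, W) features with perturbed channel statistics,
    producing a synthetic "novel domain" version of the input."""
    mu = feat.mean(dim=(2, 3), keepdim=True)          # per-channel mean
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps  # per-channel std
    normalized = (feat - mu) / sigma                  # strip domain-specific statistics
    # Sample new statistics around the originals (assumed perturbation scheme).
    new_mu = mu + torch.randn_like(mu) * noise_std
    new_sigma = sigma * (1.0 + torch.randn_like(sigma) * noise_std)
    return normalized * new_sigma + new_mu
```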

Domain-Aware Universal Style Transfer

1 code implementation · ICCV 2021 · Kibeom Hong, Seogkyu Jeon, Huan Yang, Jianlong Fu, Hyeran Byun

To this end, we design a novel domainness indicator that captures the domainness value from the texture and structural features of reference images.

Style Transfer

Continuous Face Aging Generative Adversarial Networks

no code implementations · 26 Feb 2021 · Seogkyu Jeon, Pilhyeon Lee, Kibeom Hong, Hyeran Byun

Face aging is the task of translating the faces in input images to designated ages.

MORPH

ArrowGAN: Learning to Generate Videos by Learning Arrow of Time

no code implementations · 11 Jan 2021 · Kibeom Hong, Youngjung Uh, Hyeran Byun

Training GANs on videos is even more challenging than on images because videos have an additional, distinguishing dimension: time.

Conditional Image Generation · Video Generation
