Search Results for author: Junsoo Lee

Found 13 papers, 6 papers with code

DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models

no code implementations • 13 Sep 2023 • Namhyuk Ahn, Junsoo Lee, Chunggi Lee, Kunhee Kim, Daesik Kim, Seung-Hun Nam, Kibeom Hong

Recent progress in large-scale text-to-image models has yielded remarkable accomplishments, finding various applications in the art domain.

Image Generation • Style Transfer

AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks

1 code implementation • ICCV 2023 • Kibeom Hong, Seogkyu Jeon, Junsoo Lee, Namhyuk Ahn, Kunhee Kim, Pilhyeon Lee, Daesik Kim, Youngjung Uh, Hyeran Byun

To deliver the artistic expression of the target style, recent studies exploit the attention mechanism owing to its ability to map the local patches of the style image to the corresponding patches of the content image.

Semantic Correspondence • Style Transfer

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models

1 code implementation • 24 May 2023 • Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn

In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model.

Conditional Image Generation • Multimodal Generation +1

LPMM: Intuitive Pose Control for Neural Talking-Head Model via Landmark-Parameter Morphable Model

no code implementations • 17 May 2023 • Kwangho Lee, Patrick Kwon, Myung Ki Lee, Namhyuk Ahn, Junsoo Lee

To enable this, we introduce a landmark-parameter morphable model (LPMM), which offers control over the facial landmark domain through a set of semantic parameters.
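The landmark-parameter idea above can be illustrated with a minimal linear morphable model: landmarks are a mean shape plus a parameter-weighted sum of basis directions. This is a generic sketch of the morphable-model formulation, not LPMM's actual code; the landmark count, parameter count, and random bases are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_landmarks = 68          # typical facial-landmark count (assumption)
n_params = 10             # number of semantic control parameters (assumption)

mean_shape = rng.normal(size=(n_landmarks, 2))        # mean (x, y) landmarks
basis = rng.normal(size=(n_params, n_landmarks, 2))   # semantic basis directions

def landmarks_from_params(params):
    """Decode a semantic parameter vector into 2D facial landmarks."""
    offset = np.tensordot(params, basis, axes=1)      # weighted sum of bases
    return mean_shape + offset

pose = landmarks_from_params(np.zeros(n_params))      # neutral parameters
assert np.allclose(pose, mean_shape)                  # zero params -> mean shape
```

Adjusting a single entry of `params` then moves all landmarks along one semantic direction, which is the kind of intuitive pose control the abstract describes.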

Reference-based Image Composition with Sketch via Structure-aware Diffusion Model

1 code implementation • 31 Mar 2023 • Kangyeol Kim, Sunghyun Park, Junsoo Lee, Jaegul Choo

Recent remarkable improvements in large-scale text-to-image generative models have shown promising results in generating high-fidelity images.

Image Manipulation

Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization

no code implementations • 25 Oct 2022 • Youngin Cho, Junsoo Lee, Soyoung Yang, Juntae Kim, Yeojeong Park, Haneol Lee, Mohammad Azam Khan, Daesik Kim, Jaegul Choo

Existing deep interactive colorization models have focused on ways to utilize various types of interactions, such as point-wise color hints, scribbles, or natural-language texts, as methods to reflect a user's intent at runtime.

Colorization • Image Colorization

Learning Representations by Contrasting Clusters While Bootstrapping Instances

no code implementations • 1 Jan 2021 • Junsoo Lee, Hojoon Lee, Inkyu Shin, Jaekyoung Bae, In So Kweon, Jaegul Choo

Learning visual representations from large-scale unlabelled images is a holy grail for most computer vision tasks.

Clustering • Contrastive Learning +5

Vid-ODE: Continuous-Time Video Generation with Neural Ordinary Differential Equation

1 code implementation • 16 Oct 2020 • Sunghyun Park, Kangyeol Kim, Junsoo Lee, Jaegul Choo, Joonseok Lee, Sookyung Kim, Edward Choi

Video generation models often operate under the assumption of fixed frame rates, which leads to suboptimal performance when it comes to handling flexible frame rates (e.g., increasing the frame rate of the more dynamic portion of the video as well as handling missing video frames).

Decoder • Video Generation
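The continuous-time idea behind the abstract can be sketched as follows: a latent state evolves under an ODE, so frames can be decoded at arbitrary, possibly irregular timestamps instead of at a fixed frame rate. This is a toy illustration under stated assumptions (Vid-ODE uses learned dynamics and a decoder network; the fixed linear dynamics and Euler integrator here are placeholders).

```python
import numpy as np

def f(z):
    """Latent dynamics dz/dt; a learned network in practice, linear here."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # simple rotational dynamics
    return A @ z

def odeint_euler(z0, timestamps, dt=1e-3):
    """Integrate z forward, recording the state at each requested time."""
    z, t, out = z0.astype(float), 0.0, []
    for t_target in sorted(timestamps):
        while t < t_target:
            z = z + dt * f(z)   # explicit Euler step
            t += dt
        out.append(z.copy())
    return np.stack(out)

# Query latent states at irregular times, e.g. denser in a dynamic segment.
states = odeint_euler(np.array([1.0, 0.0]), [0.1, 0.15, 0.2, 1.0])
print(states.shape)  # (4, 2): one latent state per requested timestamp
```

Each recorded state would then be passed through a decoder to render a frame, which is how a continuous-time model can interpolate missing frames or upsample only the dynamic portion of a video.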

Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence

no code implementations • CVPR 2020 • Junsoo Lee, Eungyeup Kim, Yunsung Lee, Dongjun Kim, Jaehyuk Chang, Jaegul Choo

However, it is difficult to prepare a training dataset that has a sufficient amount of semantically meaningful pairs of images as well as the ground truth for a colored image reflecting a given reference (e.g., coloring a sketch of an originally blue car given a reference green car).

Colorization • Image Colorization +1

Coloring With Limited Data: Few-Shot Colorization via Memory-Augmented Networks

1 code implementation • 9 Jun 2019 • Seungjoo Yoo, Hyojin Bahng, Sunghyo Chung, Junsoo Lee, Jaehyuk Chang, Jaegul Choo

Despite recent advancements, deep learning-based automatic colorization models are still limited when it comes to few-shot learning.

Colorization • Few-Shot Learning
