Search Results for author: Namhyuk Ahn

Found 18 papers, 10 papers with code

Imperceptible Protection against Style Imitation from Diffusion Models

no code implementations • 28 Mar 2024 • Namhyuk Ahn, Wonhyuk Ahn, KiYoon Yoo, Daesik Kim, Seung-Hun Nam

Recent progress in diffusion models has profoundly enhanced the fidelity of image generation.

DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models

no code implementations • 13 Sep 2023 • Namhyuk Ahn, Junsoo Lee, Chunggi Lee, Kunhee Kim, Daesik Kim, Seung-Hun Nam, Kibeom Hong

Recent progress in large-scale text-to-image models has yielded remarkable accomplishments, finding various applications in the art domain.

Image Generation Style Transfer

AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks

1 code implementation • ICCV 2023 • Kibeom Hong, Seogkyu Jeon, Junsoo Lee, Namhyuk Ahn, Kunhee Kim, Pilhyeon Lee, Daesik Kim, Youngjung Uh, Hyeran Byun

To deliver the artistic expression of the target style, recent studies exploit the attention mechanism owing to its ability to map the local patches of the style image to the corresponding patches of the content image (a generic sketch of such patch-wise attention follows this entry).

Semantic correspondence Style Transfer
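As a rough illustration of the patch-wise attention described in this entry, the sketch below matches content-feature patches to style-feature patches with standard scaled dot-product attention. It is a generic PyTorch sketch under assumed (B, C, H, W) feature maps, not the AesPA-Net architecture, and the function name is illustrative.

```python
import torch

def patch_attention_stylize(content_feat, style_feat):
    """Map style patches onto content patches via scaled dot-product attention.

    content_feat, style_feat: (B, C, H, W) feature maps from any encoder.
    A generic sketch of attention-based stylization, not AesPA-Net itself.
    """
    B, C, H, W = content_feat.shape
    q = content_feat.flatten(2).transpose(1, 2)            # (B, HW, C) queries from content patches
    k = style_feat.flatten(2).transpose(1, 2)              # (B, HWs, C) keys from style patches
    v = k                                                  # values are the style patches themselves
    attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)   # (B, HW, HWs) correspondence map
    return (attn @ v).transpose(1, 2).reshape(B, C, H, W)  # stylized content features
```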

Magnitude Attention-based Dynamic Pruning

no code implementations • 8 Jun 2023 • Jihye Back, Namhyuk Ahn, Jangho Kim

Existing pruning methods utilize the importance of each weight based on specified criteria only when searching for a sparse structure but do not utilize it during training (a minimal sketch of that baseline setting follows this entry).

Efficient Exploration
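For context, the abstract contrasts the paper's dynamic, attention-based scheme with methods that consult weight importance only while searching for a sparse structure. The snippet below is a minimal sketch of that baseline setting, one-shot magnitude pruning in PyTorch; the helper is illustrative and is not the paper's method.

```python
import torch

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the largest-magnitude weights and zero out the rest.

    A generic one-shot magnitude-pruning baseline (importance used only at
    search time), not the paper's attention-based dynamic pruning.
    """
    k = int(weight.numel() * sparsity)             # number of weights to prune
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()      # 1 = keep, 0 = prune

# Typical use: mask = magnitude_mask(layer.weight, 0.9); the forward pass then uses layer.weight * mask.
```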

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models

1 code implementation • 24 May 2023 • Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn

In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model (a hypothetical interface sketch follows this entry).

Conditional Image Generation multimodal generation +1
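One way to picture "diverse modalities within a single model" is a condition bundle that a single denoiser consumes in one call. The sketch below is hypothetical throughout: the field names, tensor shapes, and the `unet` keyword interface are assumptions for illustration, not DiffBlender's actual API.

```python
from dataclasses import dataclass
from typing import Optional

import torch

@dataclass
class MultimodalConditions:
    """Bundle of the condition types named in the abstract (illustrative fields)."""
    prompt: str
    sketch: Optional[torch.Tensor] = None           # (1, 1, H, W) edge map
    boxes: Optional[torch.Tensor] = None            # (N, 4) layout boxes in [0, 1]
    color_palette: Optional[torch.Tensor] = None    # (K, 3) RGB colors
    style_embedding: Optional[torch.Tensor] = None  # (D,) style vector

def denoise_step(unet, latents, t, cond: MultimodalConditions):
    # Pass only the conditions that are present; `unet` and its keyword
    # arguments are hypothetical stand-ins for a conditional diffusion model.
    extras = {k: v for k, v in vars(cond).items() if k != "prompt" and v is not None}
    return unet(latents, t, prompt=cond.prompt, **extras)
```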

LPMM: Intuitive Pose Control for Neural Talking-Head Model via Landmark-Parameter Morphable Model

no code implementations • 17 May 2023 • Kwangho Lee, Patrick Kwon, Myung Ki Lee, Namhyuk Ahn, Junsoo Lee

To enable this, we introduce a landmark-parameter morphable model (LPMM), which offers control over the facial landmark domain through a set of semantic parameters.
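Morphable models are commonly written as a mean shape plus a linear combination of basis displacements weighted by semantic parameters. The sketch below shows that generic formulation for facial landmarks, as intuition for what a landmark-parameter morphable model controls; LPMM's exact parameterization and parameter semantics may differ.

```python
import numpy as np

def landmarks_from_parameters(mean_shape, basis, params):
    """Generic linear morphable-model readout (not LPMM's exact formulation).

    mean_shape: (L, 2) mean facial landmark positions
    basis:      (P, L, 2) per-parameter displacement directions
    params:     (P,) semantic parameter values (e.g., head yaw, jaw opening)
    """
    return mean_shape + np.tensordot(params, basis, axes=1)  # (L, 2) posed landmarks
```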

Interactive Cartoonization with Controllable Perceptual Factors

no code implementations • CVPR 2023 • Namhyuk Ahn, Patrick Kwon, Jihye Back, Kibeom Hong, Seungkwon Kim

In the texture decoder, we propose a texture controller, which enables a user to control stroke style and abstraction to generate diverse cartoon textures (a generic conditioning sketch follows this entry).

Translation
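The "texture controller" wording suggests user-set knobs injected into the texture decoder as conditioning signals. The sketch below shows one generic way to feed such scalar controls (stroke style, abstraction level) into a small convolutional decoder; the module layout and conditioning scheme are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ControlledTextureDecoder(nn.Module):
    """Toy decoder conditioned on user-set stroke/abstraction scalars (illustrative)."""

    def __init__(self, feat_ch: int = 64):
        super().__init__()
        self.ctrl_proj = nn.Linear(2, feat_ch)          # embed (stroke, abstraction)
        self.decode = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 3, 3, padding=1),        # RGB cartoon texture
        )

    def forward(self, feats, stroke: float, abstraction: float):
        ctrl = torch.tensor([[stroke, abstraction]], dtype=feats.dtype, device=feats.device)
        bias = self.ctrl_proj(ctrl)[..., None, None]    # broadcast over spatial dims
        return self.decode(feats + bias)
```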

WebtoonMe: A Data-Centric Approach for Full-Body Portrait Stylization

1 code implementation • 19 Oct 2022 • Jihye Back, Seungkwon Kim, Namhyuk Ahn

Full-body portrait stylization, which aims to translate portrait photography into a cartoon style, has drawn attention recently.

Decomposing Texture and Semantics for Out-of-distribution Detection

no code implementations • 29 Sep 2021 • Jeong-Hyeon Moon, Namhyuk Ahn, Kyung-Ah Sohn

Out-of-distribution (OOD) detection has made significant progress in recent years because a distribution mismatch between training and testing data can severely deteriorate the reliability of a machine learning system. Nevertheless, the lack of a precise interpretation of in-distribution data limits the application of OOD detection methods to real-world system pipelines.

Out-of-Distribution Detection Out of Distribution (OOD) Detection +1

What is Wrong with One-Class Anomaly Detection?

1 code implementation • 20 Apr 2021 • JuneKyu Park, Jeong-Hyeon Moon, Namhyuk Ahn, Kyung-Ah Sohn

From a safety perspective, a machine learning method embedded in real-world applications must be able to distinguish irregular situations.

Anomaly Detection

Restoring Spatially-Heterogeneous Distortions using Mixture of Experts Network

1 code implementation • 30 Sep 2020 • Sijin Kim, Namhyuk Ahn, Kyung-Ah Sohn

Taking a different view on how corruptions can be combined, we introduce a spatially-heterogeneous distortion dataset in which multiple corruptions are applied to different locations of each image (a toy construction is sketched below).

Multi-Task Learning
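To make the dataset idea concrete, the sketch below applies a different corruption to each vertical strip of an image. The strip layout and the `corruptions` callables (e.g., blur, noise, JPEG compression) are assumptions for illustration, not the paper's actual generation protocol.

```python
import numpy as np

def spatially_heterogeneous(image, corruptions, rng=np.random):
    """Apply a different corruption to each vertical strip of an (H, W, C) image.

    `corruptions` is a list of callables that map an image patch to a corrupted
    patch of the same shape; the equal-strip layout is only illustrative.
    """
    h, w, _ = image.shape
    out = image.copy()
    edges = np.linspace(0, w, len(corruptions) + 1).astype(int)   # strip boundaries
    order = rng.permutation(len(corruptions))                     # random corruption per strip
    for i, idx in enumerate(order):
        out[:, edges[i]:edges[i + 1]] = corruptions[idx](out[:, edges[i]:edges[i + 1]])
    return out
```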

SimUSR: A Simple but Strong Baseline for Unsupervised Image Super-resolution

no code implementations • 23 Apr 2020 • Namhyuk Ahn, Jaejun Yoo, Kyung-Ah Sohn

In this paper, we tackle a fully unsupervised super-resolution problem, i.e., one with neither paired images nor ground-truth HR images (a pseudo-pair sketch follows this entry).

Denoising Image Super-Resolution +1
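A common way to train without HR ground truth is to build pseudo pairs from the LR images alone: each LR image acts as a pseudo-HR target and a further downscaled copy serves as the pseudo-LR input. The sketch below illustrates that pseudo-pair idea in PyTorch; it is a hedged sketch, not necessarily SimUSR's exact pipeline.

```python
import torch
import torch.nn.functional as F

def make_pseudo_pair(lr_image: torch.Tensor, scale: int = 2):
    """Build a (pseudo-LR, pseudo-HR) training pair without ground-truth HR images.

    lr_image: (B, C, H, W) low-resolution image available in the unsupervised setting.
    """
    pseudo_hr = lr_image                                            # original LR as the target
    pseudo_lr = F.interpolate(lr_image, scale_factor=1.0 / scale,
                              mode="bicubic", align_corners=False)  # (B, C, H/s, W/s) input
    return pseudo_lr, pseudo_hr
```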

Efficient Deep Neural Network for Photo-realistic Image Super-Resolution

1 code implementation • 6 Mar 2019 • Namhyuk Ahn, Byungkon Kang, Kyung-Ah Sohn

Recent progress in deep learning-based models has improved photo-realistic (or perceptual) single-image super-resolution significantly.

Image Super-Resolution
