Search Results for author: Hyung Il Koo

Found 5 papers, 3 papers with code

Eta Inversion: Designing an Optimal Eta Function for Diffusion-based Real Image Editing

1 code implementation • 14 Mar 2024 • Wonjun Kang, Kevin Galim, Hyung Il Koo

A commonly adopted strategy for editing real images involves inverting the diffusion process to obtain a noisy representation of the original image, which is then denoised to achieve the desired edits.

Image Generation • Text-Guided Image Editing

Can MLLMs Perform Text-to-Image In-Context Learning?

1 code implementation • 2 Feb 2024 • Yuchen Zeng, Wonjun Kang, Yicong Chen, Hyung Il Koo, Kangwook Lee

The evolution from Large Language Models (LLMs) to Multimodal Large Language Models (MLLMs) has spurred research into extending In-Context Learning (ICL) to its multimodal counterpart.

Image Generation • In-Context Learning

Counting Guidance for High Fidelity Text-to-Image Synthesis

no code implementations • 30 Jun 2023 • Wonjun Kang, Kevin Galim, Hyung Il Koo

In this paper, we propose a method to improve diffusion models to focus on producing the correct object count given the input prompt.

Denoising • Object • +1

One-Shot Face Reenactment on Megapixels

no code implementations • 26 May 2022 • Wonjun Kang, Geonsu Lee, Hyung Il Koo, Nam Ik Cho

The goal of face reenactment is to transfer a target expression and head pose to a source face while preserving the source identity.

Face Reenactment • Facial Editing • +2
