Search Results for author: Hyung Il Koo

Found 7 papers, 5 papers with code

Eta Inversion: Designing an Optimal Eta Function for Diffusion-based Real Image Editing

1 code implementation • 14 Mar 2024 • Wonjun Kang, Kevin Galim, Hyung Il Koo

Diffusion models have achieved remarkable success in the domain of text-guided image generation and, more recently, in text-guided image editing.

Image Generation • Text-Guided Image Editing

Can MLLMs Perform Text-to-Image In-Context Learning?

1 code implementation • 2 Feb 2024 • Yuchen Zeng, Wonjun Kang, Yicong Chen, Hyung Il Koo, Kangwook Lee

The evolution from Large Language Models (LLMs) to Multimodal Large Language Models (MLLMs) has spurred research into extending In-Context Learning (ICL) to its multimodal counterpart.

Image Generation • Image to Text • +1

Counting Guidance for High Fidelity Text-to-Image Synthesis

no code implementations • 30 Jun 2023 • Wonjun Kang, Kevin Galim, Hyung Il Koo, Nam Ik Cho

In this paper, we present a method that improves diffusion models so that they produce the correct object count specified in the input prompt.

Denoising • Object • +1

One-Shot Face Reenactment on Megapixels

no code implementations • 26 May 2022 • Wonjun Kang, Geonsu Lee, Hyung Il Koo, Nam Ik Cho

The goal of face reenactment is to transfer a target expression and head pose to a source face while preserving the source identity.

Face Reenactment • Facial Editing • +2
