1 code implementation • 5 Mar 2025 • Wonjun Kang, Kevin Galim, Yuchen Zeng, Minjae Lee, Hyung Il Koo, Nam Ik Cho
At every timestep, our method directly modifies the current state, leading to more effective adaptation.
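The idea of intervening on the state itself can be illustrated with a toy linear state-space recurrence where a learnable offset is added to the state at each step. Everything here (the recurrence, the `state_offset` parameter, all names) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

def ssm_step(A, B, h, x, state_offset=None):
    # One step of a linear state-space recurrence: h_t = A h_{t-1} + B x_t.
    # A learnable offset added directly to h_t is a toy stand-in for
    # state-based adaptation (hypothetical, for illustration only).
    h_new = A @ h + B @ x
    if state_offset is not None:
        h_new = h_new + state_offset
    return h_new

rng = np.random.default_rng(0)
d, n = 4, 2
A = rng.standard_normal((d, d)) * 0.1
B = rng.standard_normal((d, n))
h = np.zeros(d)
x = rng.standard_normal(n)
offset = rng.standard_normal(d)

base = ssm_step(A, B, h, x)
tuned = ssm_step(A, B, h, x, state_offset=offset)
assert np.allclose(tuned - base, offset)  # offset acts on the state directly
```

Because the offset enters the recurrence itself, it influences every subsequent state through `A`, which is the intuition behind adapting the state rather than only the weights.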
1 code implementation • 11 Oct 2024 • Kevin Galim, Wonjun Kang, Yuchen Zeng, Hyung Il Koo, Kangwook Lee
In contrast, LoRA remains effective for SSM-based models.
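For context, LoRA freezes the base weight W and learns a low-rank update BA, so the adapted layer computes x(W + (α/r)BA)ᵀ. The sketch below is the generic LoRA parameterization, not the paper's SSM-specific implementation; the scaling convention and all variable names are assumptions:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    # Frozen base weight W plus trainable low-rank update B @ A,
    # scaled by alpha / r (the common LoRA convention).
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d, r = 8, 4
W = rng.standard_normal((d, d))          # frozen base weight
A = rng.standard_normal((r, d)) * 0.01   # r x d, small random init
B = np.zeros((d, r))                     # d x r, zero init: update starts at 0
x = rng.standard_normal((1, d))

# With B = 0 the adapted layer is identical to the frozen base layer.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Only A and B (2·d·r parameters per layer) are trained, which is what makes LoRA parameter-efficient regardless of whether the underlying block is attention- or SSM-based.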
1 code implementation • 14 Mar 2024 • Wonjun Kang, Kevin Galim, Hyung Il Koo
Diffusion models have achieved remarkable success in the domain of text-guided image generation and, more recently, in text-guided image editing.
1 code implementation • 2 Feb 2024 • Yuchen Zeng, Wonjun Kang, Yicong Chen, Hyung Il Koo, Kangwook Lee
The evolution from Large Language Models (LLMs) to Multimodal Large Language Models (MLLMs) has spurred research into extending In-Context Learning (ICL) to its multimodal counterpart.
no code implementations • 30 Jun 2023 • Wonjun Kang, Kevin Galim, Hyung Il Koo, Nam Ik Cho
In this paper, we present a method to improve diffusion models so that they accurately produce the correct object count based on the input prompt.
no code implementations • 26 May 2022 • Wonjun Kang, Geonsu Lee, Hyung Il Koo, Nam Ik Cho
The goal of face reenactment is to transfer a target expression and head pose to a source face while preserving the source identity.
1 code implementation • 12 Jun 2019 • Junho Jo, Hyung Il Koo, Jae Woong Soh, Nam Ik Cho
For training our network, we develop a cross-entropy-based loss function that addresses the imbalance problem.