no code implementations • 17 Jul 2024 • Yong-Hyun Park, Sangdoo Yun, Jin-Hwa Kim, Junho Kim, Geonhui Jang, Yonghyun Jeong, Junghyo Jo, Gayoung Lee
In this paper, we propose Direct Unlearning Optimization (DUO), a novel framework for removing Not Safe For Work (NSFW) content from text-to-image (T2I) models while preserving their performance on unrelated topics.
no code implementations • 17 Jul 2024 • Yong-Hyun Park, Junghoon Seo, Bomseok Park, Seongsu Lee, Junghyo Jo
Identifying the input features that critically influence a model's output is indispensable for the development of explainable artificial intelligence (XAI).
Explainable Artificial Intelligence (XAI)
no code implementations • 2 Apr 2024 • Juno Hwang, Yong-Hyun Park, Junghyo Jo
We demonstrate that upsample guidance can be applied to various models, such as pixel-space, latent-space, and video diffusion models.
no code implementations • 7 Dec 2023 • Juno Hwang, Yong-Hyun Park, Junghyo Jo
In this paper, we introduce "resolution chromatography" that indicates the signal generation rate of each resolution, which is very helpful concept to mathematically explain this coarse-to-fine behavior in generation process, to understand the role of noise schedule, and to design time-dependent modulation.
1 code implementation • NeurIPS 2023 • Yong-Hyun Park, Mingi Kwon, Jaewoong Choi, Junghyo Jo, Youngjung Uh
Remarkably, our discovered local latent basis enables image editing by moving $\mathbf{x}_t$, a point in the latent space of DMs, along a basis vector at specific timesteps.
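The editing operation itself is simple; below is a minimal, hedged sketch (not the authors' released code) of how such an edit could be applied during sampling. `denoise_step`, `edit_at_t`, and `edit_strength` are hypothetical placeholders, and the basis vector `v` is assumed to come from the discovered local latent basis.

```python
import torch

@torch.no_grad()
def sample_with_latent_edit(x_T, v, timesteps, denoise_step, edit_at_t, edit_strength=3.0):
    """Run the reverse diffusion process, shifting the intermediate latent x_t
    along the normalized basis direction v at timestep `edit_at_t`."""
    x_t = x_T
    for t in timesteps:                                  # e.g. reversed(range(T))
        if t == edit_at_t:
            x_t = x_t + edit_strength * v / v.norm()     # move x_t along the basis vector
        x_t = denoise_step(x_t, t)                       # one model-specific reverse update
    return x_t
```

In general, applying the shift at noisier timesteps tends to change coarse attributes of the image, while later timesteps affect finer details.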
no code implementations • 24 Feb 2023 • Yong-Hyun Park, Mingi Kwon, Junghyo Jo, Youngjung Uh
Despite the success of diffusion models (DMs), we still lack a thorough understanding of their latent space.