no code implementations • 5 Dec 2024 • Donghoon Ahn, Jiwon Kang, SangHyun Lee, Jaewon Min, Minjae Kim, Wooseok Jang, Hyoungwon Cho, Sayak Paul, SeonHwa Kim, Eunju Cha, Kyong Hwan Jin, Seungryong Kim
Observing that noise obtained via diffusion inversion can reconstruct high-quality images without guidance, we focus on the initial noise of the denoising pipeline.
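Here, diffusion inversion means running a deterministic sampler such as DDIM backwards, recovering an initial noise latent that reproduces a given image. Below is a minimal sketch of that reverse loop, assuming a noise-prediction model with the hypothetical signature `model(x, t)` and a precomputed `alphas_cumprod` schedule; it illustrates the general technique, not the paper's implementation.

```python
import torch

@torch.no_grad()
def ddim_invert(model, x0, alphas_cumprod, timesteps):
    """Run the deterministic DDIM update in reverse, mapping a clean
    image x0 toward an initial noise latent (low noise -> high noise).
    `model(x, t)` is assumed to return the predicted noise."""
    x = x0
    # timesteps ascend, e.g. [0, 20, 40, ..., 980]
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]
        eps = model(x, t_cur)  # noise prediction at the current, less noisy step
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # approximate initial noise
```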
no code implementations • 2 Dec 2024 • Wooseok Jang, Youngjun Hong, Geonho Cha, Seungryong Kim
Manipulating facial images to satisfy specific controls such as pose, expression, and lighting, a task known as face rigging, is a complex problem in computer vision.
no code implementations • 23 Aug 2024 • Joonho Lee, JuYoun Son, Juree Seok, Wooseok Jang, Yeong-Dae Kwon
Inconsistent annotations in training corpora, particularly within preference learning datasets, pose challenges in developing advanced language models.
1 code implementation • 10 May 2024 • Joonho Lee, Jae Oh Woo, Juree Seok, Parisa Hassanzadeh, Wooseok Jang, JuYoun Son, Sima Didari, Baruch Gutow, Heng Hao, Hankyu Moon, WenJun Hu, Yeong-Dae Kwon, TaeHee Lee, Seungjai Min
Assessing response quality to instructions in language models is vital but challenging due to the complexity of human language across different contexts.
3 code implementations • 26 Mar 2024 • Donghoon Ahn, Hyoungwon Cho, Jaewon Min, Wooseok Jang, Jungwoo Kim, SeonHwa Kim, Hyun Hee Park, Kyong Hwan Jin, Seungryong Kim
Sampling guidance techniques such as classifier-free guidance (CFG) are often not applicable in unconditional generation or in downstream tasks such as image restoration.
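Such guidance methods share a single extrapolation rule: push the model's prediction away from a deliberately weakened prediction. A minimal sketch of that shared rule (names are illustrative):

```python
import torch

def guided_eps(eps_strong: torch.Tensor, eps_weak: torch.Tensor, scale: float) -> torch.Tensor:
    """CFG-style guidance extrapolation. For CFG, (strong, weak) are the
    conditional and unconditional predictions; perturbation-based guidance
    instead pairs a normal forward pass with a perturbed one, so the same
    rule applies even without any condition."""
    return eps_weak + scale * (eps_strong - eps_weak)
```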
1 code implementation • 5 Feb 2024 • Junyoung Seo, Susung Hong, Wooseok Jang, Inès Hyeonsu Kim, Minseop Kwak, Doyup Lee, Seungryong Kim
We leverage the retrieved asset to incorporate its geometric prior in the variational objective and adapt the diffusion model's 2D prior toward view consistency, achieving dramatic improvements in both the geometry and fidelity of generated scenes.
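Purely as an illustration of how a retrieved asset's geometric prior could enter such a variational objective (an assumption for exposition, not the paper's actual formulation), one might regularize the distillation loss with a geometry term:

```python
import torch.nn.functional as F

def retrieval_augmented_loss(sds_loss, rendered_depth, retrieved_depth, lam=0.1):
    """Hypothetical combination: follow the 2D diffusion prior (sds_loss)
    while keeping rendered geometry close to the retrieved 3D asset's.
    `lam` and the depth-map inputs are illustrative assumptions."""
    return sds_loss + lam * F.mse_loss(rendered_depth, retrieved_depth)
```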
1 code implementation • 17 Oct 2023 • Gyuseong Lee, Wooseok Jang, Jinhyeon Kim, Jaewoo Jung, Seungryong Kim
In this study, we focus on leveraging the knowledge of large pretrained models to better handle OOD scenarios and tackle domain generalization problems.
Ranked #1 on Domain Generalization on Office-Home
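One standard way to leverage a large pretrained model without overwriting its knowledge is parameter-efficient fine-tuning. The sketch below shows a minimal LoRA-style adapter as a representative technique; it is not claimed to be the paper's exact method.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freeze a pretrained linear layer and learn only a low-rank
    residual, preserving the pretrained knowledge."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False      # keep pretrained weights intact
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)   # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```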
no code implementations • 5 Jun 2023 • Sunwoo Kim, Wooseok Jang, Hyunsu Kim, Junho Kim, Yunjey Choi, Seungryong Kim, Gayeong Lee
From the user's standpoint, prompt engineering is a labor-intensive process, and providing a single target word for editing is preferable to writing a full sentence.
1 code implementation • 14 Mar 2023 • Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Hyeonsu Kim, Jaehoon Ko, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim
Text-to-3D generation has progressed rapidly with the advent of score distillation, a methodology that uses pretrained text-to-2D diffusion models to optimize a neural radiance field (NeRF) in the zero-shot setting.
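A minimal sketch of the basic score distillation sampling (SDS) update follows, assuming a differentiable renderer `render_fn` and a frozen noise-prediction model `diffusion` (both hypothetical names); the weighting w(t) and timestep range vary across implementations.

```python
import torch

def sds_step(diffusion, render_fn, text_emb, alphas_cumprod):
    """One SDS update: render, add noise, and use the frozen 2D diffusion
    model's noise residual as a gradient on the rendered pixels. No
    gradient flows through the diffusion model itself."""
    x = render_fn()                                # differentiable render of the NeRF
    t = torch.randint(20, 980, (1,))               # random diffusion timestep
    eps = torch.randn_like(x)
    a_t = alphas_cumprod[t]
    x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * eps  # forward-noise the render
    with torch.no_grad():
        eps_pred = diffusion(x_t, t, text_emb)     # frozen text-to-2D prior
    w = 1 - a_t                                    # a common weighting choice
    x.backward(gradient=w * (eps_pred - eps))      # inject the SDS gradient
```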
1 code implementation • 17 Dec 2022 • Gyeongnyeon Kim, Wooseok Jang, Gyuseong Lee, Susung Hong, Junyoung Seo, Seungryong Kim
Generative models have recently advanced significantly, driven by diffusion models.
5 code implementations • ICCV 2023 • Susung Hong, Gyuseong Lee, Wooseok Jang, Seungryong Kim
Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity.