no code implementations • 2 Apr 2024 • Donghoon Han, Seunghyeon Seo, Eunhwan Park, Seong-Uk Nam, Nojun Kwak
Multimodal and large language models (LLMs) have revolutionized the utilization of open-world knowledge, unlocking new potential across various tasks and applications.
Ranked #1 on Highlight Detection on QVHighlights
no code implementations • 22 Aug 2023 • Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak
Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications.
1 code implementation • CVPR 2023 • Seunghyeon Seo, Donghoon Han, Yeonjin Chang, Nojun Kwak
In this work, we propose MixNeRF, an effective training strategy for novel view synthesis from sparse inputs by modeling a ray with a mixture density model.
3 code implementations • 23 Nov 2021 • Chaerin Kong, Jeesoo Kim, Donghoon Han, Nojun Kwak
Producing diverse and realistic images with generative models such as GANs typically requires large-scale training with a vast amount of images.
no code implementations • 17 Mar 2021 • Saem Park, Donghoon Han, Nojun Kwak
Through experiments, we confirmed the feasibility of the proposed algorithm and suggest the U-Net based Generative Flow as a new baseline for video frame interpolation.